Defining zero factorial (johndcook.com)
35 points by tacon on April 11, 2015 | 34 comments



How would you find (n-1)! given n!? (Think for a moment before moving on.)

Dividing by n of course, (n-1)!=n!/n. This immediately extends the notion of factorials backwards to 0!=1!/1.

It's only a matter of checking which factorial-dependent formulas (usually arising in the context of combinatorics or the gamma function) still fit - and it turns out (luckily) that everything fits.

Edit: Even if it turned out that some things didn't fit, we would still be free to define 0! as 1 as a convention. But the fact that everything fits makes it a useful definition - it spares you a lot of edge cases, eliminating the need for many ``if (k == 0) ...`` branches in code.
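
A minimal sketch of that last point, assuming a naive factorial-based binomial coefficient (the name binomial is just illustrative):

    from math import factorial

    def binomial(n, k):
        # C(n, k) = n! / (k! * (n - k)!)
        # Because 0! == 1, the edge cases k == 0 and k == n fall out
        # of the same formula -- no `if (k == 0)` branch needed.
        return factorial(n) // (factorial(k) * factorial(n - k))

    assert binomial(5, 0) == 1
    assert binomial(5, 5) == 1
    assert binomial(5, 2) == 10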


> How would you find (n-1)! given n!?

If you want to think a little more:

You are given a value v, which you are told is n! for some n, which you are not told. Find a general algorithm to produce (n-1)!.

I have an algorithm which, I fear, may not have the most efficient running time.


If you have n! in binary, count the number of zeros at the end of its binary expansion and call this count k. The number k+1 approximates n relatively closely (the error n-k is the number of ones in n's binary expansion—see http://www.cut-the-knot.org/blue/LegendresTheorem.shtml).
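
For what it's worth, a quick numerical check of that identity (sketch only; the helper trailing_zero_bits is hypothetical):

    from math import factorial

    def trailing_zero_bits(v):
        # Number of trailing zero bits = exponent of 2 dividing v.
        return (v & -v).bit_length() - 1

    for n in range(1, 20):
        k = trailing_zero_bits(factorial(n))
        # Legendre's theorem: the exponent of 2 in n! is n minus the number
        # of ones in n's binary expansion, i.e. n - k == popcount(n).
        assert n - k == bin(n).count("1")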


By that logic, (-1)! = 0!/0 = 1/0 = infinity, no?


Well, nobody has ever said that 1/0 will get you infinity, because it won't. Interpreted directly, it's an illegal operation, and there is no result because the computation is impossible. Sure enough, the gamma function doesn't exist at nonpositive integers.

But wait! If you interpret 1/0 in a limit sense... there is no result, because there's no constraint that the neighborhood of 0 is entirely positive or entirely negative (obviously, the neighborhood of 1 is entirely positive). You'd only get an answer of "infinity" if you were limiting a function like "1 / 0^2".

Sure enough, we see that the gamma function has no limit at the nonpositive integers, always approaching positive infinity from one side and negative infinity from the other side. So the factorial of a negative integer doesn't exist. What point are you trying to make?


> Sure enough, we see that the gamma function has no limit at the nonpositive integers, always approaching positive infinity from one side and negative infinity from the other side.

On the complex plane, there is more than one side to approach from. And the convention used when I learned complex analysis (stereographic projection) is that there is only one value of infinity, which can be approached from many directions. Under this model, the point at infinity is much better behaved than the points at infinity in real analysis.

The Gamma function is characterized as having poles at the nonpositive integers. That's a very straightforward description, and the behavior is well understood and relatively easy to deal with - certainly easier to deal with than e^(1/x) as x -> 0 (that singularity is essential, rather than a pole).


Yep, it sure does. The magnitude of Gamma(x) grows without bound as x approaches any nonpositive integer along the real axis. The nonpositive integers (and the point at infinity) are also the only points where Gamma(x) has a singularity.
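
A quick numerical illustration of that two-sided blow-up near a pole, sketched with the standard library's math.gamma:

    import math

    # Near x = -1, gamma blows up with opposite signs on the two sides,
    # so there is no (real) limit there.
    print(math.gamma(-1 + 1e-6))   # large negative
    print(math.gamma(-1 - 1e-6))   # large positive

    # At the nonpositive integers themselves the value is undefined:
    # math.gamma(-1) raises ValueError.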


For a second there it looked like you were saying that (n-1) does not equal n!/n.


The way I know P!=NP is that N=(P-1)! Of course P=NP if P=0 or N=1. So who knows. :)


In school it was explained through combinatorics, where the factorial tells you, among other things, in how many ways you can order a set. And an empty set has just one order - but it does have one - in which it can be arranged. Now that I'm writing this, I'm losing confidence that this explanation is mathematically correct...


That is the usual explanation, but I don't think that helps much since counting the number of ways to arrange nothing is confusing!


Not as confusing as you'd think, though, if you invoke the rule of product.

The number of ways to do A and then B is the number of ways to do A times the number of ways to do B. The number of ways to do nothing and then A should be the same as the number of ways to do A. This implies that the number of ways to do nothing is the multiplicative identity, one.
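
For what it's worth, the standard library agrees (a small sketch using itertools.permutations):

    from itertools import permutations
    from math import factorial

    # There is exactly one way to arrange nothing: the empty arrangement.
    assert list(permutations([])) == [()]
    assert len(list(permutations([]))) == 1 == factorial(0)

    # Rule of product: ways("nothing, then A") == ways("A").
    assert len(list(permutations([]))) * len(list(permutations("abc"))) == len(list(permutations("abc")))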


Isn't it just a special case of the idea that the product of an empty sequence is 1?


Almost, but not quite. You could say "n! is the product of the natural numbers <= n." By this definition 0! is an empty product because there are no natural numbers <= 0.

However, this same reasoning would say (-1)! = 1, (-2)! = 1, etc. And while you could define negative factorials that way, it's better in practice to leave negative factorials undefined.
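
A sketch of how that naive "product of naturals <= n" reading behaves in code (the name naive_factorial is just illustrative):

    from math import prod

    def naive_factorial(n):
        # Product of 1..n. For n == 0 the range is empty, so the product
        # is 1 -- the desired 0! == 1. But the range is also empty for
        # negative n, silently giving 1 where the factorial is better
        # left undefined (math.factorial raises ValueError instead).
        return prod(range(1, n + 1))

    assert naive_factorial(0) == 1
    assert naive_factorial(-3) == 1   # arguably a bug, per the comment above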


This is really a different issue. You'd be extending the domain of the factorial beyond what it's traditionally been used for (just as if you extended it to numbers outside the naturals). That you can do so for some sets (via the gamma function) doesn't matter, because the factorial also stands on its own. And if you extend it, it's not the factorial function anymore, which is defined to have the non-negative integers as its domain.

But conceptually the primary use case in combinatorics is for permutations, i.e. $n! = |S_n|$, which only makes sense for $n \ge 0$. You could even make an argument for $0! = 1$ because $|S_0|$ should be 1 on its own (there is exactly one permutation of the empty set), without saying that it makes formulas easier. (This, I note again, matters regardless of what the gamma function does.)


That's just because we need to define the set more gracefully: "n! is the product of the first n natural numbers".

The negative factorial issue arises from a slightly awkward specification of the set, not from jameshart's core idea.


That same reasoning would also say that π! is 6, which is not really a useful result, so we are probably best looking for a different definition.

There's also some controversy about whether "there are no natural numbers <= 0"...


Could you then modify the definition to "for all n >= 0, n! is the product of the natural numbers <= n"? That sounds kind of like the recursive definition "f(0) = 1; f(n) = f(n-1) * n, n > 0".
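
Written out as code, that recursive reading is just (a minimal sketch; the base case is exactly where 0! = 1 enters by definition):

    def fact(n):
        # f(0) = 1; f(n) = f(n - 1) * n for n > 0
        if n == 0:
            return 1
        return fact(n - 1) * n

    assert fact(0) == 1
    assert fact(5) == 120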


I tried to fit a simpler function to the (logarithm of) factorial function with symbolic regression: http://i.imgur.com/9UJCDxT.png?1 They all seem to fit it fairly well, and they all suggest that 0! should be close to one.

This is hardly rigorous mathematics, but I find it interesting that it converges to the same prediction, given that I gave it no bias toward that answer.


Interesting! What software is that?


It's Eureqa: http://www.nutonian.com/products/eureqa/ which is fantastic for what it does.


For fun you can try out non-integral factorial values. For example, type in (1/2)! into WolframAlpha or Google, and you'll find the answer is the square root of pi over 2.

This is due to the gamma function extension: https://www.youtube.com/watch?v=QhDDpSju3uY
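
You can check those half-integer values against the gamma extension, n! = Gamma(n + 1), with nothing more than the standard library (sketch only):

    import math

    # (1/2)! = Gamma(3/2) = sqrt(pi) / 2
    assert math.isclose(math.gamma(0.5 + 1), math.sqrt(math.pi) / 2)

    # (-1/2)! = Gamma(1/2) = sqrt(pi)
    assert math.isclose(math.gamma(-0.5 + 1), math.sqrt(math.pi))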


See also Emil Artin's elegant pamphlet, "The Gamma Function".


It's awesome and weird that there's a consistent and useful function that lets you define an equivalent to (-1/2)! -- and that what you get is the square root of pi.


Of course it has pi in it. Otherwise it'd have to have e in it.


I agree. The Stirling formula is another place where the square root of pi pops out from factorials.


The much simpler answer is that this is the special case of the empty product being 1 (or more precisely, the neutral element of the monoid on which your multiplication operates).

The empty product is 1 so that $\prod_{x \in S \setminus T}x\prod_{x \in T}x = \prod_{x \in S}x$ holds for all $T \subseteq S$.

It's the exact same reason why the empty sum is zero (zero being the neutral element of a monoid using additive notation).

n! is commonly defined as the product of all positive integers less than or equal to n. For n = 0, this is the product of the empty set.
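
In code terms, Python's math.prod and the built-in sum encode exactly these conventions (sketch only):

    from math import prod

    # Empty product = multiplicative identity; empty sum = additive identity.
    assert prod([]) == 1
    assert sum([]) == 0

    # The splitting identity prod(S \ T) * prod(T) == prod(S), taken with
    # T empty, only holds because the empty product is 1.
    S = [2, 3, 5]
    assert prod([]) * prod(S) == prod(S)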


I thought this video explained it very well: https://www.youtube.com/watch?v=Mfk_L4Nx2ZI


0! is whatever makes sense in context, like any other definition.

Arguing about what it should be "in general" is treating math like reality television, and it makes you dumber.

0! isn't a big controversy, but the idiocy surrounding 0^0 (which is usefully different in combinatorics vs. in analysis) is mind-boggling.


Defining 0^0 = 1 is just as important in analysis as it is in combinatorics, even though analysis texts aren’t always clear that they’re making use of this fact.

• If p(x) = ∑[n=0..∞] a_n x^n is a power series, then p(0) = a_0 is its constant term, but it makes no sense to write this unless 0^0 = 1.

• The power rule d/dx [x^n] = n x^(n−1) holds for n = 1 and x = 0, but this requires 0^0 = 1.
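
Concretely, this is the convention built into ordinary polynomial evaluation; Python, like most languages, takes 0**0 to be 1 (sketch only):

    assert 0**0 == 1

    # Evaluating p(x) = sum(a[n] * x**n) at x = 0 must return a[0];
    # that works only because the n = 0 term is a[0] * 0**0 == a[0].
    a = [7, 3, 2]                       # p(x) = 7 + 3x + 2x^2
    p = lambda x: sum(c * x**n for n, c in enumerate(a))
    assert p(0) == a[0] == 7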

The reason that some people believe it’s important to undefine 0^0 in analysis is that the limiting expression

lim[x → a] f(x)^g(x)

does not necessarily exist when lim[x → a] f(x) = lim[x → a] g(x) = 0. But all this means is that we have to draw a distinction between the _value_ 0^0, which equals 1, and the indeterminate _limiting form_ 0^0, which is an abbreviation for the above type of limit.

It is not uncommon for values to evaluate differently from the corresponding limiting forms. For example, the value floor(0) equals 0, but the limiting form floor(0) is indeterminate. It may seem surprising that such a discrepancy arises for exponentiation, but all it means is that exponentiation is discontinuous at (0, 0), as it must be.

(Note however that the above limit _does_ exist with mild conditions on f and g: if f, g are complex analytic functions with f not identically zero, then the limit equals 1.)

See also Donald Knuth’s _Two notes on notation_: http://arxiv.org/abs/math/9205211.


Defining 0^0=1 has issues. Example: a(n)=1/exp(n), b(n)=1/n. Then both a and b converge to 0, but a^b is always exp(-1) (and hence does not converge to 1).
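
A quick numerical check of that example (sketch only; moderate n to avoid floating-point underflow):

    import math

    # a(n) = exp(-n) -> 0 and b(n) = 1/n -> 0, yet a(n)**b(n) is
    # identically exp(-1), so the *limiting form* 0^0 is indeterminate.
    for n in (1, 5, 25, 100):
        assert math.isclose(math.exp(-n) ** (1 / n), math.exp(-1))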


What you have shown is that the _limiting form_ 0^0 is not always equal to 1. The _value_ 0^0 is still equal to 1.

Similarly, floor(−1/n) converges to −1, which tells us that the _limiting form_ floor(0) is not always equal to 0; but the _value_ floor(0) is still equal to 0. Nobody uses this to argue that the value of floor(0) should be undefined or context-dependent.


Because floor is a discontinuous function.

But you are right - what I meant is that there is no way to define 0^0 while maintaining continuity of the power function. Why is this important? Because the power function is continuous otherwise.


Similarly, there is no way to define floor(n) for integers n maintaining continuity of the floor function, even though floor is a continuous function otherwise. We still define floor(n) = n because the meaning of the floor function is more important than its continuity. And so it is with exponentiation at (0, 0).



