
I'm responding to your second example simply because it's easy to argue about. I'd say that both proofs you have presented are equivalent ways of saying "since summing all the numbers from 1 to N gives you N(N+1)/2, therefore the sum of the numbers from 1 to N is N(N+1)/2".

Now, this argument may appear trite, but do consider that both of your proofs essentially do the same thing: the first sums the numbers from the extremities inward, and the second sums 1...N-1 first and then adds the last term. I'd argue that if addition were not commutative, you might have obtained different results.




If two programs are equivalent, you can typically show that they're equivalent with a sequence of small refactorings. Replace `x + x` with `2 * x`. Inline that function call. Etc.
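For instance, here's a hypothetical chain of tiny, behavior-preserving rewrites in Python (the function and its contents are made up purely for illustration):

    # Step 0: the original program
    def f(xs):
        return sum(x + x for x in xs)

    # Step 1: replace `x + x` with `2 * x`
    def f(xs):
        return sum(2 * x for x in xs)

    # Step 2: hoist the constant factor out of the sum
    def f(xs):
        return 2 * sum(xs)

Each step is small enough to be obviously correct, and the chain as a whole shows that the first and last versions are equivalent.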

Can you do that with these two proofs? What's a proof that's halfway in between the two?

If you can get from one proof to the other with small "refactorings", then I agree that they're fundamentally the same. If you can't---if there's an insurmountable gap that you need to leap across to transform one into the other---then I'd call them fundamentally different. If you insist that two proofs are "essentially the same thing" despite having this uncrossable gap between them, then I suspect you're defining "essentially the same" to mean "proves the same thing", which is a stupid definition because it makes all proofs the same by fiat, and avoids the interesting question.


But can you not? Write the sum as 1, ..., (N+1)/2, ..., N, with (N+1)/2 as the middle term.

Have you not actually built the same proof via induction in both cases, with one of them starting from the middle and then including the left and right terms one unit further away at each step (you actually do it in reverse, but the crux still holds)? Define S[i] so that S[0] gives you (N+1)/2 and S[(N-1)/2] gives you the total sum.

The argument would be that S[i] = (2i+1)(N+1)/2, only you'd be proving it by induction, i.e. given S[i-1], finding S[i].
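Spelled out under that indexing (and assuming N is odd, so the middle term (N+1)/2 is an integer), the induction step would be something like:

  S[0] = (N+1)/2

  S[i] = S[i-1] + ((N+1)/2 - i) + ((N+1)/2 + i) = S[i-1] + (N+1)

which gives S[i] = (2i+1)(N+1)/2, and in particular S[(N-1)/2] = N(N+1)/2.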

All that matters for this problem is that, to prove it, you somehow have to add up all of the numbers. The "different" proofs you presented are actually the same, since for addition the order of operations does not matter, thanks to the associative and commutative properties. A good question would be to see whether any of the proofs remains valid when either of these properties is removed from the operation.


You're handwaving, but I think there is a middle ground between these two proofs:

Sum(i=1..n, i)

= Sum(i=1..n/2, i) + Sum(i=1..n/2, n+1-i)

= Sum(i=1..n/2, n+1)

= (n/2)(n+1)

= n(n+1)/2
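As a quick sanity check of that pairing step (a throwaway sketch, taking n even so the split into n/2 pairs is exact):

    # Pair i with n+1-i over the first n/2 indices and check that
    # the paired sum equals 1 + 2 + ... + n, i.e. n(n+1)/2.
    for n in range(2, 101, 2):
        paired = sum(i + (n + 1 - i) for i in range(1, n // 2 + 1))
        assert paired == sum(range(1, n + 1)) == n * (n + 1) // 2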

I'm still interested in the general question, of whether some proofs have big gaps between them. The more complex the proofs, the more obvious this would be; my examples are unfortunately simple. Something like proving the fundamental theorem of algebra using Rouche's Theorem (complex analysis) vs. field theory. But I don't know enough math to compare those.



