> Computing f(0)=0; f(n)=f(n-1) is O(n log n) without tail calls because you need O(log n) addresses to hold your stack frames.
There are two principal ways of applying asymptotic analysis to algorithms: time or memory used. In both, your procedure is O(n) without TCO. With TCO it is O(n) for runtime (though further optimization would reduce it to O(1) since it's just the constant function 0, but TCO alone doesn't get us there) and O(1) for space since it would reuse the same stack frame.
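For concreteness, here's a minimal sketch in C of the procedure under discussion, plus the loop a TCO-performing compiler effectively rewrites it into (function names are mine; C compilers aren't required to do this, but gcc and clang at -O2 typically will):

```c
#include <stdint.h>

/* The procedure under discussion: f(0) = 0; f(n) = f(n - 1).
   The recursive call is in tail position. */
uint64_t f(uint64_t n) {
    if (n == 0) return 0;
    return f(n - 1); /* tail call: without TCO, O(n) stack frames */
}

/* What TCO effectively turns it into: O(n) time, O(1) stack space.
   (Further optimization could collapse the whole thing to `return 0;`,
   but TCO alone doesn't get us there.) */
uint64_t f_loop(uint64_t n) {
    while (n != 0) n -= 1;
    return 0;
}
```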
What O(log n) addresses do you need to hold the stack frames, when without TCO there are O(n) stack frames needing O(n) addresses (and with TCO it is, again, O(1) for memory)?
Also, regarding "without tail calls", your example already has tail calls. What do you mean by that?
If we don't treat almost all integers in an algorithm as fixed size, then the analysis gets really messy and annoying in a way that has nothing to do with real computers.
And if the algorithm actually did anything with the value that made it grow or shrink with the recursion, the TCO version would stop being O(n) under such a framework. This only works because it's passing 0 around on every iteration. And this probably already applies to the TCO version's flow control, depending on how you expect to run it.
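To illustrate with a hypothetical variant (mine, not from the thread): suppose the function doubled an accumulator on the way down, g(n, acc) = g(n - 1, acc + acc). That's still a tail call, but with arbitrary-precision integers step k touches a roughly k-bit number, so the total work is quadratic in bits, not linear. A runnable sketch that just tallies the bit-cost instead of doing the arithmetic:

```c
#include <stdio.h>
#include <stdint.h>

/* Tally the bit-cost of n doubling steps: the accumulator gains one
   bit per step, and adding a w-bit number to itself touches ~w bits.
   Total comes to n(n+1)/2 -- O(n^2), despite the tail call. */
uint64_t bit_cost_of_doubling(uint64_t n) {
    uint64_t width = 1; /* bits in the accumulator, starting from acc = 1 */
    uint64_t cost = 0;  /* total bits touched across all steps */
    for (uint64_t i = 0; i < n; i++) {
        cost += width;
        width += 1;
    }
    return cost;
}

int main(void) {
    /* 1000 steps -> 500500 bit operations, not ~1000 */
    printf("%llu\n", (unsigned long long)bit_cost_of_doubling(1000));
    return 0;
}
```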
Regardless, the comment I replied to is fundamentally confused (it presented a tail-recursive algorithm and said it didn't have tail calls, and presented a linear algorithm that uses a linear amount of memory and claimed it's O(n log n) for some reason, with no clarification of whether that's in time or space). I'd rather hear from the person I responded to than whatever guesses the rest of us can come up with, because it needs several points of clarification to be understood.
> If we don't treat almost all integers in an algorithm as fixed size, then the analysis gets really messy and annoying in a way that has nothing to do with real computers.
I guess nobody does numerical computing or distributed systems with real computers anymore, huh.
Wrong way around. Almost all the numbers in those systems are fixed-size. There are a few types of variable-width payload, but all the little ancillary types are 64 bits or whatever.
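A sketch of the shape being described (a hypothetical wire format, not anything from the thread): one variable-width payload, fixed-size everything else:

```c
#include <stdint.h>

/* Hypothetical wire-format message: the payload is variable-width,
   but every ancillary field is a fixed 32 or 64 bits. */
struct message {
    uint64_t sequence_number; /* fixed: 64 bits */
    uint64_t timestamp_ns;    /* fixed: 64 bits */
    uint32_t payload_len;     /* fixed: 32 bits */
    uint8_t  payload[];       /* the one variable-width part */
};
```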
> There are two principal ways of applying asymptotic analysis to algorithms: time or memory used.
It is usually both, but I meant time, because to be able to address your stack frame you need a stack pointer that can take as many distinct values as your nesting depth, so it must be Ω(log n) bits wide.
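Concretely: a pointer that can distinguish n stack frames needs at least ceil(log2(n)) bits, which is where the per-operation log factor comes from. A minimal sketch of that count (the function name is mine):

```c
#include <stdint.h>

/* Minimum pointer width, in bits, needed to distinguish n_frames
   stack frames: ceil(log2(n_frames)). */
unsigned min_pointer_bits(uint64_t n_frames) {
    unsigned bits = 0;
    while (n_frames > 1) {
        n_frames = (n_frames + 1) / 2; /* halve the frame count, rounding up */
        bits++;
    }
    return bits;
}
```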
It is easy to dismiss this as irrelevant because your integers are usually fixed-width, but then you'd need to parameterize your algorithm on the size of input you're willing to handle (at which point you're no longer doing asymptotic analysis), since arbitrary-precision arithmetic really does work this way normally.
> Also, regarding "without tail calls", your example already has tail calls. What do you mean by that?
I mean "without tail call optimization", or, if you're particularly keen on getting into a pedant-off, "with the function call in the tail position not implemented as a tail call".