Not only is it not constant time, it's not even polynomial - it's psuedo-polynomial. Also it'll fail on negative numbers, right? You'll need something like `10000 * log(time - min(time) + 1)` to be linear in the bits used to represent the inputs.
> In computational complexity theory, a numeric algorithm runs in pseudo-polynomial time if its running time is a polynomial in the numeric value of the input (the largest integer present in the input)—but not necessarily in the length of the input (the number of bits required to represent it), which is the case for polynomial time algorithms.
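Concretely, that shifted, log-scaled sleep might look something like the sketch below (my own illustration, not from the article; the `logSleepSort` name, the `Chan`-based collection, and the 10000-microsecond scale factor are just placeholder choices). Note that squashing values through a log means distinct large values land on nearly identical delays, so this is even less likely to sort correctly than plain sleep sort:

```
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Monad (forM_, replicateM)

-- Sketch only: shift every value so the minimum maps to 1, then sleep for a
-- time proportional to the bit length of the shifted value. The wall-clock
-- wait is then roughly linear in the bits of the largest input rather than
-- in its numeric value.
logSleepSort :: [Int] -> IO [Int]
logSleepSort [] = return []
logSleepSort xs = do
  out <- newChan
  let lo = minimum xs
  forM_ xs $ \x -> forkIO $ do
    threadDelay (round (10000 * logBase 2 (fromIntegral (x - lo + 1) :: Double)))
    writeChan out x
  replicateM (length xs) (readChan out)
```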
> Not only is it not constant time, it's not even polynomial
You understand that this is part of the joke, right?
If we really want to get into the details and kill the joke: you don't actually need to wait in real time. Computational complexity is concerned with steps in a computational model, not with how much time passes on a clock. Sleep sort relies on OS scheduler properties, and in a virtual-time environment the clock simply advances to the next scheduled event. That's what brings you back to actual polynomial complexity, if you take that kind of scheduler as your computational model.
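Taken seriously, that virtual-time model is easy to write down (a minimal sketch of my own, not from the article; `virtualSleepSort` is a made-up name). All the "waiting" collapses into repeatedly picking the earliest pending wake-up, i.e. a plain selection sort:

```
import Data.List (delete)

-- "Sleep sort" under a simulated clock: each element schedules a wake-up at
-- virtual time equal to its value, and the scheduler jumps straight to the
-- earliest pending event. No real time passes, and the work is an ordinary
-- polynomial: O(n^2) here, or O(n log n) with a heap-backed event queue.
virtualSleepSort :: [Int] -> [Int]
virtualSleepSort [] = []
virtualSleepSort pending = next : virtualSleepSort (delete next pending)
  where
    next = minimum pending  -- advance the virtual clock to the next wake-up
```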
> - it's psuedo-polynomial.
If you lecture people then please at least get your spelling right.
I still don't get it. Sleep sort still needs O(n) Haskell-steps, while, for example, sorting in bash is just 2 bash-steps: calling it and receiving the output.
I fail to see the joke, really. I only see false and nonsensical statements, which could still be funny or interesting, but I don't see how.
The "constant time" is wall clock time, where it will be at least the biggest value times the constant microseconds plus like 5% for overhead like the garbage collector.
Complexity analysis deals with asymptotic cases where N gets arbitrarily large. Sleepsort would take the same wall time to sort [1,2] and [1,2,1,2] - but it would presumably take somewhat longer to sort an array of 1e10 ones and twos, because it's not a constant time algorithm.
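As a back-of-envelope model (my own illustration; the function name, the overhead parameter, and the 10000-microsecond unit are all made up): the wall time is roughly a wait proportional to the largest value plus a per-element cost for forking and collecting threads, and once the value range is fixed only the second term keeps growing with n:

```
-- Illustrative cost model only: wall time ~ wait for the largest value plus
-- a per-element overhead term. For a fixed value range the first term is
-- constant, but the n-dependent term still grows, so it isn't O(1).
estimatedWallTime :: Double -> Int -> Int -> Double
estimatedWallTime perElemOverhead maxValue n =
  fromIntegral maxValue * sleepUnit + perElemOverhead * fromIntegral n
  where
    sleepUnit = 10000  -- microseconds per unit of value, as in the sketch above
```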
On the joke part, sleepsort is intrinsically an extremely funny concept and I think everyone here gets that. But "constant time" has a rigorous/pedantic definition, which sleepsort doesn't meet, so I think for some readers calling it that kills the joke (in the same sort of way that it would kill the joke if TFA's code snippets used lots of invalid syntax).
I'd never seen sleepsort before, so I thought it was funny. ;-)
I like the idea of (ab)using the scheduler to sort numbers.
Now I'm inspired to make my own "sorting" routine, maybe touching filenames and then sorting them with `ls -v`, or rendering triangles and then ripping them off in z-plane order, or stuffing frames into a video and then playing it back.
Surely any pseudo-polynomial problem can be done in P; it's just a matter of encoding? As in, if you give me a box that computes whatever in pseudo-polynomial time, then I can give you back a box that takes a single input consisting of each value written out as that many 1s, delimited by 0s, turns those back into integers (linear), uses your pseudo-polynomial box, and returns its output - but now it's polynomial in the length of my input?
(I said integers but I don't think that's significant, just a matter of encoding scheme - we can use a single 0 for decimal point and two for delimiting inputs say.)
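For concreteness, the front end of that unary box might look like this (my own sketch; `encodeUnary` and `decodeUnary` are made-up names, and it only handles non-negative integers). Decoding is a single linear pass, so a running time that is polynomial in the numeric values is automatically polynomial in the (now enormous) encoded input length:

```
-- Unary encoding as described above: each value v becomes v ones followed by
-- a terminating zero.
encodeUnary :: [Int] -> [Int]
encodeUnary = concatMap (\v -> replicate v 1 ++ [0])

-- Decoding is one linear pass over the encoded input.
decodeUnary :: [Int] -> [Int]
decodeUnary = go 0
  where
    go _ []       = []            -- assumes well-formed, zero-terminated input
    go n (0 : xs) = n : go 0 xs   -- a zero closes the current value
    go n (_ : xs) = go (n + 1) xs -- each one adds 1 to the current value

-- e.g. decodeUnary (encodeUnary [3, 0, 2]) == [3, 0, 2], but encoding
-- [1000000] already takes a million and one symbols.
```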
But anyway isn't the joke that sleep-time doesn't count, because the computer could be doing something else? It's actually quite compelling in a 'this is stupid, I love it' sort of way.
You can indeed make the distinction between polynomial and pseudo-polynomial stop existing by requiring that inputs be given in unary. But you haven't made anything faster, you've just made almost all of them worse - unary inputs are exponentially longer than the usual binary encoding.
> In computational complexity theory, a numeric algorithm runs in pseudo-polynomial time if its running time is a polynomial in the numeric value of the input (the largest integer present in the input)—but not necessarily in the length of the input (the number of bits required to represent it), which is the case for polynomial time algorithms.
https://en.m.wikipedia.org/wiki/Pseudo-polynomial_time