You edited your comment; it originally asked what the speedup on GPUs is (which I provided).
>GPUs are not nondeterministic Turing machines
For small problem sizes that's exactly what they are (or any multicore CPU, for that matter).
>The other is to imagine that the machine "branches" into many copies, each of which follows one of the possible transitions. Whereas a DTM has a single "computation path" that it follows, an NTM has a "computation tree".
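The quoted "computation tree" picture can be made concrete. Below is a minimal sketch (my own illustration, not from the thread) that simulates nondeterministic branching by breadth-first search over configurations: `delta` returns a *set* of successors rather than a single one, and the machine accepts if any branch does.

```python
# Sketch: simulating an NTM's "computation tree" by breadth-first
# search over configurations. A DTM's delta would return one successor
# (a single computation path); here delta returns a set of successors,
# and we accept if any branch reaches an accepting configuration.
from collections import deque

def ntm_accepts(start, delta, accepting, max_steps=1000):
    """Explore the computation tree level by level (BFS)."""
    frontier = deque([start])
    for _ in range(max_steps):
        if not frontier:
            return False  # every branch halted without accepting
        next_frontier = deque()
        for config in frontier:
            if config in accepting:
                return True  # some branch accepts
            next_frontier.extend(delta(config))  # branch into all choices
        frontier = next_frontier
    return False

# Toy machine: from "state" n, nondeterministically move to n+1 or n+2;
# accept if any branch reaches exactly 5.
delta = lambda n: {n + 1, n + 2} if n < 5 else set()
print(ntm_accepts(0, delta, accepting={5}))  # True
```

Note that the frontier can grow exponentially with depth, which is exactly why a machine with a fixed number of cores cannot in general keep up with it.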
An NTM is one which can run arbitrarily many branches in parallel. So parallel processors are not NTMs, since they can only run a fixed number of branches in parallel.
It's true that for small problems they are indistinguishable, but in the context of discussing big O notation that is irrelevant.
For the purposes of computing asymptotic time complexity, whether the algorithm is run on a 1 core system or an M core system is usually irrelevant.
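To make the constant-factor point concrete, the standard argument (a sketch, writing $T(n)$ for the sequential running time and $M$ for a fixed core count) is:

$$T_{\text{par}}(n) \;\ge\; \frac{T(n)}{M}, \qquad \frac{1}{M}\,T(n) \in \Theta\!\left(T(n)\right) \text{ for any fixed } M,$$

so an $M$-core machine changes the constant, not the asymptotic class. Only if the number of parallel branches can grow with $n$, as in an NTM's computation tree, can the complexity class itself change.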
You're completely missing the point (again): the complexities (even asymptotic ones) of sequential algorithms don't carry over to parallelizations of those algorithms.
http://www.ijcee.org/vol9/949-E1621.pdf
https://en.wikipedia.org/wiki/Nondeterministic_Turing_machin...