Hacker News

Like the other comment alludes to, it's not constant; it's a function of the block size and the size of the matrices.

http://www.ijcee.org/vol9/949-E1621.pdf
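To illustrate what "block size" means here, a minimal sketch of tiled (blocked) matrix multiplication, where the tile width `b` is a tunable parameter; the function name and the tiny 2x2 inputs are illustrative, not from the linked paper:

```python
# Tiled (blocked) matrix multiplication.  The tile width b is a tuning
# parameter, which is why the achievable speedup depends on both b and
# the matrix size n rather than being a single constant.
def matmul_blocked(A, B, b):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, b):                  # tiles of C's rows
        for jj in range(0, n, b):              # tiles of C's columns
            for kk in range(0, n, b):          # tiles along the shared dim
                # multiply one pair of b-by-b tiles into C's tile
                for i in range(ii, min(ii + b, n)):
                    for j in range(jj, min(jj + b, n)):
                        s = 0.0
                        for k in range(kk, min(kk + b, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] += s
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_blocked(A, B, 1))   # [[19.0, 22.0], [43.0, 50.0]]
```

The result is the same for any valid `b`; only the memory-access pattern (and hence the constant factor on real hardware) changes.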

You edited your comment. It originally asked what the speedup on GPUs is (which I provided).

>GPUs are not nondeterministic Turing machines

For small problem sizes that's exactly what they are (as is any multicore CPU, for that matter).

>The other is to imagine that the machine "branches" into many copies, each of which follows one of the possible transitions. Whereas a DTM has a single "computation path" that it follows, an NTM has a "computation tree".

https://en.wikipedia.org/wiki/Nondeterministic_Turing_machin...
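The "computation tree" picture from the quoted article can be sketched with a deterministic simulation: at each step every branch splits, and the machine accepts iff any branch accepts. A hypothetical toy example (subset sum, names my own, not from the thread):

```python
# Deterministic simulation of a nondeterministic machine's computation
# tree.  At each input number every branch splits in two: one copy skips
# the number, the other adds it.  The NTM accepts iff ANY branch hits
# the target; the deterministic simulation must visit all 2^n branches.
def ntm_subset_sum(nums, target):
    branches = [0]                            # one partial sum per branch
    for x in nums:
        branches = branches + [s + x for s in branches]   # every branch splits
    return target in branches                 # accept iff some branch accepts

print(ntm_subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5 = 9
```

The branch count here grows as 2^n, which is why a machine with a fixed number of cores can't stand in for an NTM: it can cut that count only by a constant factor.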

An NTM is one that can explore arbitrarily many branches in parallel. So parallel processors are not NTMs, since they can only run a fixed number of branches in parallel.

It's true that for small problems they are indistinguishable, but in the context of discussing big O notation that is irrelevant.

For the purposes of computing asymptotic time complexity, whether the algorithm runs on a 1-core or an M-core system is usually irrelevant: a fixed M contributes at most a constant factor, which O notation absorbs.


> for small problem size that's exactly what they are

O notation does not apply to small problems. It is strictly concerned with asymptotic behavior.


You're completely missing the point (again). The point is that the complexities (even asymptotic) of sequential algorithms don't apply to parallelizations of those algorithms.
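A standard illustration of that point (my example, not from the thread): summing n numbers takes O(n) sequential steps, but a tree reduction finishes in O(log n) parallel rounds if you assume one processor per pair:

```python
def parallel_sum_steps(xs):
    """Tree reduction: each round adds adjacent pairs 'in parallel'."""
    xs = list(xs)
    rounds = 0
    while len(xs) > 1:
        # All pairs within a round are independent, so with enough
        # processors the whole round costs one parallel time step.
        xs = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)] \
             + ([xs[-1]] if len(xs) % 2 else [])
        rounds += 1
    return xs[0], rounds

total, rounds = parallel_sum_steps(range(16))
print(total, rounds)   # 120 4  -- log2(16) rounds, not 16 steps
```

The caveat raised elsewhere in the thread still applies: this O(log n) round count assumes the processor count grows with n; on a fixed number of cores the asymptotics revert to the sequential class.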


Only for an infinite number of parallel processors?
