the comment you are replying to explains that already, right above the code
the question of memory access time is quite deep
in shane's example of flood-filling a 2048 × 2048 image in 16ms, the image is 16 mebibytes; if you have less ram than that, so that you have to page in and out (as your comment says you do), you are not going to be able to do the computation fast enough. but my hand-me-down cellphone has 256 times that much ram so i guess you're using a pentium pro 200 or something, in which case you aren't going to be able to run a browser that supports canvas
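spelling out the arithmetic behind that 16-mebibyte figure (a back-of-envelope sketch; the 4 bytes per pixel is the rgba layout you get from a canvas ImageData buffer):

    // back-of-envelope numbers for the 2048 × 2048 case, assuming
    // 4-byte rgba pixels (canvas ImageData layout); nobody's real code
    const width = 2048, height = 2048, bytesPerPixel = 4;
    const imageBytes = width * height * bytesPerPixel;   // 16,777,216 bytes
    console.log(imageBytes / (1024 * 1024));             // 16 MiB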
assuming you're using a computer from the current millennium, one with enough ram to run a browser that supports canvas, you won't need to do any paging, but you do need to think about dram bandwidth. the required bandwidth to dram is probably about 2 gigabytes per second (one full read and one full write of those 16 mebibytes in 16 ms), while typical current ddr4 systems provide on the order of 30 or 40 gigabytes per second. that's assuming your l2 cache is smaller than those 16 mebibytes, so you really do go to dram, and that you can keep the memory access reasonably sequential. sequential access is usually what you get with a simple line-by-line flood-fill algorithm, like the one gw-basic used 40 years ago, but there are pathological images where it is not
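here's roughly what i mean by a line-by-line (scanline) fill, as a minimal sketch in javascript; the Uint32Array-of-packed-pixels layout and all the names are my assumptions for illustration, not shane's actual code:

    // minimal scanline flood fill sketch; assumes `pixels` is a width*height
    // Uint32Array of packed rgba values (layout and names are illustrative)
    function floodFill(pixels, width, height, x0, y0, fillColor) {
      const target = pixels[y0 * width + x0];
      if (target === fillColor) return;          // nothing to do, avoid looping forever
      const stack = [[x0, y0]];
      while (stack.length > 0) {
        const [x, y] = stack.pop();
        let left = x;
        // walk left to the start of this horizontal run of target-colored pixels
        while (left > 0 && pixels[y * width + left - 1] === target) left--;
        let spanUp = false, spanDown = false;
        // fill the run rightward, seeding new runs in the rows above and below
        for (let i = left; i < width && pixels[y * width + i] === target; i++) {
          pixels[y * width + i] = fillColor;
          const above = y > 0 && pixels[(y - 1) * width + i] === target;
          if (above && !spanUp) { stack.push([i, y - 1]); spanUp = true; }
          else if (!above) spanUp = false;
          const below = y < height - 1 && pixels[(y + 1) * width + i] === target;
          if (below && !spanDown) { stack.push([i, y + 1]); spanDown = true; }
          else if (!below) spanDown = false;
        }
      }
    }

the inner loop walks whole horizontal runs of pixels, which is what keeps the dram traffic mostly sequential on ordinary images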
consider a 1-pixel-black/1-pixel-white square double spiral that fills the full 4 mebipixels. whether a fill started in the white covers all of the white or only half of it depends on every single black pixel, so algorithms that can do this with better locality and a reasonable amount of parallelism are inherently going to be relatively hairy
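i'm not going to try to draw the double spiral in a comment, but here's a simpler pattern in the same spirit, a serpentine maze where all the white is one 1-pixel-wide path; the names are again just illustrative:

    // build a serpentine test image: odd columns are solid black walls except
    // for one white gap pixel, alternately at the bottom and the top, so the
    // white pixels form a single 1-pixel-wide path snaking through the image.
    // (a milder pathological case than the double spiral, but nearly every
    // horizontal run the fill sees is a single pixel, and whether the far end
    // gets filled depends on every one of those gap pixels staying open.)
    function makeSerpentine(width, height, black, white) {
      const pixels = new Uint32Array(width * height).fill(white);
      for (let x = 1; x < width; x += 2) {
        const gapY = x % 4 === 1 ? height - 1 : 0;   // alternate which end is open
        for (let y = 0; y < height; y++) {
          if (y !== gapY) pixels[y * width + x] = black;
        }
      }
      return pixels;
    }

feed that to the scanline fill above and the nice long runs degenerate into a couple of million one-pixel runs, with the seed stack snaking up and down columns, so the access pattern stops being sequential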
but images like that cause problems for shane's existing algorithm too, because they will make the web worker spin for a long time