> The idea was to build something fun that never shows adverts to them or tricks them into sneaky purchases by “accident”.
This part really resonated with me. Google promotes the absolute worst spammy shit these days. I have to browse tech forums to find clean simple games.
Last week I was trying to find a simple memory cards game and the search results were complete dog shit so I started building my own memory cards game. In a couple hours I made https://memorycardsgame.com/. Then browsing a svelte forum I found https://toddler-games.com which is much further ahead of mine but I'd never be able to find stuff like this by searching for it. Of course none of the good games are filled with a bunch of SEO spam which is kind of the point.
Since Steve Jobs didn't let his kid use an iPad we've decided the game isn't needed now.
An unsuspected (to me!) place for good kid friendly games, free of ads and in-app purchases or other predatory patterns, is Apple Arcade on iPad.
My 5 y/o is having great fun with the Crayola app and Cooking Mama, and, honestly, between each round I keep waiting for an ad to pop up, a nag screen to pay for stuff or leave a review, a countdown, or some other crap we're accustomed to in mobile games... and no. Nothing. Just the full game and nothing else. It's a refreshing experience.
Thanks - that toddler-games.com is really nice. It's so simply done that even a two year old could enjoy it for hours. My kids are 5 and 7 now so they want something a bit more advanced, but I wish I'd found that a few years ago.
It's a great read. It bothered me a little that there was an ad at the bottom of the page when it started with a passage about avoiding ads. But I share the sentiment and am especially triggered when those ads are in front of my kids. I'm an adult and got over that feeling quickly, and it is a terrific post.
If you're referring to my blog, sorry yeah I use the free Wordpress plan and they stick those ads there for themselves :-(. The app has no ads though, which is important when it's for kids
Very nice! A bit of feedback: when you quickly click many cards, they are open at the same time, which makes it trivial to see pairs. So maybe you want to hold off opening any new cards until the last pair is closed.
Nice! Small bit of feedback: when you click “restart” after winning, the emojis are replaced with new ones right before the flip animation plays, which briefly reveals the solution. I’m assuming that’s a bug and not an intentional hint, as it doesn’t happen the first time.
I moved to Vivaldi browser on mobile which has a built in ads/trackers blocker (how is that not a built in feature in any browser? Oh wait) and said goodbye to ads forever (ublock on PC ofc).
A few weeks before I made the move I started getting goatse style shock sites in google ads, some kind of leg disease, horrible stuff.
I was looking for a colouring app two days ago but couldn't find a decent one. https://toddler-games.com seems to be something my nieces would love. Thanks.
Author here: I'm loving all the suggestions of ways that this might be achievable in either a faster or simpler method! The code is open source at https://github.com/shaneosullivan/example-canvas-fill and I'd love to see you hackers improve on it and share it with everyone. Any and all improvements I'll happily use going forward in my app.
I guess you are doing the same thing and memoizing the graph ahead of time. Perhaps find an optimized implementation in wasm and save yourself some maintenance time.
The fill algorithm is far from being properly optimized:
1. Use pointIdx instead of a string for fill keys. That's going to be way faster and more memory efficient
2. There's no need to have separate added and visited. Just keep added and continue checking it before adding to the queue, and don't delete added entries
3. Even better, don't even use added but instead just check the output image data
4. Compute pointIdx incrementally by adding or subtracting 1 or width, only do a multiplication for the first point
5. Don't keep y coordinates in the loop, instead store the minimum and maximum pointIdx and then divide at the end to find the y coordinates (you still need to keep x to compute the x minimum and maximum if you need it, unless you can make the image have a power of two stride, in which case you can just mask pointIdx to compute x)
6. Use a black border instead of doing boundary checks
7. The % 5000 check is horribly slow since division is horribly slow. Use a power-of-two size and a bitwise and to be safe
8. Doing an update every 5000 pixels seems absurdly low. CPUs do billions of operations per second and a pixel should not take more than 100 CPU cycles with proper optimization, so even at around 100 fps you should be able to do in the magnitude of 100K pixels per frame, which means you shouldn't even need to do it incrementally or pre-process. If you really need incrementality you should use both a pixel number check and a timer and adaptively double or halve the pixel number if the timer check gives a too low or high time elapsed.
9. Using "chunks of pixels" is probably faster than single pixels, and using horizontal groups of chunks is even better, although that requires significantly more code
10. WebAssembly, with SIMD if available, is going to be faster than JavaScript
In general, this code has clearly not been written by someone experienced in writing optimized code, so I assume the pre-processing can also be greatly optimized (if it's even necessary after optimizing the filling code).
Also, there is no need to limit to 256 areas since you can use multiple images and all channels to store larger integers.
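Here's a rough sketch of what points 1-4 might look like, assuming RGBA ImageData, a fill colour that differs from the target colour, and an image padded with a 1-pixel border that never matches the target so no bounds checks are needed (the names are illustrative, not from the actual repo):

    function floodFill(data, width, x, y, fr, fg, fb) {
      const start = y * width + x;                      // pointIdx for the clicked pixel
      const tr = data[start * 4], tg = data[start * 4 + 1], tb = data[start * 4 + 2];
      if (tr === fr && tg === fg && tb === fb) return;  // fill colour == target: nothing to do
      const queue = [start];                            // plain integer indices, no string keys
      while (queue.length) {
        const idx = queue.pop(), off = idx * 4;
        if (data[off] !== tr || data[off + 1] !== tg || data[off + 2] !== tb) continue;
        data[off] = fr; data[off + 1] = fg; data[off + 2] = fb; data[off + 3] = 255;
        // "visited" is just the output image itself; neighbours are computed incrementally
        queue.push(idx - 1, idx + 1, idx - width, idx + width);
      }
    }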
I wonder if assembling an array of paths for outlines may improve things. You can check for a point being inside a path, use paths as hit areas, and you can apply fills. Notably, there wouldn't be any need for keeping various mask images in memory anymore. (You wouldn't want any curves in these paths, just pixel outlines. It may become a bit tricky with complex paths and negative shapes, though, which may be addressed by composing a mask image on-the-fly.)
a potential drawback is that it seems like the paths could be several times larger than the original image (consider a checkerboard pattern, where each pixel is its own fill area, or the pathological spiral pattern i suggested in a comment below)
Assuming you currently recompute the whole thing on every edit, it might be fairly straightforward to speed it up by reusing the results of the previous run. Areas which do not overlap the edited area shouldn't need to be recalculated. Bounding boxes would be a quick way to test overlap.
Having checked the website https://kidzfun.art, it looks like you compute the masks once, and don't recalculate anything upon edits.
If you would like to dynamically adjust the masks, the following algorithm should be O(sqrt(N)) lines of JavaScript executed for realistic cases:
Split the image up into monochromatic regions of at most 1000-4000 pixels (you want the regions to be roundish rather than long and stringy, so use breadth first search to create them). Then keep track of a graph of these regions with edges between adjacent regions. For a 2000x2000 pixel image, the graph should contain ~4000 nodes, so finding a monochromatic component would not take an appreciable amount of time.
Updating the graph after a fill operation is straightforward - just relabel the color associated with each changed region. The only point I'm uncertain of is whether canvas operations are fast enough that each region can have its own mask and up to 4000 draws as described in the blog post don't take too long. The individual masks wouldn't be very big, and the total number of pixels to be colored isn't any different to the blog post, so it may be possible.
If something is drawn with the pen tool, then every region it touches would need to be recalculated. Assuming the pen tool is circular and not substantially larger than 1000 pixels itself, this should be a fast operation as it won't overlap too many large regions and there are few enough regions we can iterate through all of them every frame for a bounding box test.
There are cases where this goes wrong - for example if the regions end up being very stretched. If there are 1000 1-pixel wide vertical lines, and the drawing tool allows a 1000 pixel horizontal line to be drawn in 1 frame, the algorithm described looks at a million pixels (possibly 4 times each as it checks for adjacency), which might be too slow. However, since this is aimed at a touchscreen, I'd be impressed if your children can and want to draw accurately enough to cause a problem. Further optimisations could work around this problem (and other problems such as having lots of 1-pixel regions that would bloat the graph).
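A minimal sketch of the region-graph part of this, assuming regions maps a region id to its colour and adj maps a region id to the ids of adjacent regions (all names are illustrative):

    function fillRegion(regions, adj, startId, newColor) {
      const oldColor = regions[startId];
      if (oldColor === newColor) return [];
      const component = [], stack = [startId], seen = new Set([startId]);
      while (stack.length) {
        const id = stack.pop();
        if (regions[id] !== oldColor) continue;   // neighbour of a different colour: stop here
        component.push(id);
        for (const n of adj[id]) if (!seen.has(n)) { seen.add(n); stack.push(n); }
      }
      for (const id of component) regions[id] = newColor;  // relabel the whole component
      return component;  // caller redraws each region's small mask in the new colour
    }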
devit's suggestions are also worth incorporating.
Since this algorithm only becomes interesting with edits, it's not really something I could implement as a pull request to the GH repo.
You could do instant flood-fill on a slow PC in the early 90s. There is no need to precompute this. The possible issues are:
1. You're using a slow algorithm. This is almost certainly the case, looking at how slowly it runs and at the order in which pixels get painted. The Wikipedia page on flood fill is enough to find a good algorithm.
2. Possibly, the overhead of individual canvas operations is high, so setting pixels one at a time is slow. If this is the case, it would be partially ameliorated by using a span-based algorithm, but you could also run the algorithm on an offscreen byte array and then blit the result to canvas.
I would bet money that a 90s state-of-the-art algorithm running in JavaScript on an offscreen array will be perceived as instantaneous on a modern computer.
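Something like this sketch, where the canvas calls are the real API and floodFill stands in for whatever array-based fill you pick (span-based or queue-based):

    const ctx = canvas.getContext('2d', { willReadFrequently: true });
    function fillAt(x, y, r, g, b) {
      const img = ctx.getImageData(0, 0, canvas.width, canvas.height);  // one read
      floodFill(img.data, canvas.width, x, y, r, g, b);                 // fill on the byte array
      ctx.putImageData(img, 0, 0);                                      // one blit back
    }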
Note that you lose 1-2 orders of magnitude just from the screen resolution (in the early nineties we used 640x480, sometimes even 320x200, vs 4K nowadays). Another order of magnitude is lost by using a more high level programming language, which sure, you could probably optimize down to a factor of 2-3. Plus sublinear scaling of stuff like memory latency since the 90s. All in all I agree we should be able to do better, but it is by no means trivial.
If you are right that would be amazing! The demo code is open source, please send a PR with such a flood fill implemented.
The requirement is that when the user clicks on a 2k x 2k image it takes 50ms or less, running on a cheap Android tablet. If a better algorithm, perhaps implemented in wasm, can achieve this, I’d love to discard all my complex code and just use it
I love that everybody commenting here emphatically insists that you must be doing it wrong, but real-world solutions are nowhere to be found. I admire your commitment to keeping a positive tone, but you'd be justified in feeling some frustration.
I do hope a PR shows up in the next few days so the internet can stop having this debate and move on with a good library.
There are fast segment based fill algorithms from the seventies or very early eighties in the literature and reprinted in the first volume of Graphics Gems. They work very well and do not require the call stack or amount of computation of simpler methods.
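For the curious, a rough sketch of that style of span filling, with hypothetical match(i) / set(i) helpers that test and paint the pixel at index i:

    function spanFill(width, height, x, y, match, set) {
      const stack = [[x, y]];
      while (stack.length) {
        let [cx, cy] = stack.pop();
        let i = cy * width + cx;
        if (!match(i)) continue;
        while (cx > 0 && match(i - 1)) { cx--; i--; }   // walk to the left end of the run
        let seedUp = false, seedDown = false;
        while (cx < width && match(i)) {                // paint the run, seeding rows above/below
          set(i);
          if (cy > 0) {
            const up = match(i - width);
            if (up && !seedUp) { stack.push([cx, cy - 1]); seedUp = true; }
            if (!up) seedUp = false;
          }
          if (cy < height - 1) {
            const down = match(i + width);
            if (down && !seedDown) { stack.push([cx, cy + 1]); seedDown = true; }
            if (!down) seedDown = false;
          }
          cx++; i++;
        }
      }
    }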
Ha, painting tablet app feels like a rite of passage for programmer parents. Here's mine from that era https://paints.netlify.app/
Where it got really interesting later was when one of my kids asked how I made it, how it worked, how they could add new features. So it grew a bunch of adorable haphazard hacks that my oldest implemented.
Very cute! I started out with basically this, and my kids kept asking me for more and more features :-). Recently they asked to make animations, and that took a chunk of time to get right. I'm looking forward to whatever they ask for as they grow more advanced!
Ha, guilty (https://safari-sketch.com)! Was also annoyed by an iPad game my son played that tried to extract money out of him every play session. It served as a nice excuse to learn some technologies I wouldn't otherwise play with.
That’s really essential, even if only for the clumsy thumb left on the screen with a naive grip. Being able to finger paint with all ten fingers is a bonus.
Yes, ignoring the palm resting on the screen is essential. Doing that along with allowing zooming and pen input takes some creative thinking. It's non-trivial.
> Ha, painting tablet app feels like a rite of passage for programmer parents.
Only the first parents should have had to make it. The fact that one of these good home grown ones is not at the top of any search engine is a failure of the internet.
When it comes to performance, I find you really need to experiment with HTML Canvas. Some of the interfaces can be hundreds to thousands of times slower for manipulating the canvas than others.
It's not just the Canvas API that surprises, it's also how browser JS engines choose to implement those APIs. What works well in Chrome can often cause grief in Firefox or Safari. Working with canvas elements today often feels like frontend development back in the 2000s
the summary is that they preprocess the image to partition it into a disjoint set of fillable regions, but i feel like this maybe has to be redone whenever you mutate the image
maybe other optimization approaches would yield a flood-fill algorithm that just runs fast enough in js; wasm can surely do it
like i feel like you can probably loop over the pixels in a scan line until you encounter an obstacle at several tens of megapixels per second
as the dumbest possible test of js perf, this code (admittedly not accessing an imagedata object) gets about 250 million iterations per second on my 2.6 gigahertz palmtop in firefox
function tri(n) { let x = 0, y = n; while(y--) x += y; return x }
s = new Date(); console.log(tri(10_000_000)); console.log(new Date() - s)
A proper flood fill needs to iterate over an enormous ImageData object (or at the very least a huge 2d array) and page in/out memory as appropriate. Your code appears to just increment a variable. This is not a valid comparison.
the comment you are replying to explains that already, right above the code
the question of memory access time is quite deep
in shane's example of flood-filling a 2048 × 2048 image in 16ms, the image is 16 mebibytes; if you have less ram than that, so that you have to page in and out (as your comment says you do), you are not going to be able to do the computation fast enough. but my hand-me-down cellphone has 256 times that much ram so i guess you're using a pentium pro 200 or something, in which case you aren't going to be able to run a browser that supports canvas
assuming you're using a computer from the current millennium, one which has enough ram to run a browser that supports canvas, you won't need to do any paging, but you do need to think about dram bandwidth. probably the required bandwidth to dram is 2 gigabytes per second (one read and one write), while typical current ddr4 systems provide on the order of 30 or 40 gigabytes per second. that assumes you can keep the memory access reasonably sequential (if your l2 cache is less than those 16 mebibytes). this is usually the case with a simple line-by-line flood-fill algorithm, like the one used by gw-basic 40 years ago, but there are pathological images where it is not
consider a 1-pixel-black/1-pixel-white square double spiral that fills the full 4 mebipixels. whether a fill in the white covers all the white or just half of it depends on every single black pixel, so algorithms that can do this with better locality and a reasonable amount of parallelism are inherently going to be relatively hairy
but images like that cause problems for shane's existing algorithm too because they will make shane's web worker spin for a long time
If a wasm algorithm can run in under 16ms for arbitrarily large images that would be amazing. Hard to beat pre-computing all possible fills however, as the user perceives it as instant.
If you’ve seen a really fast wasm solution I’d love to replace the fill portion with it though.
I think people in this thread are being much too optimistic about browser performance. Writing a loop isn't the issue; memory access patterns are. Canvas implementations are just not well suited for the flood fill problem space. (I spent some time with flood fills + canvas ten years ago when I was working on the now-dead library Literally Canvas, but I assume that experience is pretty far out of date now.)
Precomputing fill areas is a really good idea for the situation you're in!
actually on further examination i think each four iterations of the loop paint one pixel in the most common case, because each pixel in a large filled area gets visited from each of its neighbors
btw, i want to point out that shaneos's request for an 'algorithm [that] can run in under 16ms for arbitrarily large images' is obviously constructed to be impossible
no flood-fill algorithm can run in less than 16ms, or less than any finite time, for arbitrarily large images; they necessarily require at least linear work and, even with arbitrary parallelism (itself impossible in a finite universe), logarithmic time
you need to bound the image size if you are going to bound the execution time
so this is not a good faith request; it is, rather, a deliberately impossible task, like spinning straw into gold, finding an acre of land between the salt water and the beach, making a cambric shirt with no seams or needlework, or reaping a crop with a sickle of leather
unfortunately my previous comment pointing this out has been flagged and is therefore generally no longer visible; i consider this sort of response to a purely logical argument of impossibility to be astoundingly mean-spirited and contemptible
No, JS and Canvas aren't the issue, there; the code you wrote will perform terribly in any language on any platform. You used a four-way recursive per-pixel fill algorithm. You should probably do a bit of reading on flood fill algorithms (maybe read the Wikipedia flood fill article).
it isn't recursive in the sense of a function calling itself, just in the mathematical sense of using its own previous output as input
i agree that it's easy to better its performance substantially, but i wouldn't go so far as to say 'perform terribly', except in the sense that it's far from optimal. there are applications where it would be adequate
There are situations where bubble-sort is adequate, too, but it's still not a good idea to use it or claim that it performing badly reflects on the implementation language. The code uses an explicit stack in place of recursion, sure, but that's not particularly salient; it's just recursion without stack overflow.
The code allocates data structures at least eight times per pixel, and tests pixels at least four times. It will perform badly in all languages and so cannot be used as a reliable indication of JS/Canvas performance.
agreed, except that the reason it's never a good idea to use bubble sort is that insertion sort is just as simple and always performs much better; there are cases where insertion sort really is the right thing to use despite its usually quadratic performance
there might even be a case where the single-loop sort is the right thing to use, despite its abominable performance, because it's simpler than bubble sort or insertion sort; i think it's 12 amd64 instructions, 40 bytes
for (i = 1; i < n; i++)
if (a[i-1] > a[i]) t = a[i], a[i] = a[i-1], a[i-1] = t, i = 0;
You're right, the code is not performant at all. At the time I was hacking around and just happy to get something working. I think the code is a tempting distraction, but I was trying to say that browser API performance is not always what you would expect and sometimes you need to get creative in order to get the results you want.
okay but i think we aren't being too optimistic about browser performance, memory access patterns aren't the problem, and canvas implementations are adequately well suited for the flood fill problem space, which directly contradict three assertions you did actually say, even if they weren't what you were trying to say
You can probably make the process extremely fast by replacing the flood-fill approach with something based on a Connected-Component Labeling algorithm ( https://en.wikipedia.org/wiki/Connected-component_labeling ). There's a good amount of literature on the subject and it's often included in computer vision libraries (e.g. OpenCV).
You'd first threshold the image by checking if a pixel is 'close enough' to the clicked pixel. This will create a B&W image. Then you call the CCL on this binary image to produce a labelled image: each pixel will contain the id of its component (i.e. which enclosed space it belongs to). Once you have this, it's just a matter of replacing colors on the canvas for pixels with the same id as the clicked pixels.
Obviously, that's just the 'theory' part. If you can find a good library that includes one then you'll have to integrate it. Otherwise, you'll have to re-implement that in Javascript, which is easier said than done and may also not reach the desired performance.
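To make the labelling step concrete, here's a minimal sketch (a plain stack-based labelling, not the optimised two-pass CCL you'd get from OpenCV), where mask is a Uint8Array of 0/1 values from the thresholding step:

    function labelComponents(mask, width) {
      const labels = new Int32Array(mask.length);  // 0 = background / unlabelled
      let next = 0;
      for (let start = 0; start < mask.length; start++) {
        if (!mask[start] || labels[start]) continue;
        const id = ++next, stack = [start];
        while (stack.length) {
          const p = stack.pop();
          if (!mask[p] || labels[p]) continue;
          labels[p] = id;
          const x = p % width;
          if (x > 0) stack.push(p - 1);
          if (x < width - 1) stack.push(p + 1);
          if (p >= width) stack.push(p - width);
          if (p + width < mask.length) stack.push(p + width);
        }
      }
      return labels;  // recolouring is then one pass over pixels whose label matches the click
    }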
Thanks, I’ll take a look at the article. This generally sounds very similar to my pre processing algorithm, setting the id of the enclosing area into each pixel so a fast/instant operation can be applied. Am I missing something though? In my solution, the only thing that happens when the user clicks is a fillRect() call. Is your suggestion similarly fast on click?
I think the main difference is that the CCL would compute 'enclosed areas' on the fly. To be more specific, it would first create a mask (black & white) of pixels that are similar to the clicked pixel, and then find connections between foreground/white pixels.
I don't work in web development, but accelerating CCL algorithms happens to be my area of expertise. While their execution time depends on the image content, you can expect them to be in the tens of milliseconds for a 4K image, at least for the 'modern' ones you can find in OpenCV, on current consumer-level CPUs, without multi-threading.
Of course, implementing these newer algorithms isn't straightforward. That's why you should instead use something like OpenCV if you have the option to.
Moreover, if CCL doesn't fit what you're looking for then you can also check out watershed algorithms ( https://en.wikipedia.org/wiki/Watershed_%28image_processing%... ). Because they don't work on a binary image, they might be a better way to describe what you call 'enclosed space'.
One visible difference would be that if there are two identically-colored overlapping objects, the article's method will treat them as two separate areas, whereas the algorithm mentioned above will treat them as one.
There are faster fill algorithms than the above, described on Wikipedia, that don't need you to visit the whole image [1]. In particular, span-based filling.
Not trolling. I haven’t played with wasm at all. Perhaps if it can fill a 2k x 2k pixel image super fast it would be indistinguishable from my solution. If someone had coded this and open sourced it I’d love to use it
probably fill rates of 240 megapixels per second are going to be hard to reach consistently in javascript but 60 (1k × 1k) seems likely achievable without much in the way of tricks
I've tried writing a fast flood fill in JS and it's a lot more challenging than you and GP make it out to be. If it's really so straightforward, you should produce code. Otherwise, these comments don't add much to the discussion.
i'd be surprised if someone could do it within the editing window for an hn comment, but it doesn't seem like more than a day or two of work for double-digit megapixel fill rates
I wasn’t calling per pixel :-) even when just visiting each pixel once it would still take 50 - 100ms to fill a large image which the user perceives as lag. I wanted it to be instant
Surprised "jump flooding" hasn't been mentioned yet, which is the common technique to accelerate this sort of thing (used for color fills, signed distance field construction, outlines, etc.).
Jump-flooding is inexact and can jump over significant edges. The runtimes claimed are also incorrect - as it sets every pixel, it is O(N^2) in the image dimensions, just like flood-filling. If you're working with an image and setting pixels, you cannot avoid being O(N^2) in the dimensions of the filled area.
Jump-flooding is an algorithm suited for GPUs where you run operations for all pixels "simultaneously" which is why the complexity is often given in number of iterations and not number of operations. Naive flood fill fits this model badly and a GPU implementation would have a complexity of O(N^3) as you expand your fill region border by only one pixel width each iteration so you will need O(N) iterations for sufficiently large regions but you need to look at ALL pixels every time. You could force a CPU-like computation where you keep a stack of visited pixels to expand out from but that would actually end up slower for all reasonable sizes on the GPU unless your regions are much smaller than the image.
You can also modify the algorithm to not jump over edges if that matters to you, sacrificing its advantage in edge cases where there are thin connections between regions.
I've played with and implemented a range of area fill algorithms in the past. Optimized span-filling (dating back to the 1990's) works very well, is fast, buffer friendly and delivers great results. Subtleties surface when you have to consider filling around dithered edges and partially transparent elements. Fun stuff.
Why does it need separate mask images? Couldn't you use the indices in the alpha layer to do the flood-fill by looping over every pixel in the image and changing the colour if the index matches? It would save memory, right? That per-pixel loop should be fast enough, since there's just one pass and it's simple enough any JS JIT ought to be able to optimise it.
The speed is achieved by using the fillRect() command with globalCompositeOperation. This is a single draw operation rather than a per pixel algorithm while the user waits. Anything more involved than this would, I think, be slower as perceived by the user
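For anyone curious what that kind of masked fill can look like, here's a rough sketch assuming each enclosed region has its own mask canvas whose opaque pixels mark the fillable area (the names and the exact composite mode are illustrative, not necessarily the app's actual code):

    function paintRegion(mainCtx, maskCanvas, color) {
      const mctx = maskCanvas.getContext('2d');
      mctx.globalCompositeOperation = 'source-in';  // keep new paint only where the mask is opaque
      mctx.fillStyle = color;
      mctx.fillRect(0, 0, maskCanvas.width, maskCanvas.height);
      mainCtx.drawImage(maskCanvas, 0, 0);          // a single draw onto the visible canvas
    }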
Makes me wonder if this could be even faster, if you cheated and just generated copies of the masks already colored, for every color available. From looking at the video, I see you have 10-11 colors available at any given moment with possibility of changing them to, presumably, arbitrary RGB value. That means you only need to keep 11 color variants of each mask, which should still fit in RAM. You also need to be able to replace a single group of masks with a new set of masks faster than the user can switch colors in the custom color picker.
(I know, it's a dumb idea in this case; I'm posting it as a reminder to both myself and everyone else, that many problems can be solved by checking or caching every possible state.)
Sure, but at any given moment, they can use only ~12 of them (+ the rainbow thing, now that I played a bit and know how it works). Meaning, you can keep 12 pre-filled variants of each mask, one for each color on the main palette, and recolor all masks of a given color when the user picks a new one from the top-left color picker. I bet you can do this update faster than they can dismiss the color picker and tap on the canvas.
I see you have drawing tools there as well, but they seem to be ignored by flood filling (e.g. if I draw a closed circular border myself and use flood-fill on the empty centre, the fill will just paint over my circle border and continue filling to the boundaries of the initial region of the original image) - so they don't even make a counterpoint I was worrying they would.
It seems that doing this per-pixel loop on the CPU in JS is slow, but it's not too bad: the first click might take 300ms, subsequent clicks are just 100ms. While this is about ten times slower than I was hoping, it's still fast enough to seem responsive to most users. This is also at 4K resolution, because I wanted a realistic worst case. It's much faster at 1080p since it has four times fewer pixels to work with.
Very nice - yes that's quite quick and definitely uses less memory. When I tested it on an older iPad and on an Android device the difference in speed between the two approaches is a bit more obvious. Watching my kids bash away at this and expect it to keep up makes me more willing to use extra memory to do so. All that said though, I really like your implementation, it would likely be the right choice for a lot of use cases
Thanks! By the way, I realised just now I was leaving a bit of performance on the table: getImageData() is slow, but I don't need to do it on click, I can do it just once after loading the image. Changing that gets it to be consistently under 100ms for clicks after the first one.
This is very clever! I found out about the canvas composite modes far too late in an animated ASCII art project.
IMO, the canvas APIs are a little too esoteric for their own good. It’s almost like a stateful object, but all the commands are executed as side effects. Isn’t front end fun?
Please have a look at my project and let me know what you think! The source code is fully annotated for another brave soul who’s working with the dark arts of high performance canvas rendering.
I tried the usual route using getImageData() and am getting between 50 ms for a small region and 100 ms for the full 2048 x 2048 canvas on an 8th generation mobile i5 in Chrome. In Firefox it is 50 ms and 150 ms, all on Windows 10. So the overhead alone of getting the image data from the canvas and writing it back can eat up most of the entire 50 ms budget, if not exceed it. Or maybe I didn't do it the proper way, I am totally not a web developer. There is also certainly some room for improvement in my implementation of the core algorithm. But unless there are more tricks than specifying willReadFrequently, it seems hard to get below 50 ms.
The naive method is slow because it repaints the image several times. A faster way is a sort of double buffering: copy the image to an array, perform the fill on the array, copy the array to the canvas image data.
The method in TFA is however faster for static images, after the preprocessing.
There’s a trade off between speed and showing progress to the user. I find it better to show slow progress rather than wait 500 - 1000ms to do it faster, but it depends on the use case
For a typical user's use case, I bet you could downscale the image 16x (using the GPU, and therefore very fast), flood fill on the much smaller image (using slow javascript over each pixel), then expand the image up to the original size and reprocess just the border pixels (and there aren't many border pixels compared to the total filled area).
This method might not produce perfect results when the paint can 'escape' through a tiny gap that isn't visible on the downscaled version... But at the same time, I suspect most users don't actually want the paint to escape through a tiny gap, so that might be the right behaviour.
You wouldn't upscale, you'd fill 16x16 blocks in the original image while iterating over the downscaled image, and if a block wasn't all the same color you'd process it normally. In fact you don't need to flood fill in the downscaled image, just do edge detection.
Been a long time since I was a kid playing with Deluxe Paint, but "fill except where I missed a tiny gap" would have been exactly what I wished the fill tool would do! I don't think I've used a fill tool in earnest since :-(
willReadFrequently disables the use of the GPU. All canvas operations will be software rendered on the javascript thread. The actual operations happen not when you make the draw() call, but when you read any pixel of the canvas.
Thanks for the link, it's a great write up. I'll play with his algorithm to see how it works for me, it may speed up the pre-processing. It's still slower to the user than using the pre-generated masks however.
I'd expect that you could achieve the speed that won't be perceptibly different from the "preprocessed". The modern browsers JIT the JS code and the canvas abstractions map correctly to the hardware. A better fill algorithm (cache friendly -- the traversal has to be as much as possible through the points which are actually one after another in memory and avoiding unnecessary calls or adding to some structures when loops are enough) and avoiding unnecessary canvas updates should result in instantaneous-enough responses? Using masks is probably a good/necessary idea to provide complex fill patterns, but I'd expect the masks could be made on-demand, which then makes the freehand drawing possible (which would invalidate the precomputed masks).
Apart from algorithm commentary by folks who are certainly wiser than me on the topic: since you’re already using a worker, you would probably benefit from using OffscreenCanvas[0], which is transferrable to worker threads and otherwise should be optimized for heavier computations.
Thanks for the suggestion. I did indeed achieve a significant speed up with OffscreenCanvas, see the commit below. It lets me avoid the slowest part of my original solution, the call to context.putImageData()
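For anyone who hasn't tried it, a minimal sketch of handing a canvas to a worker (worker.js is just an illustrative file name):

    // main thread: transfer control of the on-screen canvas to a worker
    const offscreen = document.querySelector('canvas').transferControlToOffscreen();
    const worker = new Worker('worker.js');
    worker.postMessage({ canvas: offscreen }, [offscreen]);

    // worker.js: drawing happens off the main thread, so the UI stays responsive
    onmessage = (e) => {
      const ctx = e.data.canvas.getContext('2d');
      ctx.fillStyle = 'red';
      ctx.fillRect(0, 0, 100, 100);
    };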
This reminds me of an iOS puzzle game I really enjoyed a year or two ago called "Kami 2". Picture a canvas with geometric shapes in a few different colors. The goal is to paint the whole canvas in one color, armed with a limited number of "fill" operations. Highly recommended.
In the past we had indexed color screen modes. You could draw all pixels to the screen once and give each pixel an index into a color palette. When you would start, they would all be mapped to white, but as you start with coloring, you would then just change the color of the palette entry (truly a O(1) operation) and your graphics card would then do the work at scan time. I guess the closest equivalent nowadays would be to do this inside the pixel shader.
This is however not the same operation: changing the palette will affect all pixels of that color, while flood fill affects only the pixels in the connected component you clicked on.
> changing the palette will affect all pixels of that color
Not necessarily, because same color doesn't require same palette index. If you are going to preprocess the image anyway, you can assign one palette index to each connected component.
This has convinced me to add preprocessed fill areas to my kids' colouring book for iOS. (No IAP, no subscription, no ads, etc., in line with the general sentiment here.)
Been thinking about this for a while, but things are fast enough on an iPad that it took a back seat, nice to be inspired to go that extra mile :)
Modifying it to work with a changing canvas shouldn't be too difficult.
You could simply clear the mask of a bounding box around the changes, and recalculate the masks inside that bounding box. In addition to hitting the "edge" of a drawing area, the algorithm can now also hit the edge of the bounding box and encounter an outside mask - which means the inside mask and outside mask can be ORed to get a new composite mask.
You'd have to take care with the fact that an edit can join two previously-separate areas together, but that'd also be a simple OR. The more tricky scenario is an edit splitting an existing area in two. It is reasonably easy to determine that an edit is trying to split an area, but it is not immediately obvious to me how you'd efficiently determine that no connection between two halves exists outside the edit bounding box: how do you distinguish a "|" turning into a ":" from an "O" turning into a "C"?
Tbh adding a background pixel only needs to check its four surrounding pixels to see if it joined separate areas; adding a foreground pixel is harder, I expect you'd need Dijkstra.
Oh man this brings back memories. When I was young we did not have Internet at home, so I made a VB6 app for my sister so she could load outlines and color them. So much fun! For both of us :D
EDIT: Later I also made a simple LOGO interpreter in VB6 for the same reasons. Turtle fun!
Yes, it seemed more appropriate for a kids painting app, like the paint is being 'poured' into the shape. It's probably only cool for the first couple of fills, mind you, after that it might start to get in the way.
Might be fun to have a fill that is actually slower, but is parallelized so you can continue to draw/fill other sections as it happens.
It seems hypocritical to make an app that won't show ads and talk about its implementation on a site full of ads (I see 5 on the page).
Sure the target for this content is developers and not specifically kids, but plenty of 10 year olds program and may be seeing these ads when trying to write their own painting program.
Sigh, ok I've given Wordpress money. No more ads for your delicate eyes. You're welcome for all the free code, write up, diagrams and fun app for kids, by the way. I hope they bring a bit more positivity into your life so you can spread it elsewhere on the internet.
Wordpress' free tier sucks alright. However, I have nothing against ads per se, just ads shown to children. Maybe I'll start giving Wordpress money after all these years of them hosting my blog.....
So no, not hypocritical. Ads are fine, just keep them away from young kids.
Nice app.