I love a light-hearted thing every now and then, and this tickles me very much!
Reminds me of a "user locator" website I once saw. It claimed to use advanced algorithms to locate the person using the website, so when you clicked go it spent a while printing various "reticulating splines" type technobabble before finally showing a picture of a big finger pointing at the user with big H1 text saying "You are right there".
I like these one-off fun sites. Sort of like how 'zombo.com' was making fun of the .com stuff. Unfortunately, that one will be gone as Flash goes away.
This is what the internet was supposed to be. Well, not that this is all it was supposed to be, but random websites that people put together just because. I'm sure they learned something by doing it, so it wasn't a total waste for them. And now we all get a bit of levity in our day.
I just looked at the latest code and it doesn't seem like that's the case any more. It's just a simple grid and instead images are picked and scaled to point more accurately from what I can tell. Also it's React now...
If I'm reading it correctly, it loads up an array of points [0], and after you move your cursor it loops through all points and finds the one with the nearest distance [1]. It seems like every image has the URL structure /images/[index of point].jpg.
The interesting bit is that it keeps a list of the 4 point/images you've most recently seen and ignores those when searching for the new nearest point/image, then scales that image to point at your cursor more accurately. This is why it feels almost pixel-perfect: you won't get a repeat image until you refresh, or try the same location 5 times.
To test that out, ctrl+tab away and back to the tab without moving your cursor. You'll always get a different image. Then refresh the page, and ctrl+tab away and back again without moving your cursor. You'll see the same images in order.
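The lookup described above can be sketched roughly like this. This is my own reconstruction, not the site's actual code: the names (`Point`, `findNearest`, `RECENT_LIMIT`) and the exact distance metric are assumptions.

```typescript
// Sketch of a nearest-point lookup that excludes the 4 most recent picks.
// All names here are hypothetical; the site's code may differ.
interface Point { x: number; y: number; }

const RECENT_LIMIT = 4;       // how many recent picks are excluded
const recent: number[] = [];  // indices of recently shown images

function findNearest(points: Point[], cx: number, cy: number): number {
  let best = -1;
  let bestDist = Infinity;
  points.forEach((p, i) => {
    if (recent.includes(i)) return;               // skip recently seen images
    const d = (p.x - cx) ** 2 + (p.y - cy) ** 2;  // squared distance suffices
    if (d < bestDist) { bestDist = d; best = i; }
  });
  recent.push(best);
  if (recent.length > RECENT_LIMIT) recent.shift();
  return best;  // image URL would then be /images/<best>.jpg
}
```

Calling this repeatedly with the same cursor position walks through the 2nd, 3rd, 4th, and 5th nearest points before repeating, which matches the ctrl+tab behaviour described above.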
Also neat that what was once a jQuery app now uses React and TypeScript.
Me too, at first I thought it was a follow-up to that post the other day about Linus Torvalds' 'more elegant' linked list implementation using pointers to pointers.
I expected that too, and before I knew it I'd spent time looking for Drake's approving finger from the meme, or Uncle Sam's "I want you" gesture. I couldn't find either.
It would be cool if this was a bit faster, so you'd be able to move the mouse around and see it flashing up different pictures of people pointing at it in real time.
I remember Jonathan Puckey explaining this project at Resonate festival: the delay before showing the image is artificial and meant to create suspense.
What algorithmic bias is there? It’s literally just a grid that maps your pointer location to a picture, no AI involved. Sounds like you’re calling the author a racist.
Or better yet: "Important" and "relevant" things should not be limited to serious, constructive, work/society relevant things. Fun is very very important for the mind and soul.
> ML methods would require labelled data to be trained.
Not necessarily.
One could train a model on synthesized data using, e.g., Blender, some programmable 3D people models with hand controls, and some generic background images to paste them onto.
Anyway, for a Multimedia Information Retrieval course I chose to do my term project on training a neural network with synthetic data. In particular, I modded Minecraft such that when I press a button it saves two screenshots: one regular and one where the game renders a depth map instead. I used this to generate ~1000 samples with perfectly accurate depth maps. Because of the mods, texture pack, and world I used, the data was somewhat realistic: https://i.stack.imgur.com/Zai51.jpg https://i.stack.imgur.com/eamMR.png
This data was then used to train a neural network to predict the depth map of unseen images. It was relatively successful, but it would need more data and more research; I only had so much time for a term paper.
I guess people upvoted this submission because they think the website has some sort of machine learning algorithm, which is the daily bread on Hacker News.
Personally I don't think there's any machine learning needed for this (but who knows what they used). If I were to build a replica, my approach would be to grab a decent amount of suitable photos, manually mark the pointed areas in them, and have some sort of scaling/cropping/offset handle near misses.
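The "handle near misses" part of that replica approach could be as simple as shifting the chosen image so its marked fingertip lands on the cursor. A minimal sketch, with entirely hypothetical names (`Tagged`, `alignToCursor`) of my own invention:

```typescript
// A manually tagged photo: where the pointing fingertip sits in the image.
// These names are illustrative only, not from any real implementation.
interface Tagged { url: string; tipX: number; tipY: number; }

// Compute the pixel offset that moves the marked fingertip onto the cursor,
// e.g. to feed into a CSS transform on the displayed image.
function alignToCursor(photo: Tagged, cursorX: number, cursorY: number) {
  return { dx: cursorX - photo.tipX, dy: cursorY - photo.tipY };
}
```

With a dense enough photo set, the offsets stay small and the nudge is barely noticeable.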
I upvoted the submission because I think it's a cool trick either way.
If it were "a lot of prep" - e.g. a manually-tagged, fixed database of about 10,000 images of people pointing (assuming a 100 × 100 grid over the browser viewport) - then that's still small enough for an instantaneous response from the web server, so the relatively long busy-time suggests that _something_ computationally expensive is running in the background - such as a "this-pointing-finger-does-not-exist" engine.
It could just be fake busy. Timing is everything in comedy, so give a first time user a little suspense to think advanced computer magic is happening before the punchline arrives.
Exactly. Looking at the JS code (even minified), you can see that it calls setTimeout for effect - possibly also waiting for the image to load - but all the data is there already.
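That fake-busy pattern boils down to holding a result that's already available until a minimum delay has passed. A sketch, assuming made-up names and delay value (the real code's timing is unknown to me):

```typescript
// Artificial suspense: resolve with the (already known) image URL,
// but never sooner than delayMs. Names and the default delay are
// my own assumptions, not taken from the site's code.
const SUSPENSE_MS = 1500;

function revealAfterSuspense(url: string, delayMs: number = SUSPENSE_MS): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(url), delayMs));
}
```

The image itself can be preloaded in parallel, so by the time the suspense runs out the reveal is instant.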
If I had a piece of ML that could do that, I'd just run it once on the server and grab a coffee as it generates a set of 100k labeled images for me.
IIRC, the "this X does not exist" sites also don't run ML on request, but serve cached images. At least that was my impression from getting the same image more than once when I spammed F5 fast enough.