Human Shader (humanshader.com)
436 points by bpierre on July 20, 2023 | 124 comments



Pretty soon even tasks like this will be done by computers.


You could get an LLM to attempt this for the slowest shader ever


Perhaps not the slowest ever, but surely the most frustrating.


Here’s a cool project which could potentially help with these handwritten notes they’re asking for: https://www.youtube.com/watch?v=cQO2XTP7QDw


Next we need Human GPT and Human Diffusion.

We'll start with inference, but maybe we'll eventually convince some folks to do a full training exercise.


Aren’t we all doing HumanGPT right now, pre-trained to some extent, reading prompts and writing answers?


Wild, never thought I'd live to see the day.


This was fun. I forget sometimes how silly all our grade school math worksheets used to be. I always had trouble showing my work for simple addition and multiplication; it's a bit easier when it's a self-motivated dunking on the GPUs.

It may take 4-5 days and 1000+ people, but it's definitely created a greater sense of community than any faster rendering system. I like looking over the different pixels and knowing that they represent some nerd-sniped engineer like me.


I think something like this would actually make for a really interesting 3rd grade class project, with each student contributing a few pixels.


Ah, I remember seeing some Chinese summer math workbook shared online where the answers to each problem could be translated into a bit that could be plotted in a giant grid on the last page.

When completed, it would show a QR code, that upon scanning would mark completion of the workbook and show you the answer key. The implicit idea being you didn't have to be perfect, just good enough so the error-correction algorithm of QR codes was enough to pick up the final image. (Probably extra credit if you could figure out how to do as few problems as needed and then use EC to still figure out the secret link)


This is so genius - if I ever become a teacher I'm doing this.


That's pretty clever... QR codes even allow you to set the percentage of error correction allowed.


They were slightly older than 3rd graders, but for Pi Day 2022, Matt Parker marshaled a couple dozen students to spend a weekend computing pi by hand: https://www.youtube.com/watch?v=dtiLxLrzjOQ


It's really interesting seeing the (what I assume are) error pixels.


The author should implement an error correction system, where 1) the result is withheld from the display until a second corroborating calculation comes in, and 2) if there is disagreement, request an arbitration from a third user whose job is to pick which of the two is the best answer.
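A minimal sketch of that flow in Python, assuming a hypothetical backend with in-memory storage (none of this reflects the actual site's code):

  # Hypothetical corroboration + arbitration flow; not the site's actual code.
  from collections import defaultdict

  submissions = defaultdict(list)    # (x, y) -> list of submitted (r, g, b) answers
  display = {}                       # (x, y) -> confirmed (r, g, b)
  arbitration_queue = []             # pixels with two conflicting answers

  def submit(pixel, rgb):
      answers = submissions[pixel]
      answers.append(rgb)
      if len(answers) < 2:
          return "withheld until a corroborating calculation arrives"
      if answers[0] == answers[1]:
          display[pixel] = rgb
          return "confirmed"
      arbitration_queue.append(pixel)
      return "disputed: a third participant will pick the better answer"

  def arbitrate(pixel, chosen_rgb):  # called with the arbiter's pick of the two answers
      display[pixel] = chosen_rgb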


I don't know, if they were aiming for perfection you'd think they'd just use a computer.


It is a bit odd; they specifically ask for the picture of the worksheet, and say they’ll check it. I wonder if they just haven’t gotten around to it yet. Automatic error checking would be nice (IMO just let multiple people do each pixel and take the most popular result).


They ask for the worksheet to check that you're not cheating. They don't check the pixel values. I think much of the charm of the image is seeing the errors.

By the way, there's more going on in the shader algorithm than you might expect. Here's an explanation of the worksheet:

  u, v are coordinates relative to the center of the image
  h is radius from center, squared
  Section B generates the ball:
  B3-8 generates the reflected color on the ball.
  B9-11 applies the diffuse illumination to the ball.
  B12 adds the illumination highlight.
  Section C creates the ground:
  C5 puts a shadow directly under the ball.
  C13 is the cast shadow of the ball.
  Section D creates the sky with a simple gradient
  Section E converts the image from two-color to three-color
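For the curious, here's a rough Python-flavored skeleton of that structure. It is a sketch only: the stub functions, the constant, and the branch conditions are placeholders standing in for the printed worksheet's arithmetic, not the real formulas.

  # Structural sketch only; these stubs stand in for the worksheet's arithmetic.
  BALL_RADIUS_SQ = 150                        # placeholder constant
  def section_b(u, v, h): return 200, 100     # ball: reflection, diffuse light, highlight (stub)
  def section_c(u, v, h): return 150, 50      # ground: contact shadow + cast shadow (stub)
  def section_d(u, v):    return 50, 200      # sky: simple gradient (stub)
  def section_e(r, b):    return r, (r + b) // 2, b   # two colors in, three out (made-up formula)

  def human_shader(x, y, width, height):
      u, v = x - width // 2, y - height // 2  # coordinates relative to the center of the image
      h = u * u + v * v                       # radius from center, squared
      if h < BALL_RADIUS_SQ:                  # Section B: the ball
          return section_e(*section_b(u, v, h))
      if v > 0:                               # Section C: the ground (assuming y grows downward)
          return section_e(*section_c(u, v, h))
      return section_e(*section_d(u, v))      # Section D: the sky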


Hello, my name is Inigo Quilez, you killed my father, prepare to draw half the shader on shadertoy :D


The errors (wrong colors) are the charming part to me.


They already know the right answer for each pixel, presumably though?


If they didn't then maybe they could write some kind of computer program to find them.


Or a sort of anti-aliasing, by getting each pixel computed multiple times and averaging the results. Over time the image should get better as the errors are diluted. It will significantly increase the amount of HPU time required, but with enough human processing units willingly taking part the wall-clock time might not be too badly affected.
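A minimal sketch of the averaging step, with made-up sample values:

  # Average several independent human computations of the same pixel.
  def average_pixel(samples):        # samples: list of (r, g, b) tuples
      return tuple(round(sum(c) / len(samples)) for c in zip(*samples))

  print(average_pixel([(255, 50, 194), (255, 0, 194)]))  # (255, 25, 194)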


Yes, I find it interesting to note the different types of errors. There are some random errors, but many of the errors seem to be repeated. It looks like a lot of people ended up in the wrong code path, resulting in blue/white pixels in the lower half, while fewer people made the reverse mistake (yellow pixels in the upper half). There are a lot of purple pixels in the upper right; I wonder what led many people to the same mistake.

Another interesting thing is to try to reverse-engineer the worksheet: section B is the sphere, section C is the ground, and section D is the sky. But then there's the lighting model, shadow, etc.


A lot of the magenta pixels are people thinking that ####|1 means "leave 1 digit" instead of "remove 1 digit"


All of the pixels were already taken by the time I tried, but I did an "error" pixel (11, 26) just for my own pleasure.

Somehow I got RGB(255, 50, 194), which is different from the value posted on the chart. Actually, I'm not sure how they originally got 0 for G, since G comes from R and B, which are both positive, and the expression is multiplication and addition.


This was as much fun as it looked like. I got some nice paper, a nice pen, and made neat boxes for steps, rough columns, etc. I also converted the final RGB to hex for fun, and made a rough representation of it via colored pencil combinations. Total time was about an hour, I think, but the time wasn’t important to me. This was enjoyable.


It’s funny that “This was as much fun as it looked like,” could be interpreted accurately in either direction, depending on your personal predilections.


Ambiguity makes for interesting thoughts.


I used to think this way but I have become frustrated at my inability to intentionally convey interesting thoughts. I now strive to minimize ambiguity.


Yeah, this was a tad flippant. At work, I’ve often had something I write be received in a wholly different manner than I intended. My coping strategy for now is to preface anything that I think might be viewed differently with a “this does not mean…” warning. But of course, if I knew all of the sentences that might be received differently, I wouldn’t be in this situation to begin with.


In other words, relative point of view.


So you converted the RGB to CMYK to get a subtractive color space for colored pencils?


Nothing that advanced I’m afraid - I eyeballed layering different colors until it looked right.


This is a particularly fun exercise when you recall that the original "computers" were people doing arithmetic, not at all dissimilarly to what’s done here, just with less parallelism. Though they did at least have mechanical adders and multipliers!


Los Alamos used a room full of women for doing simulations when their IBM machine was out of order:

We needed a man to repair the machines, to keep them going and everything. And the army was always going to send this fellow they had, but he was always delayed.

Now, we always were in a hurry. Everything we did, we tried to do as quickly as possible. In this particular case, we worked out all the numerical steps that the machines were supposed to do — multiply this, and then do this, and subtract that. Then we worked out the program, but we didn’t have any machine to test it on. So we set up this room with girls in it. Each one had a Marchant: one was the multiplier, another was the adder. This one cubed — all she did was cube a number on an index card and send it to the next girl.

We went through our cycle this way until we got all the bugs out. It turned out that the speed at which we were able to do it was a hell of a lot faster than the other way, where every single person did all the steps. We got speed with this system that was the predicted speed for the IBM machine. The only difference is that the IBM machines didn’t get tired and could work three shifts. But the girls got tired after a while.


This should be attributed to Richard Feynman.

https://calteches.library.caltech.edu/34/3/FeynmanLosAlamos....


Pipelining is a pretty effective optimization.


as depicted by the cool movie Hidden Figures

".. Based on the unbelievably true life stories of three of these women, known as "human computers", we follow these women as they quickly rose the ranks of NASA..."

https://www.imdb.com/title/tt4846340/


Quote from imdb movie description:

> mathematicians that served as the brains behind one of the greatest operations in U.S. history

In this case, aren't they more like hands than brains?

The brains should be the ones who created the code. These people are just processing it, it's like manual labor.


An unkind assessment.

They used their brains to perform the calculations, I don't see how you could see it otherwise.


Sure they used their brains. But "brains behind the operation" is a term of art.

Is the McDonald's cashier the "brains behind the operation" because they count change as part of their duties?

Counting change accurately is very important to the continued successful operation of a retail establishment. But it's menial work.

Even Wikipedia agrees with me:

> Alan Turing described the "human computer" as someone who is "supposed to be following fixed rules; he has no authority to deviate from them in any detail."

I think the term "human computer" is extremely misleading without the cultural context behind the term, i.e. that these people were essentially doing 5th grade math worksheets all day. Reading numbers, plugging them into a calculator (yes, really), and writing down the results.

Usually we call these mechanical turks?


“The reason that these pre-electronic computation jobs were feminized is they were seen as rote and de-skilled,” says Mar Hicks, a historian and author of Programmed Inequality. It wasn’t true, though: “In a lot of cases, the women doing these computation jobs actually had to have pretty advanced math skills and math training, especially if they were doing very complex calculations.”

The work could require superhuman endurance, though. “They had to keep working eight hours a day doing the same equation over and over again—it must have been mind-numbing,” notes Paul Ceruzzi, author of Reckoners: The Prehistory of the Digital Computer.

from [this](https://www.smithsonianmag.com/science-nature/history-human-...) article

If I take Alan Turing's words and your interpretation of menial work, then programming is menial work too. You have fixed rules that you can't deviate from. Is all programming really menial? I would argue that it can be very mentally taxing, the same way math or any intellectual work can be.


From the same [article](https://www.smithsonianmag.com/science-nature/history-human-...) (during the space race)

At its bases, NASA employed nearly 80 black women as computers, says Margot Lee Shetterly, author of Hidden Figures. One of them, Katherine Johnson, was so revered for her abilities that in 1962, John Glenn asked her to personally verify the flight path of his first launch into space on the Friendship 7 mission. The astronauts didn’t trust the newfangled digital computers, which were prone to crashing. Glenn wanted human eyes on the problem.

“They had a tremendous amount of respect for these women and their mathematical abilities,” says Shetterly. “The male engineers often were not good mathematicians. So the women made their work possible.” Still, some friction existed. Women who asked for promotions got stonewalled or turned down: “For women who wanted to move up, who wanted to be supervisors—particularly if that involves supervising men? Not so much.”

The women wouldn't have been employed without the engineering work, but the engineering work wouldn't have been possible without these women. They were equally the brains behind getting things to space. A sizable number of these women later became programmers, because building the computers was seen as the really difficult task. Coding was dull work. Writing code that gets people safely to the moon and back was obviously trivial.


> But "brains behind the operation" is a term of art.

Which art?


Over the course of the film, the three women cover a broader range of capabilities than just doing arithmetic, and that tagline is not unreasonable. I'm not being very specific on purpose - you should watch the film.


I see a missed opportunity to remove the math and instead give people a prompt, and one pixel to shade, and then refine with each next pass.

Human generative reverse-diffusion AI.


Not dissimilar to /r/place[0] - which gets messy.

[0] https://new.reddit.com/r/place/?screenmode=preview


Generative AI would also be messy if:

1. You have no prompt.

2. You have 1 iteration.

Hence why I suggested prompt and multiple iterations. It's a very subtle tweak, but in aggregate behavior, everything is subtle and has huge effects.


You need an incentive too. AI has an incentive to make the thing correctly. Place does not.


Sure. I mean not sure if AI has an incentive per se. It was selected to have this natural determination :)


> Created by Inigo Quilez

Of course...

If you are into shaders and don't know that guy, well you are one of the "lucky 10000" and you should check him out. ( https://iquilezles.org/ )


I work with him every day and pinch myself


It feels like a missed opportunity to not show each pixel's worksheet: it would be cool if you could click on each pixel, and it opens a PDF scan of that person's calculations.


Sadly you can’t do that on a site without having people post very inappropriate things, so it would require a human moderator, which I speculate makes it unlikely to happen.


It can be done. To submit a pixel you have to take a photo as proof of your calculations on paper; it's already "moderated".


Are you certain a human is inspecting the image? Or is this “moderation” currently automated?

Yes obviously it “can” be done, I’m suggesting it won’t for long, because there’s a big difference between submitting something to the site mods versus submitting something that is anonymously exposed to the public.

I’m saying this as the owner of a site where I made the mistake of allowing crowd-sourced image content to be anonymously served to others. It didn’t take long before not only was there NSFW content, but there was also illegal content.

* edit OH BTW I only just noticed this site was created by IQ, who has already dealt with this exact issue on ShaderToy and had to restrict and remove user submitted images due to abuse! The decision to not show people’s images is almost certainly intentional and by design.


Interesting, how come there are obviously incorrect pixels?


the moderators are checking that the pixel was computed by hand, not that it's correct


The page does actually say they want to see intermediate results “so we can validate your work.” (Edit that quote has been removed now.)

But there’s no promise that they will. The photo feature may be more of an automated speed bump, a way to reduce silly answers and pixel spam, and let people self-select, than an active human moderation tool. Moderating is boring and expensive in time and/or money, why would anyone actually sift through thousands of hand-written pages of arithmetic?


Validate is slightly vague here. I interpreted it as "so we can validate your work [was done by hand]."


Fair enough, I guess it is open to interpretation. The language on the page has changed now, the quote I posted is no longer there.


> why would anyone actually sift through thousands of hand-written pages of arithmetic?

2840 pages. Doesn't seem crazy. Page looks like it was written by hand? Next. At two pages per second it's less than half an hour of work.


Think they're just people making math mistakes (one of the reasons why we don't usually implement shaders by having people compute values by hand).

If you compare them to nearby "correct" pixels, they usually just have one of the three RGB values that are sharply different.


Some of the calculations are pretty big, and the operation to trim the least significant digits is easy to mess up if you forget to round.


I had a lot of fun doing this, and while doing the arithmetic, figuring out what the shader algorithm is actually doing. Such a great idea, turning internet users into the world's slowest and most inaccurate GPU


I don’t know anything about shaders, so forgive the question.

My understanding from looking at the worksheet is that the person who created the target image has created three separate formulas (depending on the area of the image), such that when you feed in the X and Y coordinates, they spit out the correct RGB value for that pixel. Is that correct? That’s wild.


Basically, yes. A shader is essentially a function that takes the x,y coords and spits out a color, and this is done for every pixel on the screen.

So a simple shader would be something like (in pseudo code):

  function shader(x, y) {
    if (x > 0.5) return white;
    else return black;
  }

would color one half of the screen black, and the other white.
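To make the "every pixel" part concrete, here's a minimal sketch in Python of the same idea: the shader function runs once for each pixel of a tiny image.

  WHITE, BLACK = (255, 255, 255), (0, 0, 0)

  def shader(x, y):                  # x, y are normalized to the 0..1 range
      return WHITE if x > 0.5 else BLACK

  width, height = 8, 4
  image = [[shader((px + 0.5) / width, (py + 0.5) / height)   # sample pixel centers
            for px in range(width)]
           for py in range(height)]  # left half black, right half white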


At their heart, that's what pixel/fragment shaders are. They're calculations for determining what color a pixel should be.

A really basic one returning red for every pixel would make a red screen. The most common use would be rendering a textured mesh and determining how lighting should affect it (is it skin? A metal surface?).

Then you can look at crazy things like https://www.shadertoy.com/ which are all purely made in pixel shaders (also that site is run by the creator of the Human Shader)


Hmmm, this looks kinda familiar....

https://www.shadertoy.com/view/ddjBRK


What a great experiment. The math for my pixel (22,34) looked a bit daunting at first glance, but it really wasn't bad. It took me about 4 minutes in total to do the math.

Highly recommended for anyone who wants a chill afternoon challenge. And the best part is seeing the final image come together.


73% in 94h at the moment.

This is 0.000002157210402 FPS so far (0.73 of a frame divided by 94 × 3600 seconds).


I'm sure that'll be fixed by release. You know what they say about premature optimization.


Amazing experiment from the man who brought us shadertoy.


And yet another task outsourced to the public: abusing human brains for gfx calculations, with an HN post as the API. Crypto mining, The Matrix style.


Next year the worksheet is just the rendering equation and a list of vertices/lights/transforms


Fun, but it would be interesting if on the worksheet there was an explanation for each calculation


I found it interesting working out what the different sections were on my own. I can't write shaders, but it was enough to roughly judge which parts were ground, sky, and sphere, and where the ground included the shadow. I didn't spot the ambient occlusion or lighting on the sphere, though.


That seems like a... lot of work.

Shouldn't have claimed a pixel.


It took me 10 minutes to do the whole computation by hand, so I wouldn't say it's a LOT of work. At the same time, I believe this is also one of the points of this experiment: to show how much work goes into computing a single pixel value for a very simple 3D scene, which makes us appreciate all the more that our GPUs can do this work billions of times per second.


It's double the work if you skim the instructions too fast and pick your own empty pixel and compute it, only to find that you need to be assigned one!


If you like this you will enjoy decompressing Pokémon by hand:

https://youtu.be/aF1Yw_wu2cM

https://www.youtube.com/live/OBVwnUH8Eek?feature=share


So someone can just write a script to generate the full image, right? Since the instructions are the same for each pixel. It would make it easier to check your work.

edit: https://imgur.com/a/UO37L1b


woah woah dude, spoilers!


Why is it so low quality? An Imgur issue?


Now if only I could sell an ad on each pixel, then we'd really have something.



This is like Amish folding@home.


Reminds me of the good old days when you didn't need an expensive GPU to play the latest path-traced games, just a pencil, some paper, and a few friends.


*a few hundred friends


Next time something like this gets done, add fields to the submission form for the estimated time to compute and the number of people computing, so the data will more closely describe the amount of work-hours used.

Heuristically we can still get an estimate: sort the submitted names into individuals and groups, guess an average group size, and take the difference between claim time and submission time as the time it took to compute.


> Claim a random pixel for yourself [...] yours, and yours only, for the next 8 hours. [...] You can claim up to 4 pixels at a time.

Why not 1 at a time, and you can claim another immediately after completing the previous?

And if you have 4 pixels awaiting verification, you cannot claim another pixel unless and until one of those pixels is verified?


I assume to stop one dedicated person doing too much of the image. The novelty here is the community aspect.


I've always wanted to see this kind of principled, computer-aided work, but for social / real-world needs. A lot of friction and pain in our lives comes from the difficulty of regrouping and organizing. When you have a framework in place to accrete everybody's little efforts into a big coherent whole, I would guess it makes everything fun and fulfilling.


Now that it's almost complete, I feel happy releasing the solver and renderer code that many might consider a cheat.

https://github.com/MarquisdeGeek/HumanShader


People were too quick to claim, so I wrote a hacky script to try to auto-claim in a loop, and then ended up with a PNG in my terminal. Beware: save the response to the request if you're going to automate the claiming part.

Well I got another pixel anyway.


Given the occasional hallucinations, best to have a computer in the loop for validation.


For the lazy: pick a pixel in the sky to skip to section D (fewer calculations)…


You get a random pixel assigned, so you cannot choose.


Now I want to see the results of one done by ChatGPT.


A bit too much to ask from a user.

It could be split up into easier steps, like adding a pair of 4-digit numbers or multiplying a pair of 1- or 2-digit numbers.


I mean yeah but it's rendering

Let it cook


I love this. It would be super cool if I could see other people's worksheets, but the image hosting might be a nightmare


It would be interesting to see a computation energy analysis. Joules/frame for human vs gpu shader kinda thing.
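A very rough back-of-envelope, where every input is an assumption (≈100 W human metabolic rate, ≈20 minutes per pixel based on times reported in this thread, the ~2840-pixel canvas, and a 200 W GPU at 60 fps):

  # Back-of-envelope only; all inputs are rough assumptions.
  human_power_w = 100                  # rough metabolic rate of a person at a desk
  minutes_per_pixel = 20               # reports in this thread range from ~10 to ~60 min
  pixels = 2840
  human_j_per_frame = human_power_w * minutes_per_pixel * 60 * pixels   # ~3.4e8 J (~340 MJ)

  gpu_power_w = 200
  gpu_fps = 60                         # a scene this simple would render far faster
  gpu_j_per_frame = gpu_power_w / gpu_fps                               # ~3.3 J

  print(f"human ~ {human_j_per_frame:.2e} J/frame, gpu ~ {gpu_j_per_frame:.1f} J/frame")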


How are they planning to get rid of errors?


A stochastic approach would work quite well. Have each pixel computed by n people and take the mode.
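A minimal sketch of that majority vote, assuming submissions are already collected per pixel (hypothetical data layout):

  from collections import Counter

  def consensus(samples):            # samples: (r, g, b) answers from several people
      value, _count = Counter(samples).most_common(1)[0]
      return value                   # most frequent answer wins; ties are arbitrary here

  # two people agree, one slipped a digit -> the majority answer survives
  print(consensus([(255, 0, 194), (255, 0, 194), (255, 50, 194)]))  # (255, 0, 194)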


I suppose by using a non-human shader and comparing the results :D


I suppose they won't use a non-human shader, since it is the whole point.


Well then why is it showing so many mistakes? It makes the experience worse.


I mean if you want to do it perfectly with no chance of error, let a computer do it, or just find the reference image. This is communal mathematics, and the errors are a part of the experience.

Saying the mistakes make this worse is like saying "I'd really enjoy spaghetti, I think, but all the noodles are ruining it."


The final step in the worksheet provides an error-correction heuristic and procedure:

> Thanks a lot for being part of the Human Shader, go find your pixel in the public canvas! Tip: if its color looks wrong to you, feel free to review your calculations and submit again with the same code!


Scrubbing through the timeline, I haven't found a single error pixel that has been corrected, though. Which is a pity, as they do stand out quite glaringly.


by eliminating the pixels where the proof-of-work is incorrect


So this is just a way to get tons of handwriting samples and computation data right?


Not everything is a conspiracy. And what is “computation data”?


Post Butlerian Mentat GPU v0.1


Quite enjoyed this, and it wasn't too much computation, about 10 minutes worth.


My pixel took an embarrassing 30 minutes but I did it all by hand!


> Error: All pixels are claimed, please wait!


Very interesting exercise.


I claimed a pixel and computed it using a calculator.


They could just use a computer to do this?


They are. Computer was an occupation filled by humans long before it was mechanized.


Sure but that’s not the point?



