Interactive intro to shaders (mayerowitz.io)
555 points by superMayo 10 months ago | 71 comments



I finally found the courage to write and expose myself to the internet. I've always wanted to learn shaders so I thought it would be nice to document my learning and share it with others.


I just want to commend you for writing on your own blog and making an interactive post, as opposed to some shitty Notion/Medium/Dev post that feeds the Google algorithm AI copy pasta.

Honest blogging is dying in the shadows, unseen by the god-almighty algorithm. Sad times. We're deep in the Star Wars Episode IV era of the Internet.

My only feedback is, merci and more of it please!


Welcome to the Internet, superMayo!

If you want to see what the Masters can do with shaders, let me introduce you to Inigo Quilez and his shader art: https://www.youtube.com/watch?v=BFld4EBO2RE

EDIT: I did not notice you are the author of this article. It's very well done, and I've been looking for more approachable and interactive tutorials on the arts of shader coding.


If we're going to link to IQ, then we should probably also link to Shadertoy, where there are several thousand shaders to look at for ideas.

IQ's personal page on Shadertoy: https://www.shadertoy.com/user/iq/sort=newest&from=576&num=8

Shadertoy is created by Beautypi (Inigo Quilez and Pol Jeremias).

And all of IQ's introductory example shaders covering Shadertoy's inputs:

// Input - Keyboard : https://www.shadertoy.com/view/lsXGzf

// Input - Microphone : https://www.shadertoy.com/view/llSGDh

// Input - Mouse : https://www.shadertoy.com/view/Mss3zH

// Input - Sound : https://www.shadertoy.com/view/Xds3Rr

// Input - SoundCloud : https://www.shadertoy.com/view/MsdGzn

// Input - Time : https://www.shadertoy.com/view/lsXGz8

// Input - TimeDelta : https://www.shadertoy.com/view/lsKGWV

// Input - 3D Texture : https://www.shadertoy.com/view/4llcR4


That was an amazing video, thanks for sharing! I really appreciate that there's a link to the shader code as well. Makes me want to dive into graphics again!


You can improve your antialiasing, assuming you're willing to use the well-supported OES_standard_derivatives extension (or WebGL 2). Instead of doing smoothstep(0.0, 0.01, dist) with constants picked sort of at random, do smoothstep(fwidth(dist), -fwidth(dist), dist).
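A sketch of what that looks like for a circle SDF (WebGL1-style; the uniform name is made up). Note that the GLSL spec leaves smoothstep undefined when edge0 >= edge1, so I've written the equivalent edge0 < edge1 form:

```glsl
#extension GL_OES_standard_derivatives : enable
precision mediump float;
uniform vec2 uResolution; // made-up uniform: canvas size in pixels

void main() {
    vec2 p = gl_FragCoord.xy / uResolution - 0.5;
    float dist = length(p) - 0.25;  // signed distance to a circle's edge
    float w = fwidth(dist);         // ~ how much dist changes across one pixel
    float alpha = 1.0 - smoothstep(-w, w, dist); // ~1px-wide edge at any resolution
    gl_FragColor = vec4(vec3(1.0), alpha);
}
```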


Didn't know that, thanks for the tip!


This feels like the absolute best way to give an introduction to shaders, given that they're completely graphical. The way the interactive code is embedded within the article, instead of being a link to an exercise, works super well.

Thanks a lot for making this, keep doing what you do!


This post got me to log in to thank you for taking the time to write this, and to congratulate you on how enlightening it is.


Tangential to the main topic: which generative artists are you following, and/or where did you look to find them?

I found a couple myself (@D_VISION7 @lv374 @beesandbombs @HAL09999), but ran into roadblocks trying to find more. There are a few fractals here and there, etc. Shadertoy seemed mostly like math demos/challenges rather than art, or at least it doesn't have an easy way to find the artsy ones.


On mastodon you can follow hashtags, so follow #generative, #procedural, #creativecoding etc, not just people; you get a lot of serendipitous finds this way. What shows up on the different tags varies a bit: #generativeart picks up a lot more AI-generated guff, not interesting to me at all, while #generative tends to get less of that and more hand-coded art.

I do find a lot of NFT pollution in these tags so I've got words related to that filtered out.

I follow some actual people too and you can find good follows from who they RT but following tags made mastodon much better for me than t**ter ever was, and actually makes it worthwhile posting with tags.


Can only recommend checking out https://www.fxhash.xyz/. It's an NFT platform for generative art. Think of NFTs what you will but that platform really brought together lots of artists and in my opinion the field has advanced a lot these past two years due to that platform existing.

Personally I really like @KilledByAPixel and @piterpasma simply from a technical perspective. If you're into vector / plotting work @zancan is definitely worth checking out.


Here’s a few artists you might like: @FEELSxart @ilithya_rocks @generativelight @thresfold @Tezumies @_nonfigurativ_ @Olga_f2727


This is very cool, just what I needed actually.

Also your website is very sleek, minimal and your projects are tasteful.


This article is super fun. Thanks! I was curious if you’ve learned WGSL, the shader language for WebGPU? I’m struggling to wrap my head around what’s similar and what’s different. A fun and interactive guide like you’ve done here with GLSL would be amazing.


https://webgpufundamentals.org/ is only a few months old and is the best WebGPU resource I've found.


Thanks for sharing! I started learning about the Godot Engine recently, and I was just beginning to fiddle with shaders. This was an excellent article to give me a few more points to start with. But, oh boy, the curve is steep.


Looks like a good way to introduce newbies to the world of shaders, many thanks for putting this out, I already have a recipient in mind for this content.


Impeccable timing, I was just about to try and get into shaders. Thank you so much for writing this, I will be reading it on the weekend.


First sentence: "What if I told you that it could takeS..."

You might want to correct that. Now reading on :)


The Mona Lisa paint cannon is an incredible GIF, do you know where that's from?


Looks like it's from a talk the MythBusters guys did at NVISION 08: https://www.youtube.com/watch?v=aa3OGgBkRiQ


Thank you!


Nice work - this is really cool!


Don't be shy, I guarantee that I'm worse at shaders than you are. Thanks for your effort


The article is quite nice. However, it glosses over the primary problem with shaders.

A shader is a pain in the ass that most programs and applications don't want.

3D stuff likes triangles and the GPUs are happy to slot into that abstraction. Shaders are useful to interpolate over those triangles.

Triangles are mostly garbage for everybody else. 2D rendering wants paths. Font rendering wants paths or pixmaps. GUIs would work much better with paths and pixmaps. Compositors really want pixmaps. Video decoders really want pixmaps and parallel rendering.

What everybody non-3D wants is rectangular pixmaps and direct computational access to those pixmaps. GPUs don't like this very much, and shaders don't map very well to it.


Having written a 2-d curve renderer, what I want is parallel compute and high bandwidth; gpus deliver. (Especially with newer interfaces that support scatter-write; not sure how much penetration these have in the browser yet.) It's true this is not what you want at a higher level, but it serves as a fine base to implement higher-level abstractions. You could support them in hardware, but it's not at all obvious what the advantages would be; no one complains that cpus don't have architectural support for for-loops.

Edit: upon a reread, I don't really understand what your problem is with gpus. You can ignore the vertex processing pipeline entirely, drawing just a single fullscreen quad (or use a compute shader); the gpu will handle this with aplomb, and this is the sort of thing the linked article is talking about too.
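(For reference, that fullscreen draw needs no vertex buffer at all; a common trick in GLSL 3.30+ is to generate one oversized triangle from gl_VertexID — a sketch, covering the same ground as the "fullscreen quad":)

```glsl
#version 330
// Fullscreen "quad" as one big triangle: issue a draw of 3 vertices
// with no vertex attributes bound at all.
void main() {
    vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2); // (0,0) (2,0) (0,2)
    gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);            // covers the screen
}
```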


>A shader is a pain in the ass that most programs and applications don't want.

As a devil's advocate: most programs don't necessarily require the GPU to achieve what they want. 3D stuff practically requires such parallelism once you go past a very small scale.

That was the mindset in the 00s, at least. Software has gotten more complex to the point where the GPU may be a desirable optimization, but the GPU pipeline has always been strict and locked down, and moving from single-core to massively parallel programming often requires different algorithms. GPGPU programming can liberate you from the need to work in triangles, but you still need to take a completely different approach if you want to enjoy that parallelism.


What part of a pixel/fragment shader does not map very well to pixmaps?


Only interpolating based on the edge? Needing to carefully map float coordinates so you don't wind up with fractional pixels mapping incorrectly? Inability to look at pixels to each side to compute something?

You can work around the limitations in vertex or fragment shaders, but in many cases you end up having to unwind a lot of what the vertex and fragment processing does.

One of my favorite examples is drawing a screen/pixel-space rectangle, then drawing its border in a different color with a specified border width in pixels. Oof. You have to specify 4 triangles, make sure they don't overlap, specify them in a particular vertex order or annotate them with extra data so you only draw one of the three edges, make sure you draw each pixel exactly once or your blending goes all to crap, etc. Whereas you can divide the rectangle into chunks, send each chunk through the compute pipeline, and it's stupidly straightforward.
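Here's roughly what that compute version looks like (a sketch in desktop GLSL 4.30; the uniform names and layout are mine). Each pixel is written exactly once, so there's no overlap or blending to worry about:

```glsl
#version 430
layout(local_size_x = 16, local_size_y = 16) in;
layout(rgba8, binding = 0) uniform writeonly image2D uTarget;
uniform ivec4 uRect;   // x, y, width, height in pixels
uniform int   uBorder; // border width in pixels
uniform vec4  uFill, uEdge;

void main() {
    ivec2 p  = ivec2(gl_GlobalInvocationID.xy);
    ivec2 lo = uRect.xy;
    ivec2 hi = uRect.xy + uRect.zw;
    if (any(lessThan(p, lo)) || any(greaterThanEqual(p, hi))) return; // outside
    // Distance in whole pixels to the nearest edge of the rectangle:
    int d = min(min(p.x - lo.x, hi.x - 1 - p.x),
                min(p.y - lo.y, hi.y - 1 - p.y));
    imageStore(uTarget, p, d < uBorder ? uEdge : uFill);
}
```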

Or, if you want simpler, just draw a line n pixels wide. If you don't have an explicit extension to do this, it's really a pain.

2D graphics simply wants very different graphic and computational primitives compared to 3D.


Interpolation works on neighboring pixels though? Not sure what you mean there. Looking at neighboring pixels of the input pixmap is trivial as well.

The fractional pixel mappings do get you - but I think I recall an integer mode somewhere in GLSL 3.2.

Your example steps outside the shader boundary, though. If you try to use triangles to draw a pixmap, of course you'll have trouble. That's why pixel shaders are the right tool for the job.

The only disadvantage they have is that the pixmaps are immutable within a single pass, so you can't (easily) draw the border together with the inner rectangle in one pass. But if you're used to functional programming, you won't even notice this.

I agree that shaders are not the right thing when you need mutability, as you seem to, but working with 2D graphics doesn't necessarily require mutability.


This is neat! I've gone down the SDF rabbit hole a bit, recently. I'm glad you added some links to iq's site; he's got great stuff.

I feel compelled to link to his "happy bouncing" shader, which is phenomenal IMO: https://www.shadertoy.com/view/3lsSzf

(with associated 6 hour (!) youtube video on its creation)

That's a juicy ~500 lines of code (:


Inigo Quilez is a wizard, the shader community owes so much to him!


I've often tried to get interested in the subject, but could never find an accessible gateway: I found it with this introduction! Super fun and playful, thanks, can't wait to read the rest!


Minor nitpick: you mention cel shading but spell it "cell" -- the term comes from the cels used in hand-drawn animation and the quantized tones used in shading there.


Meta-nitpick: cel stands for celluloid:

https://en.wikipedia.org/wiki/Cel


Fixed it, thanks ;)


This is really nice. I'm a former-artist-turned-programmer and every now and then I get the itch to dig into graphics programming. I've written a couple very basic shaders, but once it gets into the math (which is...very early on), I hit a ceiling. I went to college for art rather than comp-sci, so my math skills are virtually non-existent.

Anyways, well done, love the article.


Great intro, hope it will be continued. Too often these kinds of things start with a great intro and are then dropped.


I have never dealt with shaders, so pardon me if it's a very basic question. In a single frame from a game, are shaders essentially all that are being used to draw it?

Or do we have basic shapes like triangles, squares, circles, etc and the shaders go on top of it, drawing shadows, smoothing edges, etc?

From the example, it seems like you can create a shader to draw any object in a scene, and then I imagine you compose other shaders to get shadows and lighting and all of that. In the very limited experience I've had with drawing, I drew shapes but never through shaders. I always thought shaders didn't draw the objects themselves.


(Note the following is a simplified description of the classic forward rendering process; the so-called deferred rendering technique is a bit different.)

A GPU turns an abstract vector shape like a triangle, defined by three vertices and data such as a normal associated with each vertex, into a stream of fragments, one (or more if multisampling) for each pixel in the output buffer that’s covered by the shape. This part is all done in hardware.

A fragment is a pixel coordinate plus user-supplied data that’s either constant, called uniform, or the aforementioned vertex data interpolated across the triangle face, called varying. This interpolation business is again done in hardware and not programmable.

The fragment shader takes a fragment as input and based on the data computes a color, which is (after a couple more stages) output on the screen (or offscreen buffer) as the color of the respective pixel. This could be anything from a constant color to complex lighting calculations. In GPU rendering, this is all massively parallel, with countless fragments being processed simultaneously at any moment. Shaders are pure, stateless functions: the only data they can access is the input, and the only effect they can have is to return a color (and a few other things like a depth value).

So in a nutshell, the GPU hardware is responsible for computing which pixels should be filled to draw each triangle, but the fragment shader’s responsibility is to determine the color value of each of those pixels.
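To make the uniform/varying distinction concrete, a minimal WebGL1-style pair might look like this (a sketch; the attribute and uniform names are mine):

```glsl
// Vertex shader: runs once per vertex.
attribute vec3 aPosition;
attribute vec3 aColor;
uniform mat4 uMVP;   // "uniform": one constant value for the whole draw call
varying vec3 vColor; // "varying": interpolated across the triangle face

void main() {
    vColor = aColor;
    gl_Position = uMVP * vec4(aPosition, 1.0);
}
```

```glsl
// Fragment shader: runs once per fragment, receiving the interpolated varying.
precision mediump float;
varying vec3 vColor;

void main() {
    gl_FragColor = vec4(vColor, 1.0); // the pure, stateless color function
}
```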


Shaders do all the drawing, but they do so in different stages. I won't explain the entire graphics pipeline[1], but a lot (some 90%+) of what people casually think of as "shaders" for doing lighting effects are the fragment/pixel shader stage of the renderer.

There are other stages (vertex, tessellation) that draw those basic shapes before the fragment shader draws "on top" of the scene.

(There is also a lot more to fragment shaders than what I described, e.g. deferred rendering[2], but that's an equally large topic to get into.)

1: https://vulkan-tutorial.com/Drawing_a_triangle/Graphics_pipe...

2: https://learnopengl.com/Advanced-Lighting/Deferred-Shading


Yes, the color of every pixel is ultimately determined by a shader program, but as you might expect it's more complicated than that.

There is what is referred to as a graphics pipeline consisting of a mix of fixed-function hardware stages and programmable stages. At a high level, it does the following: 1) the GPU accepts a set of 3D triangles from the CPU, 2) a 'vertex' shader program transforms (flattens) the 3D triangle vertices into 2D triangle vertices with pixel coordinates, 3) the GPU rasterizes the 2D triangles to determine exactly which pixels the triangles cover, 4) a 'pixel' shader program is run for each covered pixel to determine the color of the pixel, 5) the resulting pixel color is stored in a frame buffer (which may involve blending it with the existing color). This 'pipeline' is then repeated many times (with different triangle meshes and shaders) until the whole frame is drawn.

Hope that helps!


Yes, you've got the right idea. AFAIK every type of code running on the GPU is called a shader (e.g. special data operations are even called "compute shaders", although they are a different beast). All the operations you mentioned (colors, shadows, shading, image effects, general image processing) are achieved through parallelized computing combining lots of data arrays (vertices and their properties, source textures, pre-computed functions, target textures, buffers, etc.).

For example, to get light and shadows, your shader needs access to some (probably global) variable with the position and direction of, say, a spotlight. Very often composite lighting is achieved by combining multiple shader passes (a base pass for global illumination, then one pass per light, for example), each literally adding more light (additive passes). Now, to avoid adding light to pixels where the light source is blocked (i.e. shadow), the most common technique uses what's called a Z-buffer (just a floating-point texture). You want to know, for each light in the scene, where its light reaches, so (before any lighting is applied) you set up a single shader pass that renders all solid geometry in the scene using the light's position and direction as the camera transform, with a special shader whose only purpose is writing each object's distance into the Z-buffer. Then, every time you want to know whether a point in space is reached by your light, you sample this Z-buffer (after doing some geometry) and compare the point's distance to the saved value in that direction. Yes, it can be very buggy and precision errors abound, and every engine worth its salt already does this for you, but they let you get in there and modify the process.
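The lookup itself is short; the fragment-shader side might look like this (a sketch, WebGL1-style, with illustrative names — the bias is the usual fudge factor for those precision errors):

```glsl
precision mediump float;
uniform sampler2D uShadowMap; // depth rendered from the light's point of view
uniform mat4 uLightViewProj;  // world space -> the light's clip space
varying vec3 vWorldPos;

float shadowFactor() {
    vec4 lc  = uLightViewProj * vec4(vWorldPos, 1.0);
    vec3 ndc = lc.xyz / lc.w * 0.5 + 0.5;            // remap to [0,1] range
    float nearest = texture2D(uShadowMap, ndc.xy).r; // closest surface to light
    float bias = 0.005;                              // fights shadow acne
    return (ndc.z - bias > nearest) ? 0.0 : 1.0;     // 0 = shadowed, 1 = lit
}
```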

Everything else is a variation on this theme. Deferred rendering is rendering data instead of colors into an intermediate texture, which is later processed to get the colors. Blur effects are 2D convolutions of the render texture (e.g. with a Gaussian kernel). Tessellation shaders are about generating new geometry before rasterization. Even drawing text is achieved through font atlasing and small rectangles.


You can draw any object with fragment (aka pixel) shaders, because certain math techniques (SDFs, trigonometry, etc.) can be used to draw shapes regardless of the technology.

So some talented artists are pushing the bounds and wrestling with performance trade-offs in the fragment shader.

Fragment shaders are more commonly used for making full-screen filter effects (color correction, etc.).

Shaders are also used to make textures and materials on basic objects. Material artists often generate textures with shader math.

Many visual effects are made by using shaders in creative ways.

Shaders run on the GPU in a parallel, wave-like fashion: many, many threads run the same code over different data in one wave.

In some cases shaders are much faster than CPU branching code. Shaders also have easier access to some rendering data.

So they are a good space for creative special effects.

Any object in a game with a high level of surface detail is a common target for shifting that detail onto a shader.

Ocean surfaces, tessellating meshes, etc.

There's many other uses, because GPUs are powerful and flexible.


> Or do we have basic shapes like triangles

Generally this - the SDF mechanism is very clever, but that's not what game engines tend to do; their geometry comes from triangle-based tools used by artists.


SDF is clever, but not as much as triangles. :) SDF is unfortunately slow because of the ray-marching loop.


In case anyone else is seeing the images as flickering noise: my fix was to copy the images from the browser and paste them somewhere else, where they display correctly.

Imgur link below. The first image is a screenshot of what I see in the browser; the rest are the actual images after pasting to imgur.

https://imgur.com/a/F4203rz


That's strange, what browser are you using?


Chrome / Fedora 38 64 bit / Wayland / Radeon Vega 3 graphics.


> Wayland

Likely culprit?


Unlikely. This is what gets installed and activated by default on Fedora.


This is my understanding of shaders:

Drawing a line on the CPU: a function loops through each pixel between points A and B, drawing one pixel at a time sequentially. It runs once, with exactly as many steps as there are pixels in the line.

Drawing a line on the GPU: a function checks whether the current pixel is on the line and draws it if it's a match. It runs on all pixels on the screen at the same time, even ones that are far from the line.
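In shader terms, I picture the per-pixel check something like this (a sketch; uA/uB/uHalfWidth are made-up uniforms):

```glsl
precision mediump float;
uniform vec2 uA, uB;      // line endpoints in pixels
uniform float uHalfWidth; // half the line width in pixels

void main() {
    // Distance from this pixel to the segment A-B:
    vec2 pa = gl_FragCoord.xy - uA, ba = uB - uA;
    float h = clamp(dot(pa, ba) / dot(ba, ba), 0.0, 1.0);
    float d = length(pa - ba * h);
    // Draw only if the pixel is on (near) the line; skip it otherwise.
    if (d > uHalfWidth) discard;
    gl_FragColor = vec4(1.0);
}
```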

Is this correct?


Not exactly. First, a GPU can't run a pixel shader against EVERY pixel simultaneously. A typical screen might have about two million pixels, while GPUs max out at a few thousand concurrent 'threads of execution', i.e. they effectively draw in chunks of a few thousand pixels at a time. Second, the GPU doesn't need to run pixel shaders against the whole screen; it can run shaders against any arbitrary shape you like, using triangles. So the efficient way to draw a line is to send the GPU two triangles that match the line geometry you want, and then only run the pixel shader on the pixels the triangles cover. Much more efficient.


Very nice, I've tried looking at a couple other tutorials in the past but they always assume more prior knowledge than I have.

This really hits the sweet spot for me.

Would love to see tutorials on more topics. In particular, a very basic lighting model but with a detailed breakdown on how normals and dot product work together.

Yes, there's plenty of info out there, but I think your teaching style would still make it worthwhile


> a detailed breakdown on how normals and dot product work together

Normals point straight out from faces; a surface normal is the direction the surface is facing. The dot product of two vectors measures how "aligned" they are - indeed, one definition of the dot product is in terms of the angle between the two vectors.

Also, remember that the vector operation (A - B) results in a vector going from B to A. Thus (light_position - face_center) is a vector from the face to the light, and (light_position - face_center).dot(face_normal) gives us a numeric value that is higher when the face is pointing towards the light source.
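In GLSL the whole thing is a couple of lines (a minimal Lambert diffuse sketch; the uniform/varying names are illustrative):

```glsl
precision mediump float;
uniform vec3 uLightPos; // light position in world space
varying vec3 vNormal;   // surface normal, interpolated per fragment
varying vec3 vWorldPos; // surface position, interpolated per fragment

void main() {
    vec3 n = normalize(vNormal);  // re-normalize after interpolation
    vec3 toLight = normalize(uLightPos - vWorldPos); // surface -> light
    float diffuse = max(dot(n, toLight), 0.0); // 1 facing the light, 0 at/past 90 degrees
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}
```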

You'll have to look to linear algebra for a deep understanding: https://youtu.be/LyGKycYT2v0?si=lWr38mH34yGRSJpv

I also recommend an informal (but rigorous enough) textbook called "Linear Algebra: Theory, Intuition, Code" which teaches through conversational written explanations, mathematical notation, and code. GPT4 is also quite competent at teaching linear algebra and, in my experience, is able to distinguish between correct and incorrect mathematical proofs; a good textbook is still a better primary source though.


As a general rule, the third article/book/tutorial you read is the one where you "get it".


That's really nice to hear, and it gives me motivation to continue! The next article will probably go 3D, but that adds a lot of difficulties. I still need to find a good small-scale but playful shader project to cover!


This is supercool! I'll work through it later when I have some free time.

I've always wanted to set up an interactive blog like this or Bartosz Ciechanowski's. Are you willing to describe your stack? I put something together with Django & Bootstrap once, but even that was more overhead than I liked.


“their parallel nature makes them memoryless and stateless. This translates to: “You can’t store or share data between pixels or shader executions.””

This kind of constraint is so liberating for me for some reason. It just narrows the space a lot I guess.

I also kind of love not having imports and libraries as that also really simplifies things.


> This kind of constraint is so liberating for me for some reason.

if you like this kind of a thing, erlang is a kind of thing you would like :o)


I might be wrong, but there are compute shaders that can take their outputs back as inputs and, well, do more computing on them, afaik.


Indeed, it's a simplification! You can use buffers to keep information from previous states or share state between pixels (for both fragment and compute shaders), though it can sometimes greatly reduce performance. For an intro, however, I think it's simpler to assume no memory and no shared state.
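For example, with the usual "ping-pong" setup - render into a texture one frame, bind it back as input the next (a sketch; the names are mine):

```glsl
precision mediump float;
uniform sampler2D uPrevFrame; // last frame's output, bound as this frame's input
uniform vec2 uResolution;

void main() {
    vec2 uv = gl_FragCoord.xy / uResolution;
    vec3 prev = texture2D(uPrevFrame, uv).rgb; // read the "remembered" state
    gl_FragColor = vec4(prev * 0.98, 1.0);     // e.g. a slowly fading trail
}
```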


Ah yes, it's good to simplify in the beginning! Cool article in any case!


You can set up shaders to access a shared global texture (and write to it). It's more expensive and a bit of work to set up, so the statement isn't totally true, but it's a good first mental model.


wow this is delightful! Is there anything you can share about how you made the page? Are the graphics in webgl? What did you use to embed the code?


ah I see it is webgl and codemirror. But if you have anything else to share about the making of the site would love to hear!


Good job on a really good intro


Beautiful, thanks for posting!


From an efficiency point of view, would this be better or worse than the css animation? Just curious.


This is ridiculously serendipitous. I was looking for a tutorial for this exact shader for a game I'm working on.



