Three.js Shading Language (github.com/mrdoob)
160 points by bpierre 9 months ago | hide | past | favorite | 47 comments



TSL is not really a language, to be pedantic, because it is an EDSL used to build a common AST that can be translated to the target shader language later. The usual caveats about overly eager evaluation and the impedance mismatch between the JS and TSL data models also apply here.
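
To illustrate the pattern for anyone unfamiliar: in an EDSL like this, JS function calls don't compute anything, they build an AST that is only serialized to shader code later. A made-up mini-EDSL (not three.js' actual API) to show the idea:

```javascript
// Hypothetical mini-EDSL: each call returns an AST node, not a value.
const float = (v) => ({ kind: "const", v });
const add = (a, b) => ({ kind: "add", a, b });
const mul = (a, b) => ({ kind: "mul", a, b });

// Translation to the target language happens later, from the AST.
function toGLSL(node) {
  switch (node.kind) {
    case "const": return node.v.toFixed(1);
    case "add": return `(${toGLSL(node.a)} + ${toGLSL(node.b)})`;
    case "mul": return `(${toGLSL(node.a)} * ${toGLSL(node.b)})`;
  }
}

// The "program" is plain JS, but nothing is evaluated eagerly:
const expr = mul(add(float(1), float(2)), float(3));
console.log(toGLSL(expr)); // "((1.0 + 2.0) * 3.0)"
```

The eager-evaluation caveat follows directly: plain JS `if`/`for` run at AST-build time, not per-pixel, which is exactly the impedance mismatch mentioned above.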


Without discounting any of your other points, I just have to note the irony in saying:

> TSL is not really a language to be pedantic, because it is an EDSL

where the "L" in "EDSL" stands for "language" :)


Just like any sufficiently complex program inevitably grows to have both a reimplementation of Lisp and a homegrown email client... any sufficiently complex library inevitably grows to have people arguing about whether it deserves the term language or not!

EDIT: I wish I were smart enough to have come up with these references, but I'm entirely riffing on https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule and https://en.wikipedia.org/wiki/Jamie_Zawinski#Zawinski's_Law . I can say from personal experience that both adages are resoundingly true.


I've seen this done in C++ for generating GLSL a few times before.

One, from the University of Waterloo in the early 2000s, became RapidMind: http://web.cs.ucla.edu/~palsberg/course/cs239/papers/rapidmi...

There were a few others, but I cannot remember their names.

Generally these techniques work, but for some reason they tend to die out.


> for some reason they tend to die out

I think the reason is questionable value.

Native GPU programming languages, HLSL and GLSL, have a simple and readable syntax. The complexity is not in the language, it’s in the runtime and surrounding infrastructure: weird execution model with these wavefronts and interleaved programmable/fixed function blocks in the graphics pipeline, weird memory model with load/store coalescence and manually managed groupshared, weird IO without even printf() available, and poor tooling (I think only CUDA did it right).

Replacing the programming language does nothing about the execution model, memory model, or IO. These quirks come from the hardware, not the software, and are inevitable. Custom languages also break tooling: RenderDoc's HLSL debugger is not great, but it is much better than nothing. Sometimes they break other things as well; for example, WebGPU derived its shader language from Rust, which is IMO unreadable by design.


> I think the reason is questionable value.

It's my hope, having casually followed the development of the three.js shader nodes and now TSL, that they will end up getting wrapped up into a high-quality visual node editor. As far as I'm aware, they have made some attempt at following the MaterialX spec, which should help here.

So even if writing code in TSL has limited utility over pure shader code (I don't have an opinion here but I've seen it brought up a few times), it should make writing a three.js visual node editor much simpler.


That is the BabylonJS approach.


> for example WebGPU derived their shader language from Rust which is IMO unreadable by design

The reason why typical Rust code is so unreadable is not because the type is on the right side ;)

The same 'readability argument' could be brought up with HLSL and MSL too because of their similarity to C++ - "modern C++" code typically isn't very readable either unless the reader closely followed C++ development since around 1998 or so.

FWIW I consider WGSL syntax a pretty good balance between simplicity and convenience (much better than "GLSL 300 es" anyway).


The complexity absolutely is in the language, otherwise you wouldn't have 4 or 5 different ways to pass data in and out of a shader, and turning e.g. a mesh renderer into a wireframe renderer would be a simple case of function composition.

The surrounding infrastructure is atrocious on top of that, but the idea that shading languages are fine is just off-base. They don't have real module systems, they don't let you compose, and the norm is still just to use preprocessor hacks to produce shader variants.

(this is why I built Use.GPU around a modular WGSL linker)


You're essentially building an AST via function calls, which is a lot less ergonomic than having a custom programming language to describe the AST (especially when building conditionals, loops, and complex math expressions).

It could be a good base for a noodle-graph shader editor though (or even for a language compiler outside of three.js which compiles to TSL JavaScript code - but at that point, having an offline compile step anyway, why not cut out the middleman and compile from yet another custom shading language to GLSL or WGSL directly?).


a lot of game engines have higher level shading languages (often visually programmed) that are long-lived and widely used.


Looks interesting! There is always a tension between high level paradigms like this and just writing the damn GLSL. glslify is another solution in this space that lets you have more fine control over the GLSL, which is usually way less verbose than a high level API.


One major reason for creating TSL is that it's no longer just about WebGL and GLSL. There's also WebGPU and WGSL which is very different to GLSL. The goal of TSL is to compile to both of those plus any future targets.


Almost like the old days of fixed function pipelines. The example at the top seems a little contrived, I wonder why you would build a shader like that.


It’s common in lots of modern tools to build shader trees using UI, this is analogous to that. It’s not really fixed function because you can make your tree/shader as big as you want and it has all of the math functions that GLSL has. Fixed function usually means something like a limited fixed set of nodes and you can turn the constant knobs.

Unless I missed it somewhere, the two nodes that seem to be missing here are an if() node and a loop() node. I’m guessing a loop() node can be done on the JavaScript side for small loops - it’d be the same as loop unrolling. For big loops or loops with unknown bounds at compile time, it’s not clear if there’s a way with TSL. You can simulate if() behavior using the nodes provided, but you can’t avoid executing both sides of the if().
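
The "can't avoid executing both sides" point is the standard branchless-select pattern. A plain-JS sketch of the GLSL built-ins involved (names mirror GLSL's mix()/step(), not the actual TSL node API):

```javascript
// Branchless "if" as used in shader node graphs: both branches are
// evaluated, then one result is selected with mix()/step().
const step = (edge, x) => (x < edge ? 0.0 : 1.0);
const mix = (a, b, t) => a * (1 - t) + b * t;

// Stand-ins for "if (x >= 0.5) y = exprA(x); else y = exprB(x);"
const exprA = (x) => x * x;
const exprB = (x) => 1 - x;

// Both exprA and exprB always run; step() picks which result survives.
const select = (x) => mix(exprB(x), exprA(x), step(0.5, x));

console.log(select(0.25)); // 0.75   (the "else" branch's value)
console.log(select(0.75)); // 0.5625 (the "then" branch's value)
```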

It’s been a long time since I wrote a three.js shader, so I’m not sure what the “old” example is doing. It looks like it might be a little dynamic code injection to convert from a typical GLSL shader to something three.js needs.


> big loops or loops with unknown bounds at compile time

That’s not allowed in webgl, so I doubt TSL can make it work somehow.


What do you mean, and are you sure? It works in ShaderToy. Big loops will happily unroll until the compiler crashes. There’s a standard trick for doing variable length loops (and preventing unrolling): https://shadertoyunofficial.wordpress.com/2017/11/19/avoidin...


Some of the other examples in Three.js are quite a bit more complex. https://threejs.org/examples/#webgl_lights_physical

Probably just for introductory ease. Although, admittedly, it gives the impression of nerfed features rather than ease of use.


Huh? It allows for programming via JavaScript which is then transpiled for WebGL or WebGPU. It doesn't seem fixed function at all.


It does look quite similar to how pixel rendering was configured in OpenGL 1.x or Direct3D up to D3D7 (you basically had a set of hardwired pixel operations which you could stack and combine via API calls).

Funny enough, the new work graph stuff in D3D12 is also configured in a similar way (by essentially building an AST-like tree of operations via API calls).

We really have come full circle ;)


Yeah, I get what it is, I was just making an observation about how the API appears.


By this logic we could call any API "fixed function" though.


This is what happens when everyone decides to break compatibility with the shading languages for the Web, instead of what happened with Khronos APIs on the native side.

On the other side, it is business as usual given GLSL, HLSL, MSL, Slang, PSSL,...


What do you mean? This is GLSL… a light abstraction on top of it anyway, but not a different shading language or breaking compatibility. The goal is just to write the GLSL shaders in JavaScript.


WebGPU uses WGSL, a completely different, Rust-inspired shading language.

Naturally ThreeJS now has to take care of providing shaders that work across both backends.


GLSL is pretty bad though. Terrible typing system, terrible inference, terrible archaic requirements. Awful annotations.

Metal and HLSL are more modern, mostly because Metal started that way and HLSL was updated. Both are C++-inspired, or even use clang. WGSL has a Rust-like syntax but Metal-like semantics and structure.


We are all aware of the differences, that wasn't the point.


> The goal is just to write the GLSL shaders in JavaScript.

I bet the main goal is generating both GLSL and WGSL shaders depending on the 3D backend. Usually shader language transpilation is done through native libraries like SPIRVCross (C++), Tint (C++) or Naga (Rust), but those are too heavy for being included in a three.js webpage and offline compilation probably doesn't quite fit three.js' spirit (or requirements if they need to stamp out a different shader variation based on runtime parameters).


What if we stopped adding layers of abstraction? The idea of renderer-agnostic shaders is cool, but in general you want to write your shaders for a specific renderer for best performance.


If you're writing for three.js and GLSL you've already forgone best performance. This is for render agnostic web code - it could be running on anything from a low end android tablet to a gaming PC, in any browser engine new enough to support WebGL. Most developers who use it are looking for a usable API that works everywhere possible, and they don't really care that much about perf because it'll be faster than all the alternatives (JS, canvas, SVG, CSS) even if it's pretty slow for a GPU.


What if we have different objectives?


If the alternative is manually maintaining 2..10 (or so) versions of each shader for different shading languages, GLSL versions, or shader variations, then code generation or transpilation from a common source is a pretty good compromise.
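
A toy illustration of that compromise: one expression tree, two backends. This is purely a sketch (real transpilers like SPIRV-Cross, Tint, and Naga operate on full IRs, and the dialect differences shown here are invented for illustration):

```javascript
// Toy single-source codegen: one tiny AST, two output dialects.
const uniform = (name) => ({ kind: "uniform", name });
const mul = (a, b) => ({ kind: "mul", a, b });

function emit(node, dialect) {
  switch (node.kind) {
    case "uniform":
      // Pretend one dialect references a bare global and the other
      // a uniform-struct member; the AST doesn't care either way.
      return dialect === "wgsl" ? `u.${node.name}` : node.name;
    case "mul":
      return `${emit(node.a, dialect)} * ${emit(node.b, dialect)}`;
  }
}

const brightness = mul(uniform("color"), uniform("intensity"));
console.log(emit(brightness, "glsl")); // "color * intensity"
console.log(emit(brightness, "wgsl")); // "u.color * u.intensity"
```

The shader author maintains one tree; each backend difference is handled once, in the emitter, instead of once per shader.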


One thing I'd love to see is whether bundlers can "statically compile" the TSL nodes via constant propagation into a single template literal.
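
Constant propagation over a node tree like this is fairly mechanical; a sketch of the folding step a bundler plugin would need (hypothetical node shapes, not the real TSL internals):

```javascript
// Fold constant subtrees of a node graph into literals, so a fully
// constant tree could collapse to a static string at build time.
const c = (v) => ({ kind: "const", v });
const add = (a, b) => ({ kind: "add", a, b });
const attr = (name) => ({ kind: "attr", name }); // runtime input

function fold(n) {
  if (n.kind !== "add") return n;
  const a = fold(n.a), b = fold(n.b);
  return a.kind === "const" && b.kind === "const"
    ? c(a.v + b.v) // both sides known: fold to a literal
    : add(a, b);   // otherwise keep the operation symbolic
}

function print(n) {
  if (n.kind === "const") return String(n.v);
  if (n.kind === "attr") return n.name;
  return `(${print(n.a)} + ${print(n.b)})`;
}

// (1 + 2) folds; the attribute-dependent part stays symbolic.
console.log(print(fold(add(add(c(1), c(2)), attr("uv"))))); // "(3 + uv)"
```

The hard part for a bundler is proving the tree is constant at build time, which is why it would need constant propagation through the surrounding JS, not just over the nodes.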


This is the kind of thing the three.js team spend a lot of time discussing and working towards (far too much time in my opinion - I'd much rather the effort was spent on improving the ecosystem, although thankfully the poimandres collective jumped in to fill the gap, for React at least), so I'd say it's reasonably likely.


I wonder if it's possible to have a typescript library for making these shaders at the type level, such that at compile time, each variation of the shader can be generated as a static constant string with at most a finite number of concatenations etc.


Maybe, but not from Three.js. They are adamantly against TypeScript.


It’s so sad too because it adds so much unnecessary suck to the project.


Has anyone taken a crack at building a shading language debugger?

I want to be able to step through a shader and see the values at each step (ideally for any / every pixel)

The best case scenario would be while it's actually running, but simulating it seems like a great solution too.

These are the best I could find: https://github.com/burg/glsl-simulator and https://github.com/cdave1/shdr - both seem to be missing a fair amount of features, though shdr seems to have much better support.

There's also: https://glsl-debugger.github.io/ but it doesn't support OS X and feels quite out of date.

There's "print statements" for wgsl: https://github.com/looran/wgsl-debug

I guess WGSL is the up-and-comer... as opposed to GLSL - but both would be great.

Anyone else have this hope/dream?

The hacks I use today are crazy - like conditionally exiting early based on a time variable and mapping values to colors.

So many folks I talk to are like, "yeah that's how it is". But this just seems not good enough to me...


RenderDoc has step-through debugging at a pixel and fragment level. Click a pixel in the render target view, then open the history and select the fragment to debug.


I think with three.js' TSL approach there is an opportunity to simulate computations in JS instead of translating them to GLSL. You'd just need to write an eagerly-evaluating runtime for those EDSL expressions.

Once that's done, you'd be able to use the regular Chrome DevTools debugger to step through TSL logic.

A nice thing is that this would also allow you to unit-test your shading code using regular JS testing frameworks.
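
For instance (hypothetical node shapes, not the real TSL internals), the same expression tree can be given a second, eager interpretation in plain JS:

```javascript
// Two interpretations of one expression tree: a shader compiler
// would serialize it; this evaluator runs it eagerly in JS so you
// can step through it in DevTools or assert on it in unit tests.
const vec = (x, y) => ({ kind: "vec2", x, y });
const input = (name) => ({ kind: "input", name });
const dot2 = (a, b) => ({ kind: "dot", a, b });

function evalNode(n, env) {
  switch (n.kind) {
    case "vec2": return [n.x, n.y];
    case "input": return env[n.name]; // varying/uniform stand-in
    case "dot": {
      const [ax, ay] = evalNode(n.a, env);
      const [bx, by] = evalNode(n.b, env);
      return ax * bx + ay * by; // breakpoint-friendly math
    }
  }
}

// Toy "shader": N . up, evaluated for one simulated fragment.
const shade = dot2(input("normal"), vec(0, 1));
console.log(evalNode(shade, { normal: [0.6, 0.8] })); // 0.8
```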


There are plenty of options in native APIs; the problem is that browser vendors keep ignoring 3D tooling.

The only thing that exists is SpectorJS, and it is barely maintained.

So you're left with either maintaining a native implementation of the 3D code (and facing the challenge of distinguishing browser calls from web app calls), or falling back to classic print-the-pixel debugging.

I fully agree that it is crazy that Khronos and browser vendors don't see this as an issue, after a decade.


Safari has a (somewhat basic) canvas debugger which also works for WebGL. Firefox had one but it was removed "due to lack of use" (wtf Mozilla?): https://firefox-source-docs.mozilla.org/devtools-user/deprec...


There is good tooling available, but it tends to be vendor-specific and sometimes quite expensive. Here is Arm's tool: https://developer.arm.com/Tools%20and%20Software/Graphics%20...


If you’re ok with debugging shaders themselves outside an engine, SHADERed does an excellent job cross-platform: https://shadered.org/blog


Looks super promising.

I don't see an OS X client, and the web version seems quite buggy - a number of the examples I tried to open loaded infinitely (with errors in the console).

I'll give it a shot on Windows...

Edit: Yeah, this is certainly a step forward compared to what I've been doing.

Being able to choose a quad/triangle/pixel and step through breakpoints is fantastic.

Not the best interface / ux, but it's motivating.


Native graphics debuggers like those integrated into Xcode (for Metal) or Visual Studio (for D3D) allow shader step-debugging, as does RenderDoc (though AFAIK not for all 3D APIs).


My first thought as a technical artist is that TSL should be abstracted into a VPL like the ones most shader artists are used to, whether that's the nodes in Unreal, Blender, V-Ray, etc. They're already starting with the three.js playground, but this would allow traditional shader artists in gamedev, or really any 3D-related industry, to create really awesome shaders using the exact same workflow they're used to in other apps.



