For controlling what the CPU and RAM are doing? Yes. The graphics shader, on the other hand, is a pipeline architecture with extremely tight constraints on side-effects. That shader languages are procedural seems to me more an accident of history and association than a matter of optimal utility, and the most common error I see new shader developers make is assuming that C-style syntax implies C-style behaviors (like static variables or a global accumulator) that simply aren't there.
The way the C-style semantics interface with the behavior of the shader (such as shader output being generated by mutating specifically-named variables) seems very hacky, and smells like an abstraction mismatch.
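To make both points concrete, here's a minimal GLSL fragment shader sketch (the varying and variable names are just illustrative): there is no static storage or cross-invocation accumulator to fall back on, and the "return value" is produced by assigning to a specially declared output variable.

    #version 330 core
    // Minimal sketch of the misconception: each fragment invocation runs in
    // isolation, so there is no persistent state between invocations.

    // static int hitCount = 0;   // "static" is a reserved word in GLSL; even a
                                  // plain global here would be re-initialised per
                                  // invocation, never a shared running total

    in vec2 vUV;                  // hypothetical varying from the vertex shader
    out vec4 fragColor;           // output happens by writing this declared variable
                                  // (older GLSL: the built-in gl_FragColor)

    void main() {
        fragColor = vec4(vUV, 0.0, 1.0);   // the only "return value" a fragment has
    }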
Not exactly shaders, but for GPGPU work, Futhark [0] seems to show that a functional paradigm can work very well for producing performant and readable code.
> is a pipeline architecture with extremely tight constraints on side-effects
That was true 10 years ago. Now the constraints are merely tight, not extremely so: there are append buffers, random-access writeable resources, group-shared memory, etc. (see the compute sketch at the end of this comment).
> The way the C-style semantics interface to the behavior of the shader seems very hacky
I agree about GLSL, but HLSL and CUDA are better in that regard, IMO.
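For the "only tight now" point, here's a minimal GLSL compute shader sketch (buffer names are illustrative) using the GL-side equivalents: SSBOs for random-access writes, an atomic counter for append-style output, and workgroup-shared memory.

    #version 430
    // Sketch of the looser modern model: compute shaders get scattered writes,
    // group-shared memory, and append-style output via atomics -- side effects
    // the classic graphics pipeline didn't allow.

    layout(local_size_x = 64) in;

    layout(std430, binding = 0) buffer InputBuf   { float inData[]; };
    layout(std430, binding = 1) buffer OutputBuf  { float outData[]; };   // random-access writeable
    layout(std430, binding = 2) buffer AppendBuf  { float appended[]; };  // crude "append buffer"
    layout(std430, binding = 3) buffer CounterBuf { uint  appendCount; };

    shared float tile[64];   // group-shared memory, visible to the whole workgroup

    void main() {
        uint gid = gl_GlobalInvocationID.x;
        uint lid = gl_LocalInvocationID.x;

        tile[lid] = inData[gid];   // stage data in shared memory
        barrier();                 // sync the workgroup before reading a neighbour

        float v = tile[lid] + (lid > 0u ? tile[lid - 1u] : 0.0);
        outData[gid] = v;          // scattered write to an arbitrary offset

        // Append-style compaction: reserve a slot with an atomic, then write it.
        if (v > 1.0) {
            uint slot = atomicAdd(appendCount, 1u);
            appended[slot] = v;
        }
    }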