For controlling what the CPU and RAM are doing? Yes. The graphics shader, on the other hand, is a pipeline architecture with extremely tight constraints on side-effects. The fact that shader languages are procedural seems to me more an accident of history or association than optimal utility, and the most common error I see new shader developers make is thinking that C-style syntax implies C-style behaviors (like static variables, or a way to have a global accumulator) that just aren't there.
The way the C-style semantics interface to the behavior of the shader (such as shader output generated by mutating specifically-named variables) seems very hacky, and smells like an abstraction mismatch.
Not exactly shaders, but for GPGPU stuff, Futhark [0] seems to show that a functional paradigm can be very good for producing performant and readable code.
> is a pipeline architecture with extremely tight constraints on side-effects
That was true 10 years ago. Now they're just tight constraints, but not extremely so: there are append buffers, random-access writeable resources, group shared memory, etc.
> The way the C-style semantics interface to the behavior of the shader seems very hacky
I agree about GLSL, but HLSL and CUDA are better in that regard, IMO.
I would say "real time graphics" is one of the niches FP is not well suited for; most business software doesn't need to work at the level of the machine.
Ok here's a talk about making Haskell games that took place last week: https://keera.co.uk/blog/2019/09/16/maintainable-mobile-hask...
I don't deny that making games in Haskell is niche, but it's certainly possible. Frag was just an example I remembered (ten years is recent for an old git like me).
If I remember correctly, in that thesis the author mentioned explicitly that the game didn't run very fast. If you watch the video from 2008, the in-game stats list framerates >60 fps, but the game itself is very laggy. Maybe there is a separate renderer thread?
Come on, the "games" showcased here have the complexity level of a circa-2003 game, and they barely achieve 200 fps on modern hardware. When I look at similarly trivial things run with no vsync on my machine, it's >10000 fps.
That's just moving the goalposts. The games showcased are the same complexity as plenty of real-world commercial games that are making good money in 2019. If you're doing triple-A game development, maybe you need to get down to the metal, but for tons of games you'll be perfectly fine with FP.
Also worth noting that the idea is to use FP around stuff like the actual game logic, and then handle rendering details imperatively.
> The games showcased are the same complexity as plenty of real-world commercial games that are making good money in 2019
I mean, fucking todo apps are making "good money" in 2019; it doesn't mean they are good examples. These kinds of presentations should improve on the state of the art, not content themselves with something that was already possible a few decades ago. No one gets into game dev to make money; the point is to make better things than what already exists - be it gameplay-wise, story-wise, graphics-wise...
The Poker prototype could be from 30 years ago, and drops to 15 FPS on any game action! Arcadia is a neat toy at this point, but run far away if you are looking to do real-world commercial development.
Even assuming that's true (and it very well may be), the general topic wasn't games, and there are many places where "the norm" in programming as a whole differs from the norm in performance-sensitive areas.