These are not small pieces of code by any means :) Professional kernels used by the likes of AutoCAD and Siemens have forty to fifty years of development and hundreds of thousands of engineering hours behind them.
It’s not as simple as taking a mesh of 3D points and calling it a day. Typical kernels can add chamfers, fillets, spins, and constraints to all kinds of form representations based on mathematical descriptions of the target shapes. Getting that working reliably is super difficult!
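To make "fillets are hard" concrete, here's a toy 2D version of the problem: given a corner and a radius, find the arc tangent to both edges. Real kernels do this on B-rep surfaces in 3D with trimming, tolerance management, and degenerate cases, so this is purely illustrative; all names are mine.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

static Vec2 normalize(Vec2 a) {
    double len = std::hypot(a.x, a.y);
    return {a.x / len, a.y / len};
}

// Fillet a 2D corner: given the corner point and the directions of the two
// edges leaving it, return the center of the arc of radius r tangent to both
// edges, plus the distance from the corner to each tangent point.
struct Fillet { Vec2 center; double tangentDist; };

Fillet filletCorner(Vec2 corner, Vec2 u, Vec2 v, double r) {
    u = normalize(u);
    v = normalize(v);
    double cosA = u.x * v.x + u.y * v.y;   // cosine of the full corner angle
    double halfAngle = std::acos(cosA) / 2.0;
    double t = r / std::tan(halfAngle);    // corner -> tangent point
    double d = r / std::sin(halfAngle);    // corner -> arc center
    Vec2 bisector = normalize({u.x + v.x, u.y + v.y});
    return {{corner.x + bisector.x * d, corner.y + bisector.y * d}, t};
}
```

Even this trivial case already breaks down when the edges are nearly parallel or the radius is larger than an edge; scaling that robustness to freeform surfaces is where the engineering decades go.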
I'm in a rush so I can't look too closely now, but I have a few questions (and please forgive any stupid questions, I'm not a graphics dev, just a hobbyist):
What's the runtime like?
Is there an event loop driving the rendering? (who calls the `render` on each frame? are there hooks into that? )
FFI story?
Who owns the window pointer?
I'm interested in audio plugins, and VSTs (etc.) have a lot of constraints on what can be done around event loops and window management. JUCE is pretty much the de-facto solution there, but it's pretty old and feels crufty.
The limits on what audio plugins can do are not a function of the drawing toolkit, but of the fact that they do not own the event loop when the GUI runs in-process with the host. As long as that is the case, they will never own the event loop. In addition (and mostly related to this), the top-level window they appear in is owned by the host, which also inherently limits the plugin's role in window management.
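The ownership split described above can be sketched as follows. This mirrors the general shape of VST3/AU editor hosting, not any real SDK; all names here are hypothetical.

```cpp
#include <cassert>
#include <vector>

// The host owns both the event loop and the top-level window; the plugin
// only gets a parent handle to attach under, and UI callbacks driven by
// the host's loop. (Illustrative names, not a real plugin API.)
using WindowHandle = void*;

struct PluginEditor {
    WindowHandle parent = nullptr;
    int paintCount = 0;

    // Called by the host: "attach yourself under my window".
    void attach(WindowHandle hostWindow) { parent = hostWindow; }

    // Called from the host's event loop; the plugin never spins its own.
    void onIdle() { ++paintCount; }
};

struct Host {
    int dummyWindow = 0;                 // stands in for the real OS window
    std::vector<PluginEditor*> editors;

    void openEditor(PluginEditor& e) {
        e.attach(&dummyWindow);          // host hands out the parent handle
        editors.push_back(&e);
    }

    // The host's loop decides when plugins get CPU time for their UI.
    void runEventLoop(int frames) {
        for (int i = 0; i < frames; ++i)
            for (auto* e : editors) e->onIdle();
    }
};
```

Everything the plugin UI does has to fit through those two narrow channels: the parent handle and the host-timed callbacks, which is exactly why the drawing toolkit isn't the limiting factor.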
If you want more, use the capability built into LV2, AU and VST3 for out-of-process GUIs for a plugin (LV2 has had this for more than a decade). CLAP has, I think, abandoned plans to support this based on lack of uptake elsewhere.
I'd hardly call JUCE "pretty old", but then I'm a lot older than JUCE. And it's likely only crufty if you're more used to other styles of GUI toolkits; in terms of the "regular" desktop GUI toolkits, it's really not bad at all.
Yes, I think JUCE is great and very well made, but it drives you down a very narrow path: either use everything in the library, or fend for yourself (which I admit may be a normal experience for C++ devs). For instance, the ValueTrees frequently used for UI state are very powerful, but they're not very type safe (or thread safe), and they feel clunky compared to more contemporary reactive state management patterns like signals.
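For contrast with ValueTree's var-typed, string-keyed properties, a signal in the reactive sense can be as small as this: a typed value whose observers are notified on change, with the type checked at compile time. A minimal sketch, not any particular signals library:

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// A minimal typed "signal": holds a value of a known type T and notifies
// observers when it changes. The compiler checks T everywhere, unlike a
// string-keyed property bag where type errors surface at runtime.
template <typename T>
class Signal {
public:
    explicit Signal(T initial) : value_(std::move(initial)) {}

    const T& get() const { return value_; }

    void set(T v) {
        value_ = std::move(v);
        for (auto& obs : observers_) obs(value_);
    }

    void subscribe(std::function<void(const T&)> obs) {
        observers_.push_back(std::move(obs));
    }

private:
    T value_;
    std::vector<std::function<void(const T&)>> observers_;
};
```

This single-threaded sketch omits the hard parts (unsubscription, reentrancy, cross-thread delivery), but it shows the ergonomic gap: `gain.set(0.5f)` either compiles or it doesn't, whereas a mistyped property name or `var` cast in a ValueTree fails silently at runtime.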
I'm sure folks who use ValueTrees are happy, but I don't see much advancement to that pattern being shared in the JUCE forums. If y'all have some better tricks over in the Ardour project I'd love to know! (BTW, I'm a fan of y'all's work. I really enjoyed reading some of the development resources, like the essay on handling time [0]).
The host app owns the event loop. I don't foresee that changing even once we re-architect around WebGPU (allowing the Wasm guest to control shaders), as the host app is responsible for "driving" the render tree, including passing in state (like a timer used for animations). The host app owns the window pointer, as renderlets are always designed to be hosted in an environment (either an app or a browser). Open to feedback on this, though.
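A minimal sketch of that host-driven model, with all names hypothetical (this is not renderlet's actual API): the host owns the loop and pushes per-frame state, such as an animation clock, into the guest's render entry point.

```cpp
#include <cassert>

// Per-frame state the host passes down; the guest never owns a timer.
struct FrameState {
    double timeSeconds;   // host-owned animation clock
};

struct Guest {
    double lastTime = -1.0;
    int renderedFrames = 0;

    // The guest exposes render(); it never decides *when* to render.
    void render(const FrameState& state) {
        lastTime = state.timeSeconds;
        ++renderedFrames;
    }
};

// Host side: drives the render tree at its own cadence, advancing the
// clock and calling into the guest each frame.
void runHostLoop(Guest& guest, int frames, double dt) {
    FrameState state{0.0};
    for (int i = 0; i < frames; ++i) {
        state.timeSeconds = i * dt;   // host advances the timer
        guest.render(state);          // host calls into the guest
    }
}
```

Note the symmetry with the audio-plugin situation above: in both cases the embedded component is purely reactive, which is what makes it safe to host in an app or a browser.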
FFI is coming with the C API soon!
I don't know much about audio, but I see a ton of parallels: well-defined data flow across a set of components running arbitrary, real-time code. Simulations also come to mind.
> JUCE is pretty much the de-facto solution there,
Is it though? iPlug2 (née iPlug, née wdl-ol) seems pretty good too. JUCE stuff has a pretty distinct and slightly obnoxious look and feel that takes a fair bit of effort to strip out.
I feel like a cross platform Swift GUI would be a great thing for the world. Swift strikes a great balance between ergonomics and performance, and seems to be the only other mainstream language to have absorbed some of Rust's key features around borrowing references.
For the record, borrowed references are only going to be really usable in Swift 6 which isn't released yet.
That said, Swift's implementation of borrowing seems significantly more user-friendly than Rust's. While this is very much an advanced feature, I'd expect it to be actually used in many cases where in Rust folks would resort to working around the borrow checking (via things like indexing into arrays and such). As a result I expect it to be significantly more useful.
Browser Company sort of built this themselves for Arc, but it's really a separate SwiftUI-like Windows implementation (the Windows UI is separate code written in a similar style). It still feels like a technical preview, though, unfortunately.
I thought he was so witty and refreshing when I read Antifragile (which I will always regard as a masterpiece), but I feel like he's run out of ideas now and is just complaining about people he doesn't like.
I was a big believer in Haxe, it seemed perfect. The main issue I and many people had at its inception was that the ecosystem was very fragmented. The promise was "write once, run everywhere" (which is what we had with Flash), but each platform you wanted to actually compile on was a whole process, often undocumented, with lots of caveats, and having to hunt for third party community wrappers to do the native integrations you want etc.
This was ~10 years ago, I'm sure the current state of the ecosystem is very different (no idea), but this is the reason why I was never able to adopt it, and I heard the same thing from dozens of Flash creators at the time.
There's a lot of active research around rendering 2D vector graphics with GPU tessellation (Raph Levien's work, for instance), so it's pretty cool that they're shipping a product with this technique.
I've never used Rive, so I'm wondering if it's strictly for making cool animations or if it can be used for building dynamic UIs (the kind you might use an immediate mode GUI lib for)?
Bevy is written in Rust and according to https://github.com/rive-app/rive-bevy/, the backend used for the integration uses Vello (also Rust), not the Rive renderer. Could be that integrating Vello into the C++-based Godot would be finicky. With the Rive renderer open-sourced, maybe both Rive and Godot will see an integration using the Rive renderer?
One of the things that makes Rive great for dynamic UI components is the excellent state machine deeply supported by the editor: https://help.rive.app/editor/state-machine
While I've built some fairly complex UI with Rive, one area I haven't explored is programmatically adding elements or say changing UI text based on external events.
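The state-machine-driven approach is easy to picture with a toy version: named states, named inputs, and transitions guarded by an input. This is purely illustrative; Rive's real state machines also support triggers, numeric inputs, blend states, and layers.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// A tiny state machine in the spirit of Rive's: the UI sets inputs, and the
// machine decides which state (and thus which animation) is active.
class StateMachine {
public:
    explicit StateMachine(std::string initial) : current_(std::move(initial)) {}

    void addTransition(std::string from, std::string input, std::string to) {
        transitions_[{std::move(from), std::move(input)}] = std::move(to);
    }

    // Setting a boolean input may fire a transition from the current state.
    void setInput(const std::string& input, bool value) {
        if (!value) return;
        auto it = transitions_.find({current_, input});
        if (it != transitions_.end()) current_ = it->second;
    }

    const std::string& state() const { return current_; }

private:
    std::string current_;
    // (from state, input name) -> destination state
    std::map<std::pair<std::string, std::string>, std::string> transitions_;
};
```

The appeal for UI work is that the interaction logic lives with the artwork in the editor, and the embedding app only flips inputs, much like the host/guest split discussed earlier in the thread.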