This reminds me of node-based compositing, which is essentially standard in the film industry but, to the best of my knowledge, never made it to still-image editing applications. After doing things the nodal way, it’s hard for me to use Photoshop nowadays and have to bake in certain changes.
Oldie but goodie is Nodebox[0] and for data-driven/dynamic compositing see Cables.gl[1]. Kirell Benzi's work uses the latter and is nothing short of breathtaking[2].
Are you asking if it's well known? I don't think it's actually used in any context; it's so unstable that if you start it and do nothing, it will crash within a few seconds.
Indeed, it's weird that nobody brought node-based editing to regular image manipulation before our project, but that's our goal with Graphite. An equally important goal is making the node aspect optional for users: we're building tooling capable enough to abstract away the node graph for all conventional image editing use cases, letting users work purely with the WYSIWYG tools that secretly manage the node graph behind the scenes, at least until they're proficient enough with the traditional tools to start learning the power of node-based editing.
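To make the "tools secretly manage the node graph" idea concrete, here's a minimal Rust sketch; the types and the rectangle_tool_drag helper are hypothetical stand-ins, not our actual internals:

    // A minimal sketch (not Graphite's actual internals) of a WYSIWYG tool
    // that secretly records nodes instead of baking pixels.
    #[derive(Debug)]
    enum Node {
        Rectangle { x: f64, y: f64, w: f64, h: f64 },
        Fill { rgba: [u8; 4] },
    }

    #[derive(Default, Debug)]
    struct DocumentGraph {
        nodes: Vec<Node>,
    }

    impl DocumentGraph {
        // What a hypothetical "rectangle tool" drag does behind the scenes:
        // append re-editable nodes rather than mutate a pixel buffer.
        fn rectangle_tool_drag(&mut self, x: f64, y: f64, w: f64, h: f64) {
            self.nodes.push(Node::Rectangle { x, y, w, h });
            self.nodes.push(Node::Fill { rgba: [255, 0, 0, 255] });
        }
    }

    fn main() {
        let mut doc = DocumentGraph::default();
        doc.rectangle_tool_drag(10.0, 10.0, 100.0, 50.0);
        println!("{doc:?}"); // the graph survives; nothing was baked in
    }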
That said, we've been building our engine so far with a focus mostly on vector editing. There are some raster nodes, but they're pretty limited right now (no GPU support yet; more engineering to do before that's production-ready). The raster compositing and image manipulation nodes, tools, and infrastructure will be a big part of next year's focus on the roadmap (https://graphite.rs/features/#roadmap).
I wonder, where do SVG filters fall on the vector/raster spectrum? I really like that I can tune them hands-on in Inkscape (e.g. fractal noise + displacement map), and then use it anywhere that supports SVG. A little interactive demo from a while back:
We will have a large variety of filters, and a subset of them will be implementations of all the SVG filters. Separate from our regular raster render process that's used for displaying content to the screen, we'll have an SVG render process used for exporting the graph data to an SVG file. That process will be specifically designed to preserve the purity of the graph data representation, encoding all possible operations in the SVG format directly and resorting to progressive degradation as required to approximate things SVG can't natively represent. That might mean rasterizing some unsupported raster filters, but the ones SVG supports would be used natively.
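As a rough sketch of that progressive degradation step (hypothetical types and made-up logic, not our actual exporter), the exporter could branch on whether SVG has a native primitive for each operation:

    // Hypothetical sketch: pick a native SVG filter primitive when one
    // exists, otherwise degrade by rasterizing just that result.
    enum RasterOp {
        GaussianBlur { std_dev: f64 }, // SVG has feGaussianBlur
        FractalNoise,                  // SVG has feTurbulence
        ExoticFilter,                  // no SVG primitive for this one
    }

    enum SvgOutput {
        NativeFilter(String),     // stays editable in the exported SVG
        RasterizedImage(Vec<u8>), // baked pixels, embedded as an <image>
    }

    fn export_op(op: &RasterOp) -> SvgOutput {
        match op {
            RasterOp::GaussianBlur { std_dev } => SvgOutput::NativeFilter(
                format!(r#"<feGaussianBlur stdDeviation="{std_dev}"/>"#),
            ),
            RasterOp::FractalNoise => SvgOutput::NativeFilter(
                r#"<feTurbulence type="fractalNoise"/>"#.to_string(),
            ),
            // No SVG equivalent: fall back to baking this result to pixels.
            RasterOp::ExoticFilter => SvgOutput::RasterizedImage(rasterize(op)),
        }
    }

    // Stand-in for a real raster render of the unsupported operation.
    fn rasterize(_op: &RasterOp) -> Vec<u8> {
        Vec::new()
    }

    fn main() {
        match export_op(&RasterOp::FractalNoise) {
            SvgOutput::NativeFilter(svg) => println!("native: {svg}"),
            SvgOutput::RasterizedImage(px) => println!("rasterized {} bytes", px.len()),
        }
    }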
First in the sense that there's nothing in the industry that's a real product. Certainly there are various experimental concepts people have made as a hobby in a limited capacity, but they don't really count as generally useful tool suites. There's nothing even from the commercial software side of the industry either, which I find surprising. But it gives our project an advantage.
Wow, looking at the demos on the website, I am insanely impressed at just how fast the editor loads into them, and just how snappy the procedural editing is, on my mid-range smartphone no less. That's genuinely inspiring! As someone who this week has had the itch to A) learn rust, B) use webassembly for something, and C) pick up Svelte for something, this was a really cool thing to see this morning :)
Thanks, we'd love to assist you in getting involved with contributing to our project! It's something we take pride in: making it more accessible than most other open source projects to begin contributing to. Come join our Discord (link is on the website home page) and introduce yourself like you did in your comment here. Cheers!
I believe that's plausibly within the range of possible outcomes, which isn't true for any other project like Gimp, whose window of opportunity rose and then set forever.
I’ve been interested in those “boxes and lines” frameworks for a long time: for instance, data transformation tools like Alteryx and KNIME, and also LabVIEW.
Node-based systems are used extensively in music for synthesis and effects, and have been ever since it was feasible to process digital audio in real time. In the ’60s, electronic music pioneers patched together analog oscillators on a patchboard. Today musicians do the same on a screen, with digital operators accurate and stable enough to build systems (like the Yamaha DX7) that couldn’t really be built from analog parts.
It is clear how to write a compiler for that kind of graph, and it's probably less of a struggle than manually writing a “shader function”

    int16 amplitude(int32 time)

that composes a number of large-scale functions (resample, decimate, reverb, add, …) implemented using various strategies. Operator graphs can be compiled to target CPU, SIMD, DSP, MPI, GPU, Spark, etc.
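To make that concrete, here's a small interpreter-style sketch of such an operator graph in Rust (a made-up operator set; a real compiler would flatten this recursion into straight-line code for whichever target):

    // Hypothetical sketch: an audio operator graph evaluated per sample.
    enum Op {
        Sine { freq: f64 },
        Gain { input: Box<Op>, amount: f64 },
        Add { a: Box<Op>, b: Box<Op> },
    }

    // The graph as a whole behaves like the amplitude(time) "shader
    // function" from the comment above.
    fn amplitude(op: &Op, t: f64) -> f64 {
        match op {
            Op::Sine { freq } => (2.0 * std::f64::consts::PI * freq * t).sin(),
            Op::Gain { input, amount } => amplitude(input, t) * amount,
            Op::Add { a, b } => amplitude(a, t) + amplitude(b, t),
        }
    }

    fn main() {
        // Two detuned sines mixed together, like patching two oscillators.
        let patch = Op::Add {
            a: Box::new(Op::Gain { input: Box::new(Op::Sine { freq: 440.0 }), amount: 0.5 }),
            b: Box::new(Op::Gain { input: Box::new(Op::Sine { freq: 443.0 }), amount: 0.5 }),
        };
        let sample_rate = 48_000.0;
        let samples: Vec<i16> = (0..48_000)
            .map(|n| (amplitude(&patch, n as f64 / sample_rate) * i16::MAX as f64) as i16)
            .collect();
        println!("rendered {} samples", samples.len());
    }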
The dominant paradigm in graphics is still shader programs, however.
which allows programmatic 3D modeling using nodes/wires. It exposes _all_ of OpenSCAD (last I checked) and is quite extensible (I use it to control a Python-enabled version of OpenSCAD https://pythonscad.org/ in an effort to make DXFs and G-code: https://github.com/WillAdams/gcodepreview )
Most of what people do in Photoshop is easier in a node-based system anyway, since you can make a non-destructive graph of operations. The biggest downside is that the best program is Nuke, which is expensive, but anyone can use Fusion for free or pay a few hundred dollars for a lower tier of Houdini and use its image manipulation tools.
Houdini also lets you build VEX shaders out of nodes, which is basically a more polished version of this interface, letting you manipulate pixels more directly and make your own nodes.
Video consists of many frames, and you have to apply the same (but slightly varying) transformations to each frame. Building a pipeline (via nodes or not) to describe these repeated changes is worth the extra effort.
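For instance, a toy sketch of that idea in Rust (made-up frame type and operations, not any real video library): the edit is described once as a pipeline value, then mapped over every frame.

    // Hypothetical sketch: one description of the edit, applied to every frame.
    type Frame = Vec<u8>; // stand-in for real pixel data

    // Each stage in the pipeline is just a pure Frame -> Frame function.
    fn build_pipeline() -> Vec<Box<dyn Fn(Frame) -> Frame>> {
        vec![
            Box::new(|f| f), // e.g. color correct (identity stand-in)
            Box::new(|f| f), // e.g. blur (identity stand-in)
            Box::new(|mut f| { f.reverse(); f }), // e.g. some transform
        ]
    }

    fn main() {
        let pipeline = build_pipeline();
        let frames: Vec<Frame> = vec![vec![1, 2, 3]; 240]; // 10s of 24fps video
        // The same graph of operations runs on every frame.
        let out: Vec<Frame> = frames
            .into_iter()
            .map(|frame| pipeline.iter().fold(frame, |acc, op| op(acc)))
            .collect();
        println!("processed {} frames", out.len());
    }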
Outside of batch jobs, image editing tasks are generally one-offs with image-specific actions, and building a change pipeline is unnecessary work.
At the end of the day, the two workflows are different tools, like a hammer vs. a mallet.
Afaik there is a small photo/video editor called CameraBag which displays adjustments and filters as little boxes laid out in a row, and you can enable or disable them.
Forgive my ignorance, but does what you’re talking about with “node-based compositing” basically boil down to how Blender does its editing, in a way?
Nuke and DaVinci Resolve are the industry standards for compositing. Node-based editing graphs are often called “non-destructive” or “procedural”. It’s basically pure functions in a programming sense.
Blender geometry nodes take this approach for modeling. The rest of Blender is destructive, in that any operation permanently changes the state.
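In code terms, that distinction might be sketched like this (generic Rust, not Blender's actual API):

    // Destructive: the operation overwrites the only copy of the state.
    fn scale_in_place(mesh: &mut Vec<f64>, factor: f64) {
        for v in mesh.iter_mut() {
            *v *= factor; // original values are gone after this
        }
    }

    // Procedural / non-destructive: a pure function; the input is untouched,
    // so you can re-run the graph with different parameters any time.
    fn scaled(mesh: &[f64], factor: f64) -> Vec<f64> {
        mesh.iter().map(|v| v * factor).collect()
    }

    fn main() {
        let mesh = vec![1.0, 2.0, 3.0];

        // Procedural: the original is still there; re-run with any factor later.
        let bigger = scaled(&mesh, 2.0);
        println!("{mesh:?} -> {bigger:?}");

        // Destructive: after this call the previous values are unrecoverable.
        let mut baked = mesh.clone();
        scale_in_place(&mut baked, 2.0);
        println!("baked: {baked:?}");
    }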
I'm still butthurt about when Blender introduced a node editor and confused me; I lost all my Blender expertise at that point. (The persistence of a vestigial old way of doing things only makes it worse, because of course I want to try to do everything without nodes, and then I don't have any guidance, because all up-to-date docs and tutorials talk about nodes all the time. Nodes! Ruining everything!)