We need visual programming. No, not like that (sbensu.com)
1178 points by stopachka 60 days ago | 494 comments



As someone with a hardware background, I'll throw in my $0.02. The schematic capture elements to connect up large blocks of HDL with a ton of I/O going everywhere are one of the few applications of visual programming that I like. Once you get past defining the block behaviors in HDL, instantiation can become tedious and error-prone in text, since the tools all kinda suck with very little hinting or argument checking, and the modules can and regularly do have dozens of I/O arguments. Instead, it's often very easy to map the module inputs to schematic-level wires, particularly in situations where large buses can be combined into single fat lines, I/O types can be visually distinguished, etc. IDE keyboard shortcuts also make these signals easy to follow and trace as they pass through hierarchical organization of blocks, all the way down to transistor-level implementations in many cases.

I've also always had an admiration for the Falstad circuit simulation tool[0], as the only SPICE-like simulator that visually depicts magnitude of voltages and currents during simulation (and not just on graphs). I reach for it once in a while when I need to do something a bit bigger than I can trivially fit in my head, but not so complex that I feel compelled to fight a more powerful but significantly shittier-to-work-with IDE to extract an answer.

Schematics work really well for capturing information that's independent of time, like physical connections or common simple functions (summers, comparators, etc). Diagrams with time included sacrifice a dimension to show sequential progress, which is fine for things that have very little changing state attached or where query/response is highly predictable. Sometimes, animation helps restore the lost dimension for systems with time-evolution. But beyond trivial things that fit on an A4 sheet, I'd rather represent time-evolution of system state with timing diagrams. I don't think there are many analogous situations in typical programming applications that call for timing diagrams, but they are absolutely foundational for digital logic applications and low-level hardware drivers.

[0]: https://www.falstad.com/circuit/


As much as I prefer to do everything in a text editor and use open-source EDA tools/linters/language servers, Xilinx's Vivado deserves major credit from me for its block editor, schematic view, and implementation view.

For complex tasks like connecting AXI, SoC, memory, and custom IP components together, things like bussed wires and ports, as well as GUI configurators, make the process of getting something up and running on a real FPGA board much easier and quicker than if I had to do it all manually (of course, after I can dump the Tcl trace and move all that automation into reproducible source scripts).

I believe the biggest advantage of the Vivado block editor is the "Run Block Automation" flow that can quickly handle a lot of the wire connections and instantiation of required IPs when integrating an SoC block with modules. I think it would be interesting to explore if this idea could be successfully translated to other styles of visual programming. For example, I could place and connect a few core components and let the tooling handle the rest for me.

Also, a free idea (or I don't know if it's out there yet): an open-source HDL/FPGA editor or editor extension with something like the Vivado block editor that works with all the open-source EDA tools and has all the same bells and whistles, including an IP library, programmable IP GUI configurators, bussed ports and connections, and block automation. You could even integrate different HDL front-ends, as there are many more now than in the past. I know Icestudio is a thing, but that seems designed for educational use, which is also cool to see! I think a VSCode webview-based extension could be one easy way to prototype this.


> Also, a free idea (or I don't know if it's out there yet): an open-source HDL/FPGA editor or editor extension with something like the Vivado block editor that works with all the open source EDA tools with all the same bells

"Free idea: do all this work that it takes hundreds of people to do. Free! It's even free! And easy!"

Lol you must be one of those "the idea is worth more than the implementation" types.


> The schematic capture elements to connect up large blocks of HDL with a ton of I/O going everywhere are one of the few applications of visual programming that I like.

Right. Trying to map lines of code to blocks 1 to 1 is a bad use of time. Humans seem to deal with text really well. The problem comes when we have many systems talking to one another: skimming through text becomes far less effective. Being able to connect 'modules' or 'nodes' together visually (whatever those modules are) and rewire them seems to be a better idea.

For a different take that's not circuit-based, see how shader nodes are implemented in Blender. That's not (as far as I know) a Turing-complete language, but it gives one an idea of how you can connect 'nodes' together to perform complex calculations: https://renderguide.com/blender-shader-nodes-tutorial/

A more 'general purpose' example is the blueprint system from Unreal Engine. Again we have 'nodes' that you connect together; you don't create the nodes themselves visually, but you wire them up to achieve the behavior you want: https://dev.epicgames.com/documentation/en-us/unreal-engine/...

> I don't think there's many analogous situations in typical programming applications that call for timing diagrams

Not 'timing' per se (although those exist), but situations where you want to see changes over time across several systems are incredibly common, yet existing tooling is pretty poor for that.


"Schematics work really well for capturing information that's independent of time, .." This spells out what always irked me about graphical software.


There's no reason they can't instead be used to show how data transforms: the sort of 'flow wall' someone sees in a large industrial setting (think water/wastewater treatment plants, power plants, chemical plants, etc.) or the process mockup diagrams for spreadsheet-heavy modpacks (I'm looking at you, GregTech New Horizons).

Data can instead be modeled as inputs which transform as they flow through a system, and possibly modify the system.


Building block diagrams in Vivado to whip up quick FPGA designs was a pleasant experience. Unfortunately the biggest problem wasn't the visual editor. The provided implementations of the AMD/Xilinx IP cores are terrible and not on par with what you would expect first party support to be. The other problem was that their AXI subordinate example was trash and acted more like a barrier to get started. What they should have done is acquire or copy airhdl and let people auto generate a simpler register interface that they can then drag and drop.


Amazing, thank you for taking the time


i remember using the falstad sim constantly at university a decade ago. super helpful and so much more intuitive than any spice thing. cool to see that it's still around and used


I think we need to differentiate: Visualize a program vs. Visually program.

This post seems to still focus on the former, while an earlier HN post on Scoped Propagators https://news.ycombinator.com/item?id=40916193 showed what's possible with the latter. It specifically showed what's possible when programming with graphs.

Bret Victor might argue visualizing a program is still "drawing dead fish".

The power of visual programming is diminished if the programmer aims to produce source-code as the final medium and only use visualization on top of language. It would be much more interesting to investigate "visual first" programming where the programmer aims to author, and more importantly think, primarily in the visual medium.


I think there's a very important real-world nuance here.

What you want with a programming language is to handle granular logic in a very explicit way (business requirements, precise calculations, etc.). What this article posits, and what I agree with, is that existing languages offer a more concise way of doing that.

If I wanted to program in a visual way, I'd probably still want / need the ability to do specific operations using a written artifact (language, SQL, etc). Combining them in different ways visually as a first-class operation would only interest me if it operated at the level of abstraction that visualizations currently operate at, many great examples of which are offered in the article (multiple code files, system architecture, network call).


My son who started programming at 7 pretty quickly moved on from languages like Scratch and Tinker. To the extent to which he uses them at all, it’s mostly to play games that are available in them. I’m not entirely convinced that he couldn’t have just started with Javascript or Python. It’s not like learning the syntax for a for loop¹ is that much harder than arranging the blocks in one of those block languages.

1. Although I must confess that I have a mental block about the second and third components of a C-style for-loop and whenever possible, I avoid them if I can.


> Although I must confess that I have a mental block about the second and third components of a C-style for-loop and whenever possible, I avoid them if I can.

Glad I'm not the only one! Despite programming for over a decade, I still mix up the order of `update` and `condition` sometimes in `(initialization, condition, update)` for loops. Probably because I spent too much time with Python and became so accustomed to only using `for x in y` style loops.
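
For what it's worth, the mapping is easier to remember if you desugar the C-style loop into a while loop. A minimal sketch in Python (which natively only has the `for x in y` form):

    items = ["a", "b", "c"]

    # C-style for (initialization; condition; update), desugared:
    i = 0                  # initialization: runs once
    while i < len(items):  # condition: checked before every iteration
        print(items[i])
        i += 1             # update: runs after every iteration

    # The foreach form hides all three components:
    for item in items:
        print(item)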


It has definitely pushed me to prefer foreach style loops in my coding which, I think, makes the code in general better (and, when writing in rust, faster as the generated code is able to eschew bounds checks).


tbh it's pretty much a big thing when a 7-year-old is interested in any programming language. i am not a parent, but in this age of ipad kids, that must be really impressive as a parent.


Is Visual Basic still a thing? That was my start and it always felt like a good intro language. It was limiting but you could still make "real" desktop apps.


The dead fish metaphor is so interesting because programs aren’t static objects, they move.

Most visual programming environments represent programs in a static way, they just do it with pictures (often graphs) instead of text.

Perhaps there is something to be discovered when we start visualizing what the CPU does at a very low level, as in moving and manipulating bits, and then build visual, animated abstractions on top of that.

A lot of basic bit manipulations might be much clearer that way, like shifting, masking, etc. I wonder what could be built on top to get a more bird's-eye view.
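
As a rough sketch of what such a tool could animate, here is the kind of step-by-step bit trace one can already fake in Python; the helper is invented for illustration:

    def show(label, value, width=16):
        # Render the value as fixed-width binary so each step lines up.
        print(f"{label:>16}: {value:0{width}b}")

    value = 0xBEEF
    show("value", value)
    show("value >> 8", value >> 8)                # shift the high byte down
    show("mask 0xFF", 0xFF)
    show("(v >> 8) & 0xFF", (value >> 8) & 0xFF)  # isolate the high byte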


Yes, most diagrams are frustratingly static, even those that overlay step-by-step information on top.

I've found "diagrams" in my head, the mental models I use to reason about a problem, are not static. They are abstract machines, with cogs and gears and clutches and input shafts and output shafts, colors and arrows and action and movement, that work like a map for finding the solution, either directly, or at least leading me in the right direction to find somewhere to easily test my map/model/diagram/machine and probably improve it if I find somewhere it's less than ideal.

The issue is, many of those are not models or diagrams I ever got out of a book or a website. They're all painstakingly, agonizingly built up over years of failure and years of struggle and troubleshooting and riding blind into the thorny thicket of an issue, blindly feeling around for a solution and if I'm lucky, integrating that into a part of my mental model.

Even something like reading XML or JSON involves a good deal of visualized movement or color to be able to parse it quickly, something that no spec or tutorial ever bothers with, if they even could.

All I know is pointers never made sense until I had a class on assembly and was able to step through my program with a view of my entire program memory open and being forced to choose addressing mode for a memory reference before it clicked. Draw all the arrows you want in a textbook, but it wasn't until I saw it moving that I understood the machine.

Same with more complex stuff like Kubernetes. Like ok, draw me a shit load of boxes with labels on them like "LoadBalancer" and "Storage" in them, but if you don't relate that to the 500 line YAML I'm told to blindly apply, I still don't have a model of how any of it works.


The pointer thing is so relatable.

I don’t think we share exactly the same inner visuals, but they all relate to some intuitive understanding.

For me there are these unique and abstract visuals, but they often blend in with code (text). When I'm intimate with a program I can often visualize line by line and fix bugs or change its behavior.

However, the thing that's most removed from textual representation is data, memory, I/O (streams etc.), and it all moves.


> Yes, most diagrams are frustratingly static

Most source code is static. In case you want to show a diff, you normally do this with a side-by-side view, regardless of whether you show the diff as textual source code, or as two diagrams.

The transformations you sometime see in Youtube videos of moving and removing small bits of code to e.g. show the differences between a piece of functionality in an object oriented vs functional language are only useful because they require your eyes to follow relatively few points of interest.


Source code is of course a static thing. It's text. It only changes during commits.

Code is dynamic. It runs. The abstractions in the machine are doing things, which are changing things over time, in many different places, and it can be difficult to keep those movements all in your head without either a ton of practice, or lots of practice and some decent foundation to build off of.


> what the CPU does at a very low level,

Careful with what you wish for. Below the ISA abstractions there are endless nightmares of realities created and destroyed, time flowing in multiple directions, and side effects of other realities you can almost see, but won’t.


> Bret Victor might argue visualizing a program is still "drawing dead fish".

> The power of visual programming is diminished if the programmer aims to produce source-code as the final medium and only use visualization on top of language.

I disagree. We frequently break up large systems into chunks like modules, or micro-services, or subsystems. Often, these chunks' relationships are described using diagrams, like flowcharts or state transition diagrams, etc.

Furthermore, quite often there are zero direct code references between these chunks. Effectively, we are already organizing large systems in exactly the fashion the op is proposing. Inside each chunk, we just have code. But at a higher level viewpoint, we often have the abstraction described by a diagram. (Which is often maintained manually, separate from the repo.)

What exactly are the disadvantages here?


> We frequently break up large systems into chunks like modules, or micro-services, or subsystems. Often, these chunks' relationships are described using diagrams, like flowcharts or state transition diagrams, etc.

We frequently break up large systems into chunks like modules, or micro-services, or subsystems. Often, these chunks' relationships are documented using diagrams on a high level (like flowcharts or state transition diagrams etc.), but are not executable.

Fixed it for you.


> but are not executable.

Fixed it for you.

Dude, if you're saying the flow in the diagram is not executable, as a blanket statement, then are you saying all of the programming projects you've been in are either monolithic systems, or have all failed?


> (Which is often maintained manually, separate from the repo.)

To me, this is the interesting avenue for investigation.

Rather than go from visualization -> code, how can we take an existing visualization that represents some underlying system (a code base, module dependencies, service architecture, a network topology, etc) and easily update the representation as the underlying system changes...


>I think we need to differentiate: Visualize a program vs. Visually program.

Not necessarily; programming with visual DSLs is already a thing in the field of language-oriented programming. Visual programming refers to a different thing, but it's not impossible to make a connection between the two fields.

Visual programming is now more like an umbrella term for projects (and research) exploring new ways of programming beyond the textual representation. It would probably be better to call it non-textual programming, because some of its ideas are not tied to visuality, like structural editing.

Visual programming environments offer a concrete way to program general-purpose code; DSLs offer a very specific language to program in a domain (language-oriented programming offers ways to invent these DSLs). Often visual programming is applied to a specific domain, as an alternative to textual scripting languages. Maybe this confuses people into thinking they are less powerful, not general-purpose.

What is described in the article is a visual DSL based on diagrams, used as the source for the programming itself (which is exactly the same as UML). But the whole thing is not well thought out, and I think it only serves the purpose of dunking on visual programming, or on the people who are working on it for "not understanding what professional programmers need".


> The power of visual programming is diminished if the programmer aims to produce source-code as the final medium

Why would that be true?

> It would be much more interesting to investigate "visual first" programming where the programmer aims to author, and more importantly think, primarily in the visual medium.

What advantages would that give? The disadvantages are so big that it will basically never happen for general purpose programming. Making a brand new language make any sort of inroads in finding a niche takes at least a decade, and that's usually with something updating and iterating on what people are already doing.


> I think we need to differentiate

My read of this post (especially the title) is the author does differentiate normally but chose to blur the lines here for a narrative hook & a little bit of fun.


I think a quite interesting starting point is a general-purpose visual medium which is good enough to be used for programming, too.

Aka: a more visual/structured medium for some of the use cases where we use text today.


One can start from typical UIs and start tinkering from there to see why they aren't good enough for programming.

A good first step is to notice that we don't even have static data objects. UIs are still full of them (forms), but you cannot copy-paste or store them as a whole; everything is ad hoc. Now imagine that every form could be handled like a Unity scriptable object. And maybe something like what prefab variants do: data inheritance.


Great article. Any sufficiently complex problem requires looking at it from different angles in order to root out the unexpected and ambiguous. Visualizations do exactly that.

This is especially important in the age of AI coding tools and how coding is moving from lower level to higher level expression (with greater levels of ambiguity). One ideal use of AI coding tools would be to be on the lookout for ambiguities and outliers and draw the developer's attention to them with relevant visualizations.

> do you know exactly how your data is laid out in memory? Bad memory layouts are one of the biggest contributors to poor performance.

In this example from the article, if the developer indicates they need to improve performance, or the AI evaluates the code and thinks it's suboptimal, it could bring up a memory layout diagram to help the developer work through the problem.
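
To make the memory-layout point concrete, here is a small sketch using Python's ctypes to show how mere field ordering changes a struct's size through padding (the struct and field names are invented):

    import ctypes

    class Padded(ctypes.Structure):
        # char, double, char: alignment inserts 7 + 7 bytes of padding
        _fields_ = [("a", ctypes.c_char),
                    ("b", ctypes.c_double),
                    ("c", ctypes.c_char)]

    class Reordered(ctypes.Structure):
        # Same fields, largest first: only 6 bytes of tail padding
        _fields_ = [("b", ctypes.c_double),
                    ("a", ctypes.c_char),
                    ("c", ctypes.c_char)]

    print(ctypes.sizeof(Padded))     # typically 24 on x86-64
    print(ctypes.sizeof(Reordered))  # typically 16 on x86-64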

> Another very cool example is in the documentation for Signal's Double Ratchet algorithm. These diagrams track what Alice and Bob need at each step of the protocol to encrypt and decrypt the next message. The protocol is complicated enough for me to think that the diagrams are the source of truth of the protocol

This is the next step in visualizations: moving logic from raw code to expressions within the various visualizations. But we can only get there bottom-up, solving one particular problem, one method of visualization at a time. Past visual code efforts have all been top-down universal programming systems, which cannot look at things in all the different ways necessary to handle complexity.


> Any sufficiently complex problem requires looking at it from different angles in order to root out the unexpected and ambiguous. Visualizations do exactly that.

To me, this is an underappreciated tenet of good visualization design.

Bad/lazy visualizations show you what you already know, in prettier form.

Good visualizations give you a better understanding of things-you-don't-know at the time of designing the visualization.

I.e. If I create a visualization using these rules, will I learn some new facts about the "other stuff"?


agreed, though bad/lazy visualizations can still be useful as a cache. you know it today, but you might forget tomorrow


> Bad memory layouts are one of the biggest contributors to poor performance.

This will depend on the application, but I've encountered far more of the "wrong data structure / algorithm" kind of problem, like iterating over a list to check if something's in there when you could just make a map ("we need ordering": sure, we have ordered maps!).
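
A minimal sketch of that fix in Python, where a plain dict doubles as the ordered map since it preserves insertion order:

    names = ["ann", "bob", "cat"]          # toy data

    # O(n) per lookup: scans the whole list every time
    found = "cat" in names

    # O(1) per lookup after building the map once; iteration
    # order still matches insertion order, so "we need ordering"
    # is satisfied too
    index = {name: i for i, name in enumerate(names)}
    found = "cat" in index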


The social problem with visual programming is indeed the same as with "Mythical Non-Roboticist". But there are quite a few issues on the technical side too (see the sketch after this list):

- Any sufficiently advanced program has a non-planar dataflow graph. Yes, "pipelines" are fine, but for anything beyond that you are going to need labels. And with labels it becomes just like a plain old non-visual program, only less structured.

- Code formatting becomes much more important and much harder to do. With textual program representation it is more or less trivial to do auto-formatting (and the code is somewhat readable even with no formatting at all). Yet we still don't have a reliable way to lay out a non-trivial graph so that it doesn't look like a spaghetti bowl. I find UML state machines very useful and also painful, because after every small edit I have to spend ten minutes fixing layout.

- Good data/program entry interfaces are hard to design, and novel tools rarely do a good job of it the first time. Most "visual" tools have a total disaster for a UI, vs. text editors that were incrementally refined for some 70 years.
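
To make the planarity point concrete, a small sketch using networkx (assuming it is installed): three producers each feeding three consumers form the complete bipartite graph K(3,3), which provably cannot be drawn without crossing wires. The node names are invented:

    import networkx as nx

    g = nx.Graph()
    for src in ["parse", "validate", "fetch"]:
        for dst in ["log", "render", "store"]:
            g.add_edge(src, dst)

    is_planar, _ = nx.check_planarity(g)
    print(is_planar)  # False: no 2D layout avoids crossings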


+1

I'd add versioning and diff tools as another critical advantage for text. If your visual tool can't provide a superior diff experience, then it's dead on arrival for most serious projects.


I have a hard time trying to convince people to use things like PlantUML to "write" diagrams, but Gliffy is much too popular.


I wish Github READMEs had native PlantUML support. My go to is MermaidJS for that reason.


Mermaid is a bit more limited.


> Any sufficiently advanced program has non-planar dataflow graph.

For some reason this reminded me of the elevated rails coming in the next Factorio update. Maybe visual editors need something similar? Even Logisim can distinguish between a node (three or more wires join) and two wires that just cross without interacting.


I mean it's easy to make the compiler see the crosses, but it's much harder for the user to trace these (and parallel busses too).


> painful because after every small edit I have to spend ten minutes fixing layout.

PlantUML solves this.


I disagree. As someone who likes PlantUML specifically because it lets me store diagrams as code, I too frequently end up spending just as much time trying to indirectly coerce the thing into the layout I want since it won’t let me position things explicitly.


Fair enough, I guess we have different opinions on what "fixing the layout" really means :)


I am surprised I have not seen LabView mentioned in this thread. It is arguably one of the most popular visual programming languages after Excel and I absolutely hate it.

It has all the downsides of visual programming that the author mentions. The visual aspect of it makes it so hard to understand the flow of control. There is no clear left to right or top to bottom way of chronologically reading a program.


I agree.

LabView's shining examples would be trivial Python scripts (aside from the GUI tweaking). However, its runtime interactive 2D graph/plot widgets are unequaled.

As soon as a “function” becomes slightly non trivial, the graphical nature makes it hard to follow.

Structured data with the “weak typedef” is a minefield.

A simple program to solve a quadratic equation becomes an absolute mess when laid out graphically. Textually, it would be a simple 5-6 line function that is easy to read.

Source control is also a mess. How does one “diff” a LabView program?


When I had some customers working with it a few years ago, they were trying to roll out a visual diff tool that would make source control possible.

I don't know if they ever really delivered anything or not. That system is such an abomination it drove me nuts dealing with it, and dealing with scientists who honestly believed it was the future of software engineering and all the rest of us were idiots for using C++.

The VIs are really nice, when you're connecting them up to a piece of measurement hardware to collect data the system makes sense for that. Anything further and it's utter garbage.


> How does one "diff" a LabView program?

Take a look at FME, another visual 'programming language'. They've done a lot of work with their git integration, including diffing and handling merge conflicts.

https://docs.safe.com/fme/html/FME-Form-Documentation/FME-Fo...


Python's equivalent of LabView would be Airflow. Both solve the same CS problem (even though the applications are very different).

Airflow is almost universally famous for being a confusing, hard-to-grasp framework, but nobody can actually point to anything better. Still, it's incomparably better than LabView; they're not even in the same race.


> Source control is also a mess. How does one “diff” a LabView program?

With LabVIEW, I'm not sure you can. But in general, there are two ways: either by doing a comparison of the underlying graphs of each function, or working on the stored textual representations of the topologically sorted graphs and comparing those. On a wider view, in general, as different versions of any code are nodes in a graph, a visual versioning system makes sense.
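
A rough sketch of the second approach in Python: serialize each graph in a canonical, topologically sorted text form, then reuse an ordinary line diff. The node names and edge encoding are invented for illustration:

    import difflib
    from graphlib import TopologicalSorter

    def canonical(graph):
        # graph: {node: set of nodes it depends on}
        order = TopologicalSorter(graph).static_order()
        return [f"{n} <- {sorted(graph.get(n, []))}" for n in order]

    v1 = {"out": {"add"}, "add": {"a", "b"}}
    v2 = {"out": {"mul"}, "mul": {"a", "b"}}

    for line in difflib.unified_diff(canonical(v1), canonical(v2),
                                     "v1", "v2", lineterm=""):
        print(line)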


> How does one “diff” a LabView program?

LabVIEW has a built-in differ.


And Simulink. I lost years in grad school to Simulink, but it is very nice for complex state machine programming. It’s self documenting in that way. Just hope you don’t have to debug it because that’s a special hell.


I quite like Simulink because it's designed for simulating physical systems which are naturally quite visual and bidirectional. Like circuit diagrams, pneumatics, engines, etc. You aren't writing for loops.

Also it is actually visually decent unlike LabVIEW which looks like it was drawn by someone who discovered MS Paint EGA edition.


Simulink is based on the block diagram notation used in control theory for decades earlier - before personal computers and workstations. The notation is rigorous enough you can pretty much pick up a book like the old Electro-Craft motor handbook (DC Motors Speed Controls Servo Systems), enter the diagrams into Simulink, and run them. With analogous allowances to how you would enter an old schematic into a SPICE simulator.

LabView was significantly more sui generis and originated on Macintosh about a decade earlier. I don't hate it but it really predates a lot of more recent user experience conventions.


This is exactly why a visual representation of code can be useful for analyzing certain things, but will rarely be the best (or even preferred) way to write code.

I think a happy medium would be an environment where you could easily switch between "code" and "visual" view, and maybe even make changes within each, but I suspect developers will stick with "code" view most of the time.

Also, from the article: > Developers say they want "visual programming"

I certainly don't. What I do want is an IDE which has a better view into my entire project, including all the files, images, DB, etc., so it can make much better informed suggestions. Kind of like JetBrains on steroids, but with better built-in error checking and autocomplete suggestions. I want the ability to move a chunk of code somewhere else, and have the IDE warn me (or even fix the problem) when the code I move now references out-of-scope variables. In short, I want the IDE to handle most of the grunt work, so I can concentrate on the bigger picture.


Most industrial automation programming happens in an environment similar to LabView, if not LabView itself. DeltaV, Siemens, Allen-Bradley, etc. Most industrial facilities are absolutely full of them with text-based code being likely a small minority for anything higher level than the firmware of individual PLCs and such.


A lot of these environments inherit a visual presentation style (ladder logic) that comes from the pre-computer era, and that works extremely well for electrical schematics when conveying asynchronous conditional behaviors to anyone, even people without much of a math background. There are a lot of more advanced functions these days that you write in plain C code in a hierarchical block, mostly for things like motor control.


I like function block on the Schneider platform for process control with more analog values than Boolean. It visualizes the inputs, control loop, and output nicely.

Numeric values in ladder feel a bit kludgey.


These are standardized IEC 61131-3 languages https://en.wikipedia.org/wiki/IEC_61131-3

Ladder, SFC and FBD are all graphical languages used to program PLCs. Ladder is directly based on electrical ladder schematics and common in the USA. The idea was that electricians and plant technicians who understood ladder schematics could now program and troubleshoot industrial computers. SFC and FBD were more common in Europe, but nowadays you mostly see Structured Text, a Pascal dialect (usually with bolted-on vendor OOP lunacy).

I will admit that for some programs, ladder is fantastic. Of course, ladder can be turned into horrid spaghetti if the programmer doesn't split up the logic properly.
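
For anyone who hasn't seen ladder, a rung is just a boolean equation re-evaluated every scan cycle. A toy Python rendering of the classic start/stop seal-in rung:

    # |--[ start ]--+--[/stop]--( motor )--|
    # |--[ motor ]--+
    # Series contacts AND, parallel branches OR, [/] is normally closed.
    def scan(start_pb, stop_pb, motor):
        return (start_pb or motor) and not stop_pb

    motor = False
    motor = scan(True,  False, motor)   # operator presses start -> True
    motor = scan(False, False, motor)   # seal-in keeps it running -> True
    motor = scan(False, True,  motor)   # stop button drops the rung -> False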


I think the whole flow concept is really only good for media pipelines and such.

In mathematics, everything exists at once, just like in real life.

In most programming languages, things happen in explicit discrete steps which makes things a lot easier, and most node based systems don't have that property.

I greatly prefer block-based programming where you're dragging rules and command blocks that work like traditional programming, but with higher-level functions, ease of use on mobile, and no need to memorize all the API call names just for a one-off task.


What would be useful is a data flow representation of the call stack of a piece of code. Generated from source, and then brought back from the GUI into source.


I have yet to find somebody who worked with LabView who does not hate it.

It is a total abomination.


I don't hate it. I feel it's pretty good for talking to hardware, (understanding) multi-threading, agent-oriented programming, message queues, etc.

It's also fairly good for making money: the oil and gas industry seems to like using it (note: n = 1, I only did one oil and gas project with it).


How does version control work with Labview?

Also, since you've done only one project with it, how hard was it to pick up and learn?


> How does version control work with Labview?

Labview does have diff and merge tools. It feels kind of clunky in practice, kind of like diffing/merging MS Office files. In my experience people think of versions of LabView code as immutable snapshots along a linear timeline and don't really expect to have merge commits. Code versions may as well be stored as separate folders with revision numbers. The mindset is more hardware-centric; e.g., when rewiring a physical data acquisition system, reverting a change just means doing the work over again differently. So LabView's deficiencies in version control don't stand out as much as they would in pure software development.

https://www.ni.com/docs/en-US/bundle/labview/page/comparing-...


I used Labview as part of a course in my degree (EE), so I already knew it.

If you know other languages I would say it's very easy to pick up. Probably the easiest out of any language out there. Instead of having to guess/learn the syntax, you just pick functionality from icons/lists and drag and drop.


As someone who used to use (and hate) LabVIEW, a lot of my hatred towards it was directed at the truly abysmal IDE. The actual language itself has a lot of neat features, especially for data visualization and highly parallel tasks.


Anyone who mentions visual scripting without mentioning the game industry just hasn't done enough research at all. It's actually a really elegant way to handle transforming data.

Look up Unreal blueprints, shader graphs, and procedural model generation in Blender or Houdini. Visual programming is already here and quite popular.


[post author] I am familiar with those and have used a couple. There are similar examples in music, where visual programming dominates.

The implied audience of this post (not clear) is people writing business applications, web dev, etc. The examples are picked to reflect what could be useful to those developers. In other words, all the examples you mentioned are great but they are not how a "software engineer in a software company" does their job.


> In other words, all the examples you mentioned are great but they are not how a "software engineer in a software company" does their job.

creating blueprints or max/msp programs is definitely software engineering; it requires you to think about correct abstractions, computations, data flow and storage, etc.

also, there's currently 398 Rust jobs worldwide advertised on linkedin, vs. 1473 for "unreal blueprints"


game developers are definitely software engineers in software companies.


My experience is that the software engineers at game companies generally hate the visual programming tools. They want to work with code. It's the game designers who (sometimes) like using visual tools.


Some are bad but Blueprints is great. Main issue is they don't always play nice with text based coding tools.


I spent about a year working with blueprints a while back and I found some things just really annoying, like making the execution line go backwards into a previous block. If you do it directly, it won't let you; if you use a single reroute node you get an ugly point, so you have to use two reroute nodes to get it to work properly and nicely. Also, they don't have all the nodes you need, so you end up having to write some new ones anyway.


And AI - which kind of changed the game in the recent years. A "blueprints copilot" akin to Github Copilot will be very difficult to create because there's no "blueprints text" to train an AI on. Nowadays in my hobby pet projects I find it easier to write C++ with copilot than Blueprints.


There's a JSON format of the blueprints that you can see when you copy/paste. It's just a bit more ambiguous than the usual binary format. It's not an impossible problem at all.


Not an impossible problem only in theory. It's currently practically impossible and will take at least a year to solve if anybody starts to work on this at all.

Since my current project does involve wrangling AI to do stuff - forcing it to output a consistent, complete, large JSON with an exact specific format is very difficult and takes a lot of time (you won't be able to draw Blueprints line by line to show to the user that AI is processing). Definitely no autocomplete-like experiences maybe ever.

For example, look at the text representation of these 6 (!) nodes:

https://blueprintue.com/blueprint/yl8hd3-8/

It's enormous.

And the second even bigger problem: On forums and basically everywhere all users share screenshots with descriptions. There's not enough training data for anything meaningful.

I tried to force copilot/gpt to output even a small sample of copy-pastable blueprint and it just can't.


Of course, but you know what they're saying.


Developer here that started doing Analysis work for complex systems that require some thorough Business Modelling. Take a look at

- BPMN https://nl.m.wikipedia.org/wiki/Business_Process_Model_and_N...

- MDA https://en.m.wikipedia.org/wiki/Model-driven_architecture

- MBSE https://en.m.wikipedia.org/wiki/Model-based_systems_engineer...

The tooling on these for Analysts I feel is not on par with tools for Developers. Often outdated and enterprise-only both in price and complexity of use.

- https://en.m.wikipedia.org/wiki/Model-driven_engineering#Too...

Any more insights welcome. At first I expected to find a complete "draw a model, generate API + source code with validation functions" ecosystem of interoperable tools. Apparently the market is not there yet.


I've been looking at using Godot for desktop apps, and seeing how the game world thinks about memory has given me a ton of ideas.


As someone who works for games, I think the biggest problem of node-based systems is... they're all different (in terms of UI/UX).

Unreal blueprints, Substance Designer, Houdini, Blender's geometry node, Unity shader nodes... they all look different and act differently. Different shortcuts and gestures. Different window/panel management.

Different programming languages have different syntax rules and libraries, of course. But at least they're all manipulated with one single interface, which is your editor. If you use vim bindings, you don't need to worry about "what pressing k does": it moves the cursor up in every language.

People who spent X hours customizing their vim/emacs will benefit from them no matter what language they use next. I spent a lot of time customizing my Houdini keybindings and scripts, and this effort will be thrown out the window if I later switch to Blender.


You know, this is actually really insightful. A standard graph format that all these tools could import/export to could lead to a lot more reusable tooling.

The incentives aren't quite there at the moment but maybe someone like Microsoft or Jetbrains takes a stab at it.


You think Microsoft is going to make a format that's cross-platform?

I guess that leaves JetBrains


You mean like for example the Language Server Protocol?


One could even go further and expand this to the players themselves, as there are certain games that might be viewed as visual programming tools. Factorio is a great example, as, conceptually speaking, there isn't much of a difference between a player optimising their resource flow in the game vs a developer managing the data flow in a State Machine.


I've been using ComfyUI recently to manage complex image diffusion workflows, and I had no idea it was inherited from much older shader editors and vfx. It's a shame we can end up using a tool for years without knowing anything about its predecessors.


One major difference I’ve seen in shader graph type tools is that they are stateless, or almost stateless. The output of the shader graph is a function of time and some fixed parameters, and there are rarely feedback loops. When there are feedback loops in shader graphs, they are represented explicitly via nodes like TouchDesigner’s feedback TOP.

This way of expressing computations lends itself well for shader programming where the concurrency of the GPU discourages manipulation of arbitrary state.

In contrast, business logic programmed for the CPU is generally more stateful and full of implicit feedback loops. IMO these types of computations are not expressed well using node based visual programming tools because the state manipulation is more complex.
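
A toy Python sketch of that distinction: pure nodes are plain functions of time and parameters, while feedback lives only in an explicit stateful node (loosely modeled on TouchDesigner's feedback TOP; all names are invented):

    import math

    # Stateless node: output fully determined by time and parameters.
    def oscillator(t, freq=1.0):
        return math.sin(2 * math.pi * freq * t)

    # Feedback node: the one place state is allowed to live.
    class Feedback:
        def __init__(self):
            self.previous = 0.0
        def __call__(self, value, decay=0.9):
            out = value + decay * self.previous  # blend with last frame
            self.previous = out
            return out

    trail = Feedback()
    for frame in range(5):
        print(trail(oscillator(frame / 60.0)))  # echo across frames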


I used blueprint's predecessor 'kismet' quite extensively. I absolutely hated it. Give me unrealscript any day. Blueprint is popular because that's all you have. They removed unrealscript. To do anything even slightly complex you have to use C++ now.


I wonder where the sweet spot between BP and C++ is. One of my friends is making a commercial indie game in UE, and he is doing everything in BP because he is an artist, so C++ is particularly daunting for him. He did complain about the spaghetti hell he eventually ran into, without any way to solve it, but from the number of wishlistings (targeting 10K), I'd say it is probably going to be a successful game, judging by first-indie-game standards.


TBH I think Blueprints gets used because it is forced upon the UE developers.


Blueprints gets used because the only alternative in UE, writing decade-old-paradigm C++ code with a two-decades-old macro DSL on top of it, is a lot worse.

Unity has had multiple visual programming packages and people don't really care. Writing 2017 era C# paradigm code with an API resembling 2004 Macromedia Flash is not nearly as bad.


> Unity has had multiple visual programming packages and people don't really care.

People cared enough for Unity to buy one and make it official but Unity doesn't care so it mostly just rots.


It's important to note that some successful Unity games were still made with visual scripting tools, e.g. Hollow Knight used Playmaker.


> One reason is because we think that other, more inexperienced, programmers might have an easier time with visual programming. If only code wasn't as scary! If only it was visual! Excel Formula is the most popular programming language by a few orders of magnitude and it can look like this:

> =INDEX(A1:A4,SMALL(IF(Active[A1:A4]=E$1,ROW(A1:A4)-1),ROW(1:1)),2)

Ahem. Excel is one of the most visual programming environments out there. Everything is laid out on giant 2D grids you can zoom in and out of. You can paint arrows that give you the whole dependency tree. You can select, copy, paste, and delete code with the mouse only. You can color things to help you categorize which cell does what. You can create user inputs, charts, and pivot grids with clicks.
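
Those dependency arrows are also cheap to reconstruct programmatically. A rough Python sketch that walks formulas to find every transitive input of a cell; the regex and the toy sheet are invented for illustration:

    import re

    sheet = {  # cell -> formula or constant
        "A1": "5", "A2": "7",
        "B1": "=A1*2",
        "C1": "=B1+A2",
    }

    def ancestors(cell, seen=None):
        # Recursively collect every cell this one depends on.
        seen = set() if seen is None else seen
        for ref in re.findall(r"[A-Z]+[0-9]+", sheet.get(cell, "")):
            if ref not in seen:
                seen.add(ref)
                ancestors(ref, seen)
        return seen

    print(sorted(ancestors("C1")))  # ['A1', 'A2', 'B1']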


As a programmer who had used Excel for years, seeing my accountant start typing a formula, change sheets, select some cells, go back, repeat, was a learning process. I didn't even know you could do that, and also, I hated it. But it worked very well for him.

I've more recently been exposed to a few spreadsheets that are used to calculate quotes in major insurance businesses when I was asked to create an online process instead, replicating the questions and formula.

They're things of horrifying eldritch beauty. I seem to always find at least one error, and no one I'm allowed to talk to ever really knows how they work since they're built up over years. Those dependency arrows are a life saver.


> I seem to always find at least one error

Every time I see a spreadsheet where the dependencies are hard to track, I've found enough errors that the results were completely bogus.

Also every time, nobody cared.


In my case at least, they probably accounted for it somewhere by adjusting rates elsewhere so it works out. So it's a bit risky to just change, similar to code that is known to be wrong but too hard to change since "everything works".


Oh sweet summer child. You will probably never experience running into the 10 million cell limit on a google sheet with more than a hundred sheets and waiting a quarter of an hour for your spreadsheet to update.


I have had to debug insane Excel sheets, which were used to generate C code, based on the geometric properties of an object.

Excel works very well for describing many, simple relationships. It totally falls apart the moment you have complex relationships, as they become mentally untraceable. Functions allow you to abstract away functionality; referencing cells does not.

I am pretty certain that Excel is one of the most misused tools and suffers the most from "I use it because I know it".


I had thought I had seen everything but kudos - Excel to generate C code is some sort of local extremum for software hacks.


Excel could do this so much better though (and I think Excel is the best candidate for a visual scripting overhaul). The cell could have two parts: the top part is the function signature (other cells could reference it by signature, or by cell number), the bottom part is the code. Each cell is a function.

People put huge unreadable basic functions in that tiny box. It's such an obvious pain point, I'm surprised it's never been addressed. Replace VBA with C#, have a visual line linking cells to other cell references, bam: million-dollar product.


A basic problem I have, looking at an Excel spreadsheet, is that I don't know which cells are calculated by a formula and which are constants.

Maybe it would be easier if the spreadsheet were divided into an upper part with only constant cells and a lower part with only calculated values; would that help me?


> A basic problem I have, looking at an Excel spreadsheet, is I don't know which cells are calculated by a formula, which are constants.

Use Ctrl-` (show formulas).


Thanks for the tip, I will use it next time I open an Excel spreadsheet.

I'm also thinking in terms of perhaps having a different visual style for cells with formulas, for when the spreadsheet is presented on paper etc.


Most people don't seem to realise that you can put line breaks and leading spaces for indentation into formulae. Also while by default the edit line at the top of the window only shows one line, you can drag the divider down to show multiple lines.


> You can paint arrows that give you the whole dependency tree.

Sorry, is that a manual process, or is there an option in Excel to show multi-ancestor dependencies?

I'm aware that you can double click to see a single cell's inputs, but I want to go deeper.



Still impossible to know what an Excel sheet does just by looking at it. The 2D grid obfuscates the relationships between data.

Power BI does (almost) everything Excel does but better.


You are right. You can see the data first, charts, even dependencies. And yet nobody is drawing `IF(ACTIVE[A1:A4]=E$1)`


I think people get too hung up on the visuals. There was a (failed) attempt to create something called intentional programming by Charles Simonyi. That happened in the middle of the model driven architecture craziness about 20 years ago.

In short, his idea was to build a language where higher-level primitives are created by doing transformations on lower-level syntax trees, all the way down to assembly code. The idea would be that you would define languages in terms of how they manipulate existing syntax trees. Kind of a neat concept. And well suited to visual programming as well.

Whether you build that syntax tree by typing code in an editor or by manipulating things in a visual tool is beside the point. It all boils down to syntax trees.

Of course that never happened and MDA also fizzled out along with all the UML meta programming stuff. Meta programming itself is of course an old idea (e.g. Lisp) and still lives on in things like Ruby and a few other things.

But more useful in modern times is how refactoring IDEs work: they build syntax trees of your code and then transform them, hopefully without making the code invalid. Like a compiler, an IDE needs an internal representation of your code as a syntax tree in order to do these things. You only get so far with regular expressions and trying to rename things. But lately, compiler builders are catching onto the notion that good tools and good compilers need to share some logic. That too is an old idea (Smalltalk and IBM's VisualAge). But it's being re-discovered in e.g. the Rust community, and of course Kotlin is trying to get better as well (being developed by JetBrains and all).
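
A tiny illustration of that refactoring mechanic using Python's own ast module: parse source into a tree, transform the tree, and print code back out. The rename is hard-coded for brevity:

    import ast

    source = "def area(w, h):\n    return w * h"

    class Rename(ast.NodeTransformer):
        # Rewrite every occurrence of the name 'w' to 'width'.
        def visit_Name(self, node):
            if node.id == "w":
                node.id = "width"
            return node
        def visit_arg(self, node):
            if node.arg == "w":
                node.arg = "width"
            return node

    tree = Rename().visit(ast.parse(source))
    print(ast.unparse(tree))  # prints the renamed function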

But beyond that, the idea seems a bit stuck. Too bad because I like the notion of programs being manipulated by programs. Which is what refactoring does. And which is what AI also needs to learn to do to become truly useful for programming.


Related to this is compilers printing control flow graphs on request, though usually you can't edit the graph and convert that back to the internal structure.


Most of this isn't visual "programming", just good explanatory diagrams. I think it gets to a core issue, which is a dichotomy between:

- trying to understand existing programs - for which visuals are wanted by most, but they usually need conscious input to be their best

- programming (creating new code) itself - where the efficiency of the keyboard (with its 1D input that goes straight to spaghetti code) has never been replaced by visual (mouse-based?) methods, other than for very simple (click and connect) models


You are right. The diagrams are used as explanations, not as the source of the program. But wouldn't it be neat if, when you sketch out the state transitions in a diagram (how I think about the state transitions), _that diagram_ were the source of truth for the program?

That is the implied point: let's go to places where we already draw diagrams and check if we can elevate them into the program
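
A minimal sketch of that idea in Python (the states and events are invented): the transition table below is exactly the data a state diagram would contain, and it is directly executable, so the "diagram" and the program cannot drift apart:

    # Edges of the state diagram: (state, event) -> next state
    TRANSITIONS = {
        ("idle",    "submit"):  "pending",
        ("pending", "approve"): "done",
        ("pending", "reject"):  "idle",
    }

    def step(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"no transition from {state!r} on {event!r}")

    state = "idle"
    for event in ["submit", "approve"]:
        state = step(state, event)
    print(state)  # done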


This can be really tricky to do. I reached the limit of my brain's working capacity designing a priority inheritance system, and sketched the state machine out in a dot file, visualized with graphviz - this worked really well for reasoning through the correctness of the algorithm and explaining it to others. I tried to structure the implementation code to match it and I was able to get pretty close; but the actual states were a bunch of bit-packing and duplicated control flow to get optimal assembly output for the hottest paths. Each one of those changes was easy to reason about as an isolated correct transformation of the original structure in code, but would have been a mess visually.
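
One way to ease that pain is to make the code's transition table the single source and generate the dot file from it, so the picture used to reason about correctness never goes stale. A rough sketch with invented state names:

    transitions = [  # (from_state, event, to_state)
        ("unlocked",  "lock",    "locked"),
        ("locked",    "acquire", "contended"),
        ("contended", "release", "unlocked"),
    ]

    def to_dot(edges):
        lines = ["digraph fsm {"]
        for src, event, dst in edges:
            lines.append(f'  "{src}" -> "{dst}" [label="{event}"];')
        lines.append("}")
        return "\n".join(lines)

    print(to_dot(transitions))  # render with: dot -Tpng fsm.dot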


That sounds super interesting!

Did I understand correctly that the additional complexity came because you needed to emit optimal assembly? Or was implementing the logic from the state machine complicated enough?


Designing the state machine was hard. The implementation of that state machine was not that bad, because I'd spent so much time thinking through the algorithm that I was able to implement it pretty quickly. The implementation difficulty was optimizing the uncontended case - I had to do things like duplicate code outside the main CAS loop to allow that to be inlined separately from the main body, structure functions so that the unlock path used the same or fewer stack bytes than the lock path, etc. Each of those code changes was straightforward, but if I had faithfully copied all those little tweaks into the state machine diagram, it would be so obfuscated that it'd hide any bugs in the actual core logic.

So I decided that the diagram was most useful for someone looking to understand the algorithm in the abstract, and only once they had been convinced of its correctness should they proceed to review the implementation code. The code was a terrible way to understand the algorithm, and the visualization was a terrible way to understand the implementation.


From what I've seen when code is generated from formal specs it ends up being inflexible. However, do you think it would be valuable to be able to verify an implementation based on a formal spec?


People do that, and find very tricky bugs. One person did it by line-by-line translating C code into TLA+, another by representing the state machine in Coq and checking for predicates validating it in the source. But I don't think a visual representation of the state machine would have diagnosed the bugs the formal checkers did.

https://probablydance.com/2020/10/31/using-tla-in-the-real-w...

https://probablydance.com/2022/09/17/finding-the-second-bug-...

https://archive.is/sEXqu


I just realized my previous comment left out what I was trying to say—my bad! I think what I was trying to ask was: would it be possible to generate a formal specification from a graphical representation, and then use that specification to verify the source?

Also thank you for those links! I'll definitely give them a read.


I'm far from an expert in formal verification, I probably should be doing more of it than I do. From my two links, the way I've seen formal verification work is to either translate the code line-by-line in an automated or manual way into a formal language, then check for possible orderings that violate properties you care about; or define a formal model of the state machine, and insert validation of all the transitions in the code.

If you were going to do formal verification from the graphical representation, it would be on the algorithm; namely does it always converge, does it ever deadlock, does it ever fail mutual exclusion. If the goal is for a computer to analyze it, it can be precisely as complex as the source code, so yes. But at that point it's not useful visually for a human to read.


Ah, I see: by the time you've written a formal specification, the visualization becomes redundant.


That last point is super interesting: these diagrams never tell you much about the implementation or how it would perform.


Virtually all diagrams are representations of declarations that are the source of truth. Visual editors just help you edit that, but rarely get close to validating the underlying structure.

For things like programming languages and markdown, users switch between the modes. For something like SVG, users rarely learn or solve problems at the declaration level.

The real questions come with declaration re-usability and comparison. Two PDFs can look exactly the same but be vastly different declarations, which makes comparison and component re-use essentially impossible.

It turns out, much of the benefit of visual editors is built on the underlying declaration model where it supports user edit/inspection goals.

So I think the point is not to have the visual be the source of truth, but to have more visualization and visual editors for the sources we have.

There are/were excellent visual editors for Java and Apple GUIs that supported round-tripping (mode-dependent source of truth). But we seem to have abandoned them, not just because they're old, but because of the extra scaffolding required to do the round-tripping.

So my take-away has been that any visualization must be source-first and source-mainly - with no or minimal extra metadata/scaffolding, as markdown is now. That would mean the implied point is that we should visualize (or otherwise abstract to understand) the source we have, instead of using diagrams as the source of truth.


You might be interested in:

https://schematix.com/video/depmap

I'm the founder. It's a tool for interacting with deployment diagrams like you mentioned in your article.

We have customers who also model state machines and generate code from the diagrams.


> Schematix provides diagrams as a dynamic resource using its API. They aren't images you export, they don't end up in My Documents. This isn't Corel Draw. In Schematix, you specify part of your model using a graph expression, and the system automatically generates a diagram of the objects and relations that match. As your Schematix model changes, the results of the graph expression may change, and thus the visual diagram will also change. But the system doesn't need you to point and click for it. Once you've told it what you want, you're done.

What an interesting tool! It's rare to see robust data models, flexible UX abstractions for dev + ops, lightweight process notations, programmatic inventory, live API dashboards and a multi-browser web client in one product.

Do you have commercial competitors? If not, it might be worth doing a blog post and/or Show HN on OSS tooling (e.g. Netbox inventory, netflow analysis of service dependencies) which offers a subset of Schematix, to help potential customers understand what you've accomplished.

Operational risk management consultants in the finance sector could benefit from Schematix, https://www.mckinsey.com/capabilities/risk-and-resilience/ou.... Lots of complexity and data for neutral visualization tooling.


Schematix is somewhat unique. Direct competitors? -- not exactly, but IT asset managers, DCIM, BC/DR tools, and CMDBs are all competitors to some degree.

Some of our best users are professional consultants who use us for a project which often introduces us to a new customer.

A Show HN would certainly be in order. Thanks for the thoughts!


Do your blog posts have individual URLs? I would like to share a specific post, rather than the cumulative log.


Yes, in order to be visual coding (or better yet, specification) it needs to be executable in its native form, or maybe via a very direct translation.

The concept of an executable specification first came to my attention in IEC 61499, the standard for distributed automation. First published in 2005, it was way, way ahead of its time; so far ahead that it is still gaining traction today.

Shout out to anyone reading who was involved in the creation of IEC 61499 in 2005: it was a stroke of genius, and for its time, orders of magnitude more so. It is also worth a look just to prompt thinking for anyone involved in distributed systems of any kind.

Initially I thought there was no way you could have such a thing as an executable specification, but then, over many years I evolved to a place where I could generically create an arbitrary executable specification for state based behavior (see my other post this topic).

I believe I have found the best achievable practice to allow defining behaviors for mission/safety critical functionality, while avoiding implicit state.


I forget its name, but there was an IBM graphical tool with which you created UML diagrams, and it in turn generated code (Java, IIRC).

The intermediate representation was in sexps!


Replying to my own comment.

The tool was called "Rational Rose"!


Programming “via” Visualization — doesn’t scale. Great for demos. Good in limited places.

Visualizations “of” a Program — quite useful. Note there are lots of different ways to visualize the same program to emphasise / omit different details. The map is not the territory, all models are wrong, etc.


It works and even scales up in some cases.

For example, having models of a capacitor and a resistor, you can put them together in a schematic. That schematic in turn can be part of a bigger design. Then you test it in a simulator. That's how Simplorer works. Alternatively you can write the code in VHDL or Modelica. But visual is quicker, easier, and more reliable.

Obviously it works well for UI, was used for decades now.

As for the rest... there are visual programming environments for robots, mostly for kids.


Schematics don't scale well at all - net labels and multiple sheets demonstrate this.

HDLs rule for gate and transistor level circuit design. I don't know what major PCB houses do but I'd be horrified to discover that 16-layer boards still have a visually built schematic producing their netlist: just finding the right pad on 256BGA components would be awful, let alone finding what else is connected to that net.


> Schematics don't scale well at all

Schematics aren't supposed to scale. They're a lossy representation of a subcircuit without caring about the intricate details like footprints or electrical/electro-mechanic constraints.

PCB designers largely don't use HDLs because they don't really solve their problems. Splitting a BGA component into subcircuits that have easily legible schematics is not hard, but it's also not what they care about: that part, making sure the blocks are all connected correctly, is the easy bit.

Verifying the electrical constraints of the 256 pad component is much harder and not represented in the schematic at all. They need to see the traces and footprint exactly.

As an example, the schematic doesn't tell you if a naive designer put the silkscreen label and orientation marker underneath the component which will cause manufacturing defects like tombstoning in jellybean parts.


Why do you think it doesn't scale?


It gets very messy with scale and keeping it not messy as it's scaling up is hard.


I think the difficulty here is addressing: who is your target audience? Depending on that answer, you have different existing, relatively successful visual programming languages. For example, game designers have managed to use Unreal's Blueprints to great effect. Hobbyists use ComfyUI's node language to wire up generative AI components. As far as generic computing goes, Scratch has managed to teach a lot of programming principles to people looking to learn. The problem comes in when you try to target a generic systems programmer: the target is too abstract to be able to create an effective visual language. In this article, they try to solve this issue by choosing specific subproblems for which a visual representation is helpful: codebase visualization, computer network topology, memory layouts, etc., but none of them are programming languages.


[post author] I agree. On many domains you can find a great mapping between some visual representation and how the developer (beginner or not) wants to think about the problem.

I personally don't see any one pictorial representation that maps to a general programming language. But if someone does find one, in the large and in the small, that'd be great!


Not even all textual languages map well to every problem space.

Blueprints are a good callout (and Kismet before them). Many PLCs[1] are heavily visual-language based, with Ladder, FBD or other variants. I wouldn't be surprised if they were the most widely used application of visual programming languages.

[1] https://en.wikipedia.org/wiki/Programmable_logic_controller


> I personally don't see any one pictorial representation that maps to a general programming language.

I agree. What I've had in mind for a while now is very different from this.

What I envision is "text" in the sense that it's not a diagram, but more advanced textual representation. Over hundreds of years mathematicians have evolved a concise, unambiguous, symbolic notation for formulae, yet programmers are still using tools that are backward compatible with dot matrix terminals from the 60's: simple characters used to write lines in files.

Blocks, conditions, iteration, exits (return, exception, etc.,) pipelines, assignment, type and other common concepts could be represented symbolically. The interface would still be text-like, but the textual representation would be similar to mathematical notation, where the basic constructs of code are depicted as common, well understood, dynamically drawn symbols that programmers deeply inculcate.

Key properties include extreme concision and clarity of the "instruction pointer." Concision is crucial to reduce the cognitive cost of large amounts of logic. The latter is a thing that is entirely obscured in most visual programming schemes and also absent from conventional mathematical notation: the location of the current instruction is absolutely crucial to understanding logic.

I wish I had more time to elaborate what I have in mind, much less actually work on it.


Scratch is the only type of visual programming I've enjoyed using. It's easy to read if you're an experienced programmer because it has the same structure as regular code, and it's easy to read for beginners because everything is broken into large blocks that have what they do written right on them. The way code is structured in most programming languages is actually very logical and intuitive, and it's the most successful system we have so far. The problem for beginners is that they can't figure out if they enjoy programming until they've learned the syntax, which can be very discouraging for some people. I've seen Scratch bridge that gap for people a couple of times, and I think it's probably the best model when it comes to teaching people to code.

I think other types of models would only be useful for situations where writing code isn't the most intuitive way to make something. From my limited experience, a visual system for making shaders is a pretty good idea, because ideally, you don't want to have many conditional branches or loops, but you might have a lot of expressions that would look ugly in regular code.


I'm going to throw a vote in here for Grasshopper, the visual programming language in Rhino3d as doing it the right way. It is WIDELY used in architectural education and practice alike.

Unfortunately, most visuals you'll get of the populated canvas online are crap. And for those of us who make extremely clean readable programs it's kind of a superpower and we tend to be careful with how widely we spread them. But once you see a good one you get the value immediately.

Here's a good simple program I made, as a sample. [0]

Also, I want to give a shout-out to the Future of Coding community in this. The Whole Code Catalog [1] and Ivan Reese's Visual Programming Codex [2] are great resources in the area.

I also have to mention, despite the awful name, Flowgorithm is an EXCELLENT tool for teaching the fundamentals of procedural thinking. [3] One neat thing is you can switch between the flow chart view and the script code view in something like 35 different languages natively (or make your own plugin to convert it to your language of choice!)

p.s. If you are used to regular coding, Grasshopper will drive you absolutely freaking bonkers at first, but once you square with the fact that it is looping, just that you have to let the whole program complete before seeing the result, you'll get used to it.

[0] https://global.discourse-cdn.com/mcneel/uploads/default/orig...

[1] https://futureofcoding.org/catalog/

[2] https://github.com/ivanreese/visual-programming-codex

[3] http://flowgorithm.org/


Agreed, Rhino/Grasshopper is an amazing tool, especially once you start adding in C# components. I’ve been using it off and on for several years on custom consumer product projects. It’s an underutilized workflow in many fields requiring 3D modeling, imo. I just finished a custom VR gasket generator for the Quest 3 that uses face scans from iPhone as the input, and the project wouldn’t have been possible without Grasshopper: https://youtu.be/kLMEWerJu0U


That's rad - thanks for sharing! I'll try to watch the whole thing when I'm not on deadline.

My jewelry work [0] is almost all in Grasshopper, as I've built up such a workflow there over the past... 8 years? that I don't need custom tools for most of it.

But my research work is all about building custom tools in C#. In fact I just finally published my component library yesterday [1]. Frankly I should have released it years ago, but I finally just bit the bullet.

[0] https://Xover0.com [1] https://www.food4rhino.com/en/app/horta


Vaguely related: Rhino 3D has the best interface of any 3D modeling tool I've ever used, and I'm sad it is not the norm. Its integration between command line and UI is absolutely amazing.

I remember when I first tried SketchUp, I was horrified at how atrocious the UI is compared to Rhino 3D.


Yeah, not quite "visual programming," but there is a similar argument to be made about a program's user interface and how its design suggests it should be used. At this point, that's probably a far better explored area than the same aspect of visual programming.

That said - Rhino is one of the exemplars in this area. I always tell my students - if you don't know what to do, just start typing. As you say the relationship of the graphical command processes and the CLI is stellar.

But - one big shout back to Grasshopper that NOTHING ELSE compares to - if you hold "Ctrl-Alt" and click-hold on a component on the canvas, it opens up the library tab where that component can be found and puts a big arrow pointing to a big circle around it. It's one of the most shockingly useful commands in any program, ever. I've had rooms of students audibly gasp when shown that.


People have mentioned a bunch of successful visual programming applications, but one that I've been thinking a lot about lately is Figma.

Figma has managed to bridge the gap between designers, UXR, and engineers in ways that I've never seen done before. I know teams that are incredibly passionate about Figma and use it for as much as they can (which is clearly a reflection of Figma themselves being passionate about delivering a great product) but what impressed me was how much they focus on removing friction from the process of shipping a working application starting from a UI mockup.

I think Figma holds a lot of lessons for anyone serious about both visual programming and cross-functional collaboration in organizations.


I simply have to recommend Glamorous Toolkit to anyone interested in visual programming: https://gtoolkit.com

It focuses on the kind of visual programming the article argues for: Class layout, code architecture, semantics. It's one of the best implementations I have seen. The authors are proponents of "moldable development", which actively encourages building tools and visualizations like the ones in the article.


The "issue" with it is that it is tied to a Smalltalk, so it's hard to imagine it being more or less practical.


It's not tied to Smalltalk, at least not completely: the standard distribution comes with a JS and Java parser, and you can use those to create a Smalltalk model of their ASTs, making it look like they're just Smalltalk objects too.


No one ever bothered to open up this site on mobile.

My personal pet peeve with websites of desktop-focused applications.


The issue with every one I’ve used is that it hides all the parameters away in context-aware dialog boxes. Someone can’t come along and search for something; they need to click every element to view the dialog for that element to hunt for what they are looking for. I found every time the lead dev on a project changed, it was easier to re-write the whole thing than to try to figure out what the previous dev did. There was no such thing as a quick change for anyone other than the person who wrote it, and wrote it recently. Don’t touch the code for a year and it might as well get another re-write.


Yes, definitely this. I have worked for a couple years on webMethods, where programs can ONLY be created by "drawing/composing" sort of flowcharts (see https://stackoverflow.com/q/24126185/54504 ) and the main problem was always trying to search for stuff inside the "Codebase". And... another benefit of purely text-based code is that you can always run a diff-like utility and quickly zoom in on what has been changed.


This is definitely true for visual systems. That said, I've also found it to be true for text-based codebases.


This article seems focused on "how do we help programmers via visual programming", and it presents that case very well, in the form of various important and useful ways to use visual presentation to help understand code.

There's a different problem, of helping non-programmers glue things together without writing code. I've seen many of those systems fail, too, for different reasons.

Some of them fail because they try to do too much: they make every possible operation representable visually, and the result makes even non-programmers think that writing code would be easier. The system shown in the first diagram in the article is a great example of that.

Conversely, some of them fail because they try to do too little: they're not capable enough to do most of the things people want them to do, and they're not extensible, so once you hit a wall you can go no further. For instance, the original Lego Mindstorms graphical environment had very limited capabilities and no way to extend it; it was designed for kids who wanted to build and do extremely rudimentary programming, and if you wanted to do anything even mildly complex in programming, you ended up doing more work to work around its limitations.

I would propose that there are a few key properties desirable for visual programming mechanisms, as well as other kinds of very-high-level programming mechanisms, such as DSLs:

1) Present a simplified view of the world that focuses on common needs rather than every possible need. Not every program has to be writable using purely the visual/high-level mechanism; see (3).

2) Be translatable to some underlying programming model, but not necessarily universally translatable back (because of (1)).

3) Provide extension mechanisms where you can create a "block" or equivalent from some lines of code in the underlying model and still glue it into the visual model. The combination of (2) and (3) creates a smooth on-ramp for users to go from using the simplified model to creating and extending the model, or working in the underlying system directly.

One example of a high-level model that fits this: the shell command-line and shell scripts. It's generally higher-level than writing the underlying code that implements the individual commands, it's not intended to be universal, and you can always create new blocks for use in it. That's a model that has been wildly successful.
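
To make (2) and (3) concrete, here is a minimal sketch (hypothetical names; Python standing in for both the glue layer and the underlying model) where a "block" is just a registered function, so extending the simplified model takes a few lines of ordinary code:

  # Minimal sketch: "blocks" are plain functions registered with the
  # glue layer, so extending the simplified model is a few lines of code.
  BLOCKS = {}

  def block(name):
      def register(fn):
          BLOCKS[name] = fn
          return fn
      return register

  @block("grep")
  def grep(lines, pattern):
      return [l for l in lines if pattern in l]

  @block("count")
  def count(lines):
      return len(lines)

  def run_pipeline(data, steps):
      # The simplified model: a list of (block_name, args) steps.
      for name, args in steps:
          data = BLOCKS[name](data, *args)
      return data

  print(run_pipeline(["foo", "foobar", "baz"],
                     [("grep", ["foo"]), ("count", [])]))  # 2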


Shameless plug, but this is what we’re trying to do at Magic Loops[0].

We joke it’s the all-code no-code platform.

Users build simple automations (think scrapers, notifications, API endpoints) using natural language.

We break their requests into smaller tasks that are then mapped to either existing code (“Blocks”) or new code (written by AI).

Each Block then acts as a UNIX-like program, where it only concerns itself with the input/output of its operation.

We’ve found that even non-programmers can build useful automations (often ChatGPT-based like baby name recommenders), and programmers love the speed of getting something up quickly.

[0] https://magicloops.dev


Mindstorms is an example of what did not work. I want to provide an example of what does: the BBC micro:bit. It has a visual programming interface that is translatable to Python or JavaScript.
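
For example, a simple blocks program round-trips to roughly the following (assuming the standard MicroPython microbit module that the device ships with):

  # Roughly what a simple block program translates to in MicroPython
  # (runs on a BBC micro:bit with the standard `microbit` module).
  from microbit import button_a, display

  while True:
      if button_a.was_pressed():   # the "on button A pressed" block
          display.scroll("hello")  # the "show string" block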


Most times in my career that I've seen people talking about visual programming, it's not about the developers - it's about lowering the bar so that (cheaper) non-developers can participate.

A Business Analyst may or may not have a coding background, but their specifications can be quite technical and logical and hopefully they understand the details. The assumption is that if we create our own Sufficiently Advanced Online Rule Engine they can just set it all up without involving the more expensive programmers.

This is discussed a bit in the first paragraph, but I just wanted to reiterate that most systems I had to deal with like this were talked about in terms of supplying business logic, rules, and control flow configuration to a pre-existing system or harness that executes that configuration. The "real" programmers work on that system, adding features and code blocks for anything outside the specification, while the other staff set up the business logic.

It works to some degree. I think things like Zapier can be quite good for this crowd, and a lot of mailing list providers have visual workflow tools that let non-programmers do a lot. A DSL like Excel formulas would be in this group too, since it operates inside an existing application, except that it's non-visual. Some document publishing tools like Exstream (I worked with it pre-HP, so years ago) did a lot in this space too.

I did read and appreciate the whole article, I just noticed this part for a reason - I'm working on a visual question builder again right now for a client who wants to edit their own customer application form on their custom coded website, instead of involving costly programmers. It always ended poorly in the past at my previous company, but maybe it'll be different this time.


>it's about lowering the bar so that (cheaper) non-developers can participate.

I think that is a terrible approach to anything. Programming isn't that hard and without a doubt anyone who can do business analysis is mentally capable of writing Python or whatever other scripting language.

Instead of teaching people something universal, which they can use everywhere and which they can expand their knowledge of as needed, you are teaching them a deeply flawed process, which is highly specific, highly limited and something which the developer would never use themselves.

Having a business analyst who is able to implement tasks in a standard programming language is immensely more valuable than someone who knows some graphic DSL you developed for your business. Both the interest of the learner and the corporation are in teaching real programming skills.

Even the approach of creating something so "non-programmers" can do programming is completely condescending, and if I were in that position I would refuse to really engage on that basis alone.


> you are teaching them a deeply flawed process, which is highly specific, highly limited and something which the developer would never use themselves.

That kind of lock-in can be a feature from the employer's perspective. I did actual coding for years in an environment where what I learned was not very widely applicable at all, for similar reasons. I'm now happily in recovery :) But it makes it harder to leave when you feel like you lag behind where you should be in your career.

I don't think tools like Zapier are condescending. I can and have written code to connect APIs, but Zapier made some stuff way easier, and it lets people like my wife get the same stuff done with far less effort. She has no interest in learning programming. There will be stuff the tool can't do, so then the programmers can step in.

And in my prior job, many people became BAs from a coding background specifically to get out of writing code. They can do it - they don't want to. They're happier in MS Office or similar tools.


>That kind of lock-in can be a feature from the employer's perspective

And it can be a huge problem, as the employer has to maintain a complex visual DSL and teach it to every new employee. Locking employees in seems like a very easy way to make people miserable and unproductive.

An employer wants employees who are productive long-term; giving them good tools and the ability to learn new things allows them not to hate their jobs. And an employee who knows basic programming is always an asset.

>And in my prior job, many people became BAs from a coding background specifically to get out of writing code. They can do it - they don't want to. They're happier in MS Office or similar tools.

I completely understand that. But there are definitely problems that need to be solved with programming, and having people with the ability to do so can only be an asset, even if they aren't full-time developers.

In general I think it is a pretty hard sell to teach someone a skill with no other applications. This is different if that person only wants to achieve a certain thing; then transferability is irrelevant. But if you want someone to learn something new, they need to understand why they should learn it. Programming isn't particularly hard, and teaching someone a standard programming language, giving them the ability to use it in their jobs instead of a specialized DSL, is an enormous benefit.

If you came to me and told me you are going to teach me something which is totally different from what you yourself would do and a special way by which you have made something easy so that I can understand it, I would refuse. I guess that I might be projecting here, but I genuinely feel that many people would look at it the same way.


> it's about lowering the bar

I think that might be right.

I remember the first time playing with "visual" programming (kind of). It was visual basic, probably the first version.

It lowered the bar for me.

I quickly learned how to create a UI element, and connect things. A button could be connected to an action.

So then I was confronted with event-driven programming, and that exposure was basically what was taught to me.

And then the beauty of creating a UI slowed as I exhausted the abstraction of visual basic and ended up with a lot of tedious logic.

I had a similar experience with Xcode on macOS. I could quickly create an app, but then the user interface work bogged down again. It seemed to me like the elegance of a Mac user interface required what seemed like a lot of tax forms to fill out to actually get from a visual app to a working app. I really wanted to ask the UI: what dummy stuff, like the app name, hasn't been filled out yet? What buttons aren't connected? How do I do the non-visual stuff visually, like dragging and dropping some connection on a routine? Ugh.

In the end there's a beauty to plain source code, because text is the main and only abstraction. It's not mixed in with a lot of config stuff that only Xcode can edit and that will probably break when Xcode is upgraded.


This actually works if it's not a generic visual programming solution but a DSL. Don't give the business people pretty graphical loops; give them more abstract building blocks.

Unfortunately that means paying the professional programmers to build the DSL, so it doesn't reduce costs in the beginning.
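
As a sketch of what that might look like underneath (hypothetical rule vocabulary; in practice the analysts would see boxes or forms, not this Python):

  # Sketch: developers build the vocabulary, analysts only combine rules.
  def apply_rules(order, rules):
      for condition, action in rules:
          if condition(order):
              action(order)
      return order

  # Vocabulary written by developers (hypothetical names):
  big_order = lambda o: o["total"] > 100
  def give_discount(pct):
      return lambda o: o.update({"discount": pct})

  # What the analyst composes: abstract building blocks, not loops.
  RULES = [(big_order, give_discount(0.10))]

  print(apply_rules({"total": 150}, RULES))
  # {'total': 150, 'discount': 0.1}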


I think part of the problem is that coding projects can get really big - like millions of lines of code big. Not making a huge mess of things at that scale is always going to be difficult, but the approach of text-based files with version control, where everyone can bring their favorite editors and tools, seems to work better than everything else we've tried so far.

Also, code being text means you can run other code on your own code to check, lint, refactor etc.

Visual programming - that almost always locks you into a particular visual editor - is unlikely to work at that scale, even with a really well thought out editor. Visual tools are great for visual tasks (such as image editing) or for things like making ER diagrams of your database schema, but I think that the visual approach is inherently limited when it comes to coding functionality. Even for making GUIs, there are tradeoffs involved.

I can see applications for helping non-programmers to put together comparatively simple systems, like the excel example mentioned. I don't think it will replace my day job any time soon.


It seems odd to me not to mention things like MaxMSP or PD in an article like this. Arguably Max is one of the most successful standalone visual programming languages (standalone in so far as it’s not attached to a game engine or similar - it exists only for its own existence).


Those two are both primarily for real-time signals and music, right? That is a great domain for wires, transforms, and pipelines.

Have you ever seen them used in a different context?


GNU Radio Companion is its RF/software-defined radio counterpart: https://wiki.gnuradio.org/index.php?title=Your_First_Flowgra...

Sometimes the flowgraph is too complex to be constructed using the visual editor though, for example gnss-sdr uses C++ to generate the GNU Radio flowgraph: https://gnss-sdr.org/docs/control-plane/
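
For reference, the textual form of a trivial flowgraph (the classic dial-tone example, assuming the standard GNU Radio Python API) is only a handful of lines:

  # The classic GNU Radio "dial tone" flowgraph built in code rather
  # than in Companion (requires the gnuradio Python package).
  from gnuradio import gr, analog, audio

  tb = gr.top_block()
  src0 = analog.sig_source_f(48000, analog.GR_SIN_WAVE, 350, 0.1)
  src1 = analog.sig_source_f(48000, analog.GR_SIN_WAVE, 440, 0.1)
  snk = audio.sink(48000)
  tb.connect(src0, (snk, 0))  # the same blocks and wires, as text
  tb.connect(src1, (snk, 1))
  tb.run()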


No, they are very tailored to that use case. They aren't general languages, but they are still probably the best examples of successful visual programming languages.


Sequence diagrams (which seem not much different from swimlane diagrams) are great, so much so that I created a tool that generates them from appropriately built TLA+ specs representing message exchange scenarios: https://github.com/eras/tlsd

However, while they are good for representing scenarios, they are not that good for specifying functionality. You can easily represent the one golden path in the system, but if you need to start representing errors or diverging paths, you probably end up needing multiple diagrams, and if you need multiple diagrams, then how do you know if you have enough diagrams to fully specify the functionality?

> The protocol is complicated enough for me to think that the diagrams are the source of truth of the protocol. In other words, I'd venture to say that if an implementation of the Double Rachet algorithm ever does something that doesn't match the diagrams, it is more likely it is the code that is wrong than vice-versa.

I would believe the latter statement, but I wouldn't say the first statement is the same thing in other words, so I don't believe this is the correct conclusion.

My conclusion would be that diagrams are a great way to visualize the truth of the protocol, but they are not a good way to be the source of truth: they should be generated from a more versatile (and formal) source of truth.


Statechart diagrams are even better than sequence diagrams, because they can encode quite a lot of behaviour visually.


State diagrams are basically visual code, aren't they?

And indeed they are good for specifying, for being the source of truth, but like code, they (afaik) don't really work for representing interactions with multiple actors (other than by sending/receiving messages), and they don't have a time component. But you could generate sequence diagrams from them, or at least verify them.

Xstate does have some functionality for interacting with the specified state machine, but I haven't played with it a lot. The idea of generating—or at least verifying—Xstate state machines with TLA+ has come across my mind, though.


Statecharts are highly useful to represent behaviour. Sequence diagrams do not capture it as much.

Timed behaviour, like timeouts, can be represented in statecharts by having transitions on time-based conditions. For example, an event puts the system in a 'waiting' state, and in the waiting state there is a 30-second transition to a 'fail' state unless some other event happens which pulls the system out of the 'waiting' state.
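
A rough sketch of that timeout pattern in plain code (hypothetical state and event names), mostly to show how compact the transition table stays:

  # Sketch of the waiting/timeout statechart as a transition table;
  # the time-based transition becomes an ordinary "timeout" event.
  import time

  TRANSITIONS = {
      ("idle",    "request_sent"): "waiting",
      ("waiting", "response_ok"):  "done",
      ("waiting", "timeout"):      "fail",
  }

  def step(state, event):
      return TRANSITIONS.get((state, event), state)  # ignore invalid events

  state = step("idle", "request_sent")       # -> "waiting"
  deadline = time.monotonic() + 30           # the 30-second transition

  # ...later, in the event loop:
  if state == "waiting" and time.monotonic() > deadline:
      state = step(state, "timeout")         # -> "fail"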

Also it provides a good indication of what behaviour is valid and what not valid and what are don't cares.

External interactions of a system can be modeled as the state changes of the system.

They also have 'memory', special elements that remember which substate the system was in before it jumped out of the state last time.

I recommend David Harel's very interesting paper on modelling behaviour with statecharts.


State diagram and sequence diagram complement each other, rather than competing.


I would love something to better visualize the software I'm working with. However...

1: It would need to have dynamic abstraction. Sometimes I need to see very top-level flow and organization, sometimes I need to dig down a bit. I'll essentially never need the lowest level like IF statements as the OP mentions, but I'll definitely need multiple different levels.

2: It would need to have some awareness of the intention of the software, its history, and it would need answers to some questions. It would need to be as competent as a developer experienced with writing the software to really give me any useful insights. It can't get confused because there are some macros sprinkled in, 3-4 different languages interacting, or parts of the software only used when certain environment variables are set. It needs to handle basically all of the edge cases. If it gets "confused", it'll end up taking more time than it saves.

#1 is doable, but hard. #2 is basically magic. (For now)

You can claim machine learning can do it, but there's nothing close to that sophisticated and reliable in existence.


Scratch seems to be reasonably successful in teaching kids to code [1].

But a large visual blocks program is as incomprehensible, if not more, than a pure textual representation.

Whether text or visual, the challenge for IDEs is the ability to hide detail of large codebases in a way that still preserves the essential logic and allows modifying it. Folding/unfolding code blocks is the primary available tool, but it's only a primitive way to reduce visual clutter, not a new visual abstraction that can stand on its own.

[1] https://scratch.mit.edu/projects/editor/?tutorial=all


I think Scratch with a little more structure and lots of keyboard shortcuts would work for a "real" language.

It's really just replacing indentation with blocks of color.



Virtually everything safety-critical (cars, planes, biomedical...) uses Simulink, which is not shown or mentioned by this post, and it works fine for very large apps.


The kinds of visualisation discussed by the article remind me very strongly of Glamorous Toolkit [0], most recently posted to HN at [1]. It’s something I’ve never really felt a need for, mostly because the software I work on is very small in terms of code size (for physics research etc.). The idea is certainly fascinating, however… there are a lot of possibilities for a codebase which can introspect itself.

[0] https://gtoolkit.com/

[1] https://news.ycombinator.com/item?id=33267518


Twenty years ago I was a researcher (Fraunhofer) on executable UML, especially on aspect-oriented programming (AOP, which was a thing back then but never caught on). You could draw a boundary around some UML process flow and attach an aspect to it, for example a security boundary, and then the generated code would automatically add a security-check aspect for all flows going inside.
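
In today's terms, the woven-in check was roughly what a decorator does; here is a sketch of the idea (hypothetical names, not the code we actually generated):

  # Sketch of a "security boundary" aspect: every flow crossing the
  # boundary gets the check woven in (hypothetical names).
  import functools

  def security_boundary(check):
      def weave(fn):
          @functools.wraps(fn)
          def wrapper(user, *args, **kwargs):
              if not check(user):
                  raise PermissionError(f"{user} may not call {fn.__name__}")
              return fn(user, *args, **kwargs)
          return wrapper
      return weave

  @security_boundary(lambda user: user == "admin")
  def transfer_funds(user, amount):
      return f"{user} transferred {amount}"

  print(transfer_funds("admin", 100))  # ok
  # transfer_funds("guest", 100)       # raises PermissionError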

What we found out is that text is just better to read and understand. It's easier to refactor and much denser. We experimented with different levels of zooming in and out for bigger programs, but visual programming does not scale (or didn't, at least back then).


That was the premise of UML and the dream Rational was trying to sell with Rational Rose – that in the future, there would be no conventional programming languages, no software engineers, only architects, philosophers and visionaries wearing suits and ties, daydreaming and smoking pipes, who would be imbued with senses of self-importance and self-aggrandisement using Rational Rose and its visual language (UML) for system design and actually for every.single.thing., and Rational Rose would automatically generate the implementation (in an invisible intermediate conventional programming language as a byproduct). The idea was to obliterate the whole notion of programming as we know it today.

So the implementation in the intermediate programming language (C++) was not even meant to be readable by humans, by design. Rational Rose (the app), however, was too fat, too slow and (most importantly) buggy AF, to the point that the implementation it spat out never actually worked. And UML did not meet the level of enthusiastic support Booch and Co wholeheartedly hoped for.

Whatever the reason was for Grady Booch's personal crusade against programming and the attempt to replace it with visual programming, it has failed, and done so miserably. Today, the only living remnant and legacy is UML sequence diagrams, and even class diagrams are no longer seen in the wild.


You seem to have come to it from the wrong direction. The entire idea behind the Rational Process was that in the future, your architects would have to come down from their conference rooms every 2 months or so and talk to the *gasp* developers.

IBM had quite a hard time selling this idea, so they decided to have their marketing push it to people outside of their target customers. That may be how it got to you.


The fundamental problem in visual programming is that it limits you to geometry (practically, to 2D Euclidean space). Most non-trivial programming problems are spaghetti by nature in such spaces.


That is not a problem, and for sure not a fundamental one. The textual representation is very limited; it's actually 1D, with line breaks helping us read it. 2D gives a lot more possibilities for organising code, similar to how we draw diagrams on a whiteboard.


The problem with visual programming is that it abandons the fundamental principle of language, whereby to connect two objects it is necessary only to speak their names, in favor of the principle of physicality, whereby to connect two objects it is necessary that they be in physical contact, i.e. joined by a wire.


> only to speak their names

> in physical contact, ie. to be joined by a wire.

I don't really see how that is different, in any given language the name alone is not enough to refer to the object, in general case you have to import it. For me the process of name resolution and connecting by a wire is the same thing with different representations.


> to connect two objects it is necessary that they be in physical contact

I can imagine a way to connect an object to another by selecting the latter's name from a drop-down menu of defined objects. A visual equivalent of a function call.


Is that an inherent problem of the medium or the result of people trying too hard to completely change the paradigm?


The power of text and other symbols is that they aren't spatially bounded. That's why it works even in "1D".

There are probably some possible usability gains from adding dimensions. E.g. node-based "programming" in Blender is quite nice. But for general-purpose programming it's hard to see how we'd get rid of symbolic representation.


Specifically, textual programs use symbols to build a graph of references between computations, whereas the average visual language tries to use explicit lines between blocks. But the reference graphs of non-trivial programs are often decidedly non-planar, which only becomes a problem when you try to lay them out on a plane.


Why does laying out code on a line not cause a problem with spatial reasoning but a plane would? Are we somehow incapable of applying spatial abstractions when we move up into a higher dimension than 1?


The spatial reasoning when reading code does not happen in the dimensions of the literal text, at least not only in those. It happens in how we interpret the code and build relations in our minds while doing so. So I think the problem is not about the spatial reasoning of what we literally see per se, but whether the specific representation helps with something. I like visual representations for the explanatory value they can offer, but if one tries to work rigorously on a kind of spatial algebra of these, that explanatory power can be lost after some point of complexity. I guess there may be contexts where a visual language works well, but in the contexts I have encountered, I have not found them helpful. If anything, the more complex a problem is, the more cluttered the visual form ends up being, and it feels like it overloads my visual memory. I do not think it is a geometric feature or advantage per se, but about how the brains of some people work. I like visual representations and I am in general a quite visual thinker, but I do not want to see all these minuscule details there; I want them to represent what I want to understand. Text, on the other hand, serves better as a form of (human-related) compression of information, imo, which makes it better for working on those details.


> If anything, the more complex a problem is, the more cluttered the visual form ends up being, and it feels like it overloads my visual memory

I feel like you are more concerned about implementation than the idea itself. For me it's the opposite: I find it easy to understand small pieces of text, but making sense of hundreds of 1k-line files is super hard.

Visual programming in my understanding should allow us to "zoom" in and out on any level and have a digestible overview of the system.

Here is an example of visual-first platform that I know is used for large industrial systems, and it allows viewing different flows separately and zooming into details of any specific piece of logic, I think it's a good example of how visual programming can be: https://youtu.be/CTZeKQ1ypPI?si=DX3bQSiDLew5wvqF&t=953


As jampekka put it, text isn't trying to use spatial abstractions, it's using the (arguably more powerful) abstraction of named values/computations. Hard to think about? Yes, there's a learning curve to say the least. But it seems to be worth it for a lot of cases.


Text doesn't use spatial abstractions.

The problem with spatializing complex relationships becomes very apparent when one tries to lay out graphs (as in nodes-and-edges) graphically. Unless the relationships are somehow greatly restricted (e.g. a family tree), the layouts become a total mess, as the connectedness of the nodes can't be mapped to distances and the edges by necessity have to make a lot of crossings on top of each other.


I think you are limiting visual programming to some sort of a single graph that has everything together, but similar to how we split code into modules and files, visual systems can do the same, here's an example I keep showing in this thread of a visual programming system that works pretty well: https://youtu.be/CTZeKQ1ypPI?si=DX3bQSiDLew5wvqF&t=953


Writing is based on speech, which is one-dimensional. Most programming is actually already highly two-dimensional thanks to its heavy line orientation.

But most visual programming isn't trying to be a kind of "2D orthography" for language, it is trying to be a "picture" of a physical mechanism.


That 2D orthography idea is my pipe dream. Any time I am writing several similar lines of code with variables of different lengths, I always want my IDE to acknowledge that some 1-symbol operators would look so nice if aligned in one vertical line.


So why not do 2D visual programming with access to symbols that are not spatially bound? Is there any reason why a 2D plane forces the programmer to think in terms of a plane that doesn't also apply to a 1D line of text?

It seems to me that reducing the frequency with which programmers have to drop into non-spatial symbols would be beneficial even if there are still use cases where they have to.


With text/symbolic representation I can describe any number of dimensions in a super dense way, and physicists/mathematicians are doing that; software devs as well, because most software is multidimensional.

You do have graphs in mathematics, but all the maths I see is about describing reality in a really dense couple of symbols, compressing as much of the universe as possible into something like E=mc^2.

Graphical programming representations go the other way: they actually try to use more bits to describe something that can be described in fewer bits, many fewer bits.


Mapping to a plane doesn't help you understand how state changes occur over time, or what the overall state of the state machine is.

The only time I've seen visual programming work is when the state is immutable. However, it requires a major paradigm shift in how one designs, develops and tests their programs.


It's pretty basic topology: embedding versus immersion. You cannot embed anything but the simplest software in a 2D plane. You end up having to endlessly try to structure things to minimize line crossings, make multiple diagrams of the same code to capture different aspects of it, or otherwise perform perversions just so it fits on a page.

And I lived through this. Most of my early career was DoD-related in the early '90s, when things like functional analysis were all the rage: endless pages of circles and lines which were more confusing than helpful, and which certainly didn't actually capture what the software had to do. Never again.


I agree with you; however, the elephant in the room is that an image or topology doesn't predicate anything (a predicate is seen as a property that a subject has or is characterized by). That is the main delineation between them, and why SPO (subject-predicate-object) is used universally by all modern languages, albeit some have different SVO orderings, but I digress.

The next major drawback with visual programming is that it doesn't explicitly convey time. You have to infer it via lines or the sequence of diagram blocks. Whereas in a programming language you have sequential order of execution, e.g. left to right, top to bottom, of the program flow and state change. If you attempt to achieve the same with a stateless event-driven bus or message queue, you end up having to embed the sequential control flow into the event payload itself.


I would venture a different take: visual programming makes quite clear the mess of programs some people create when they don't follow modular programming.

Complex flows can be packaged into function and module representations, instead of dumping everything onto a single screen.


Much like designing Integrated Circuit chips, vs. only doing basic breadboard-ing.


The spatial (usually largely 2D in IC) constraints are a huge limitation for circuit design. I'm quite sure chips (or breadboards) wouldn't be designed like this if the physical world wouldn't force the geometry.


I meant more that the very concept of an IC is a good idea, and like a good abstraction in programming.

I think what pjmlp was getting at is that when using visual programming, a lot of people seem to turn off (or not cultivate) the part of the thought process concerned with creating good abstractions, despite it at least being possible to do so.


Exactly, packaging transistors into IC modules, so to speak.


Yes, very much so.


I got the Apple Vision Pro with the hope to tinker with such things. Is one more dimension enough to "unlock" visual programming? I don't know, and unfortunately not many seem interested in exploring it.


I don't think extra dimensions help. Even simple functions have easily tens of interconnected references and representing these spatially is gonna be a mess even in higher dimensions.


I personally won't ever be interested in VR until it has "generic computing" as a major feature.

Like automatically creating a 3D world showing pipes as your internet connections, and some kind of switches and buttons and things for every single thing you can do with your computer, including the complicated-ass command line and GUI windows.

And all the tools necessary to reduce or increase the complexity of it as I see fit as a user.


And yet IDA and Ghidra use that same 2D representation structure for basic blocks (e.g. https://byte.how/images/ghidra-overview/graph-edges.png ), showing code flow between the blocks.

I have had better-than-average success representing the high-level sequence of computer-y actions using sequence diagrams, and strongly suspect my audience would not have the same comprehension if I used pseudocode or Python or C++.

Where I think the anti-visual programming audience and I can agree is the idea of a standard library, since a bunch of diagrams showing a "length" message being sent to a String object is for sure the wrong level of abstraction. In that way, I'd guess the DSL crowd would pipe up and say that is the same problem a DSL is trying to solve: express the code flow in terms of business nouns and verbs, and let the underlying framework deal with String.length nonsense

I've seen a hybrid approach to this in a few testing frameworks, such as Robot Framework <https://robotframework.org/robotframework/latest/RobotFramew...>


The reason it works out so well (contrary to many people's intuition) is that most programming is done in structured languages or in a structured style these days. This significantly reduces the number of entry points (typically down to 1) for any block of code, though it can still have many exit points. Unless someone abuses something like Duff's device or uses gotos to jump into a block (and the language permits it), flow diagrams like in the linked image end up being pretty tidy in practice.


By this logic it's strictly better than a linear textual document, no? In a graph you can explicitly draw lines between connections.


But program code is 2D as well. And quite limited 2D, with 80 characters width max (or similar, but never unlimited).


Code is 1d with named links.

Visual languages trade named links for global wiring, which is very cluttered for serious problem solving.


> Visual languages trade named links for global wiring

Existing visual programming langs can definitely do "named links". A lot support named function-like blocks which are another form of avoiding wires.

> which is very cluttered for serious problem solving

This clutter is also problematic in textual programming, and is the reason abstractions and programming structures are used. Perhaps the hint here is that we need better ways of representing abstraction in visual programming.


Code is not 1d, a single if() already creates another line that makes it 2d


Mathematicians like to use parametrizations to measure "how many dimensions" something has. If you need two indexes to traverse it (x and y), it's 2D; if a single index works best to describe it, it's 1D.

Another way to think of it is "there is semantic meaning to 'the character to the right/left of this one', but is there to 'the character above/below this one'?" In most programming languages, there isn't at all.


How is `if` related with creating a new line? And how does new line make something 2D? If code was 2D you could write code anywhere in your document without juggling around spaces and newlines


You could argue it's 1d, actually, since sequence is fundamental, not positioning on the x axis.

At any rate it's (mostly+) categorically different from what visual programming attempts. Code must be read, comprehended, and a mental model built. Visual programming is designed to give a gestalt spatial intuition for code structure -- a different kind of comprehension.

+Indent and spacing between functions/methods does count as a tiny bit of visual programming IMO


Nah, code is 1D with line breaks for visual comfort


The fact that you can give things names means that there is rarely a need to follow the edges, so the visualization is much less cluttered.


Merging source code line by line is a solved problem. Merging visual code/graphs/graphics is often simply impossible. Also versioning and simply showing diffs become difficult problems with visual programming. That is why visual programming will never scale beyond small toy projects maintained by a single developer.
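
To be fair, part of this is a serialization problem: if the editor stores its graph in a canonical, stable text form, ordinary line-based diffs recover some of their usefulness, though merge semantics remain the hard part. A sketch, with a hypothetical node-graph format:

  # Sketch: canonicalizing a (hypothetical) node graph so that plain
  # text diffs stay local and readable; merging is still unsolved.
  import json

  graph = {
      "nodes": {"n2": {"op": "add"}, "n1": {"op": "const", "value": 3}},
      "edges": [["n1", "n2"]],
  }

  def canonical(g):
      # Stable key order, one entry per line -> small, local diffs.
      return json.dumps(g, indent=2, sort_keys=True)

  print(canonical(graph))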

That said, I agree that visualising your code base might give additional insights. However that is not visual programming, that is code visualisation.


Maybe it's not impossible but just quite difficult? I use Houdini 'Vops' sometimes and I could imagine a tricked-up diff could be made for it (especially since it translates to vex) but you're certainly right that it's a hard problem in general!


Well visual programming is standard in Unreal projects and they definitely scale beyond toy projects with a single developer. Although Excel is the most popular visual 'programming language', the second most popular is surely Blueprint.


"if you connect to source control within the editor you can at least diff blueprints to compare changes. though it's currently not possible to actually merge them."

https://www.reddit.com/r/unrealengine/comments/1azcww8/how_d...

So it seems like basic functionality like merge is still missing from visual coding in Unreal.

But yes, there were also huge projects before the invention of distributed version control systems. But that wasn't a good world and why go back?

P.S.: Have you ever tried to merge two different Excel files?


Yes, but you don't need merge to scale up things like game projects. You can just carefully partition the work between people. Perforce supports file locking for this reason. And, a lot of merge conflicts in software are thanks to the use of symbolic naming.


    <<<<<<< HEAD
Version control is not solved, and that it works line by line is part of the problem.


This is very true. Line merges do not always work well. There used to be a tool called SemanticMerge which was able to handle merging code cleanly even when the code had been refactored. It saved me quite a bit of work a handful of times (before it was taken away because the company needed a value add for their paid version control software).


A crucial distinction between visual programming and code visualization


In banking, Camunda is incredibly popular.

You model state changes visually. The model - the diagram with boxes and arrows - IS the code. And then the boxes can have additional code logic in them.

It's a giant pain to work in and debug. But the execs love it because they want to see the diagrams.


I'd rather generate data from the diagram and match that against the data in the actual code. This way we get all the benefits.


Know of any open source equivalents to this?


Google tells me there is something called ProcessMaker, I have never used it personally


Re-posting my 5-year-old article "Rethinking visual programming with Go" here [1]

[1] https://divan.dev/posts/visual_programming_go/


I think we need functional visual programming.

It seems to me like referential transparency and pure functional composition would be a much cleaner way to visually compose functions into larger functions (and eventually programs).
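
A sketch of why that might work (Python standing in for the visual layer): with referential transparency, a node is just a function and a wire is just composition.

  # Sketch: with pure functions, "wiring boxes" is just composition,
  # which is why functional style suits a visual editor well.
  from functools import reduce

  def compose(*fns):
      return lambda x: reduce(lambda acc, f: f(acc), fns, x)

  double = lambda x: x * 2
  inc = lambda x: x + 1

  pipeline = compose(double, inc)  # two boxes joined by one wire
  print(pipeline(10))              # 21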



The first link in the "Codebase visualization" section is broken.

The linked talk is intended to be C++Now 2018: Eberhard Gräther - The Untapped Potential of Software Visualization, available at https://www.youtube.com/watch?v=fnIFVYFspfc


Thank you! Should be fixed in a minute



I have another take on visual programming

We need programming environments that can understand both textual & visual code.

We need a new primitive which I call the visual-ast

encode the AST in HTML via data attributes, and have a system to ignore irrelevant HTML nodes, giving space for rich UIs to be developed in place of typical AST nodes.

eg.

  // textual
  1 + 2
  
  // ast
  {
    kind: "plus",
    lhs: { kind: "int", value: 1 },
    rhs: { kind: "int", value: 2 }
  }
  
  // visual-ast
  
  <div data-kind="call">
    <div data-attr="lhs">
      <div data-kind="int">
        <!-- typical encoding of a value -->
        <div data-attr="value" data-value="2">2</div>
      </div>
    </div>
     <div data-attr="rhs">
      <div data-kind="int">
        <!-- here we can use an input tag but you can do something more fancy -->
        <input type="number" data-attr="value" data-value="1" value="1">
      </div>
    </div>
  </div>

What you can do with this AST is create rich UIs which contain the appropriate `data-attr`s (ignoring the other elements), and now you have a generic system for interweaving textual & visual code.
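
And here is a rough sketch of the decoding direction, assuming Python's html.parser and the data-kind/data-attr convention above; anything without those attributes is treated as UI chrome and skipped:

  # Rough sketch: walking visual-ast markup back into a plain AST.
  # Elements without data-kind/data-attr are UI chrome and are skipped.
  from html.parser import HTMLParser

  VOID = {"input", "br", "img", "hr"}  # tags that never get an end tag

  class VisualAst(HTMLParser):
      def __init__(self):
          super().__init__()
          self.root, self.nodes, self.slots, self.open = None, [], [], []

      def handle_starttag(self, tag, attrs):
          a = dict(attrs)
          if "data-attr" in a and "data-value" in a:    # leaf value
              self.nodes[-1][a["data-attr"]] = int(a["data-value"])
              if tag not in VOID:
                  self.open.append("leaf")
          elif "data-kind" in a:                        # new AST node
              node = {"kind": a["data-kind"]}
              if self.slots:
                  self.nodes[-1][self.slots[-1]] = node
              else:
                  self.root = node
              self.nodes.append(node)
              self.open.append("node")
          elif "data-attr" in a:                        # attribute slot
              self.slots.append(a["data-attr"])
              self.open.append("slot")
          elif tag not in VOID:                         # UI-only element
              self.open.append("chrome")

      def handle_endtag(self, tag):
          if tag in VOID or not self.open:
              return
          kind = self.open.pop()
          if kind == "node":
              self.nodes.pop()
          elif kind == "slot":
              self.slots.pop()

  p = VisualAst()
  p.feed('<div data-kind="plus">'
         '<div data-attr="lhs"><div data-kind="int">'
         '<div data-attr="value" data-value="1">1</div></div></div>'
         '<div data-attr="rhs"><div data-kind="int">'
         '<input type="number" data-attr="value" data-value="2"></div></div>'
         '</div>')
  print(p.root)
  # {'kind': 'plus', 'lhs': {'kind': 'int', 'value': 1},
  #  'rhs': {'kind': 'int', 'value': 2}}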


If we are going that route, why not go with a lisp-like language?


The Big Tech monolith (`bingo`, `papaya`, etc.) is a fantastic callback to this video: https://www.youtube.com/watch?v=y8OnoxKotPQ. Although that video is perhaps ironically about microservices.


The problem with most visual programming is that most platforms avoid making any tradeoffs.

A good visual programming tool should abstract away complexity but it can only achieve that by reducing flexibility.

If you're going to give people a visual tool that is as complex as code itself, people might as well learn to code.

It helps to focus on a set of use cases and abstract away common, complicated, error-prone, critical functionality such as authentication, access control, filtering, schema definition and validation. All this stuff can be greatly simplified with a restrictive UI which simultaneously does the job and prevents people from shooting themselves in the foot.

You need to weed out unnecessary complexity; give people exactly the right amount of rope to achieve a certain set of possible goals, but not enough rope for them to hang themselves.

I've been working towards this with https://saasufy.com/

I've chosen to focus on CRUD apps. The goal is to push CRUD to its absolute maximum with auth, access control and real time data sync.

So far it's at a point that you can build complex apps using only HTML tags. Next phase would be to support generating and editing the HTML tags via a friendly drag and drop UI.

Still, it's for building front ends. It cannot and will never aim to be used to build stuff like data processing pipelines or for analytics. You'll need to run it alongside other services to get that functionality.


As a software developer, this article made sense to me, although I would want it to include a few more useful UML diagrams. Models is the keyword here to me, not "visual".

    User Feature -> Feature Model -> Architecture Model -> Source Code
Speaking from a software-analyst perspective, models are used throughout. Many complex projects need a model of functionality to bridge understanding between stakeholders and developers regarding the (agreed-upon) required feature in a given problem domain. The resulting models and code should be on par.

Some buzzwords to google:

- Business Process Modeling and Notation (BPMN)

- Model Driven Architecture (MDA)

- Model Based System Engineering (MBSE)

In theory, the developer output is a function of the desired functionality. If the functionality fits a parsable model, we should be able to transcode it into source code. In a nutshell, this can be a result of adopting MDA and/or MBSE.

To sum up, I believe software development should happen from models that "generate" code, which can then be augmented by software developers. Updates to a model should result in updated code.
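
A toy version of that generation step (hypothetical model schema), just to make "models generate code" concrete:

  # Toy model-to-code step: the feature model generates source that
  # developers can then augment (hypothetical model schema).
  MODEL = {
      "entity": "Invoice",
      "fields": [("number", "str"), ("total", "float"), ("paid", "bool")],
  }

  def generate(model):
      lines = [f"class {model['entity']}:"]
      args = ", ".join(f"{n}: {t}" for n, t in model["fields"])
      lines.append(f"    def __init__(self, {args}):")
      for name, _ in model["fields"]:
          lines.append(f"        self.{name} = {name}")
      return "\n".join(lines)

  print(generate(MODEL))  # re-run after a model update to stay in sync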


Great article. I do agree with it, while I'd add this conclusion (personal opinion):

- Usefulness of visualizing the codebase, interconnections, memory layouts: all of these are attempts at an overview of things that aren't immediately clear. It's an attempt at an outside-of-the-box view, which becomes necessary in larger codebases/environments/companies. This is very useful for, say, an architect of a system (or systems), and even for individual contributors who are not comfortable in the current view (they may be struggling, or they may achieve better performance - of either the app or themselves - with the insight gained from these visualizations).

- Actual visual programming only offers "boxes" of functionality and makes you frame everything the way it was initially imagined. It limits expressiveness and makes your mental model adopt the framework's way of thinking. Everyone I know has abandoned visual programming because they feel it limits them ("It's a nice toy, but it's too difficult for me to create something more complex" is a common thing I hear).


> One reason is because we think that other, more inexperienced, programmers might have an easier time with visual programming. If only code wasn't as scary! If only it was visual!

Reminds me of the Mythical Non-Roboticist: https://generalrobots.substack.com/p/the-mythical-non-roboti...


It makes me think of https://moldabledevelopment.com/

You do not code in the visual environment but it helps you to create tools to visualize exactly what you want/need to see. The interactivity is hard to beat and once you are used to it, you can quickly create visualizations that you can throw away (or keep).


I'm working on visual programming for Python. I created a Python editor that is notebook-based (similar to Jupyter), but each code cell in the notebook has a graphical user interface. In this GUI you can select a code recipe - a simple code step; for example, here is a recipe to list files in a directory: https://mljar.com/docs/python-list-files-in-directory/ - you fill in the UI and the code is generated. You can execute code cells in top-to-bottom order. In this approach you can click Python code together. If you can't find a UI recipe, then you can ask the AI assistant (running Llama 3 with Ollama) or write custom Python code. The app is called MLJAR Studio and it is a desktop application, so all computations run on your machine. You can read more on my website: https://mljar.com


I've been dreaming about a visual editor for Clojure/Lisp, where inputs, outputs, and side effects are clearly identified visually, similar to Feynman diagrams. I gave a talk about these ideas at a local Clojure meetup a while back: https://youtu.be/edQyRJyVsUg?si=q0M0D2jfsq9GmnqB


The article mentions a couple of what I think are relevant examples: state machine diagrams and swimlane diagrams. The author makes a great point in the beginning about how programmers don't need to visualize iterator or branch logic.

Language structures are what they are, we all learn them and know them; they're the tools we're familiar with and don't need a diagram for. What changes all the time (and what made the swimlane and machine diagrams relevant) is the business logic. This is the part that continues to evolve, that is badly communicated or incompletely specified most of the time, and that is the part most in need of increased visibility.

In my experience, this relates closely to what's really important in software development -- important to those who pay the software developers, not to the developers themselves.

I've seen lots of architecture diagrams that focus on the pieces of technology -- a service here, a data bucket there, etc etc. I think that reflects the technical person's affinity for and focus on tools and building blocks, but it puts the primary motivations second. To me, the key drivers are the "business" needs - why do we need the software to do the things, who will use it, and how.

In my work, I try to diagram the workflows -- the initial inputs, the final product, and the sequence of tasks (each with some intermediate ins and outs) in between, noting which roles / user personas execute them. A kind of high-level UML diagram with both structural and behavioural elements. I find that it raises key questions very early on, and makes it easier to then continue laying down layers of increasing technical detail.

If I were to design a visual language, this is where I would start - formalizing and giving structure to the key concerns that motivate and inform software design, architecture and development.


The problem is that "those who pay developers" don't care to do it on their own. Heck, a bunch of business analysts don't care about going down into the gritty details - so even if you standardize stuff, it won't shorten the loop.

The only thing it will do is rob developers of the flexibility and level of control with which they can fix up any "management business grand plan". Just like all those low-code platforms do.

For me, low-code and visual programming platforms are the same - good ideas for someone who doesn't understand the technical details.


"Language structures are what they are, we all learn them and know them; they're the tools we're familiar with and don't need a diagram for"

If I have a nested construct of various control flows together with some ternary operators, I do wish for something more visual. Or when trapped in parenthesis hell. Yes, I can read that. But it takes energy to decode it.

if while (x<y×2)?(((x...


So I don't see a problem with just doing a quick rewrite of the code to make it cleaner.

With Git you can commit it locally and never publish it, so as not to offend teammates :). With an IDE I can reformat text and refactor it in a matter of seconds. You can rewrite it just enough to understand it.
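
For example (my own illustration, not from the parent), the kind of ternary chain mentioned above can be mechanically flattened in seconds:

    const score = 85;
    // before: a dense ternary chain you have to decode
    const grade = score >= 90 ? "A" : score >= 80 ? "B" : score >= 70 ? "C" : "F";

    // after: the same logic, rewritten just enough to understand it
    function gradeOf(s: number): string {
      if (s >= 90) return "A";
      if (s >= 80) return "B";
      if (s >= 70) return "C";
      return "F";
    }
    console.log(grade === gradeOf(score)); // true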

For graphical representations there are no such tools to help you, and the graphical representation will most likely only be worse.


What about all the flavors of UML?

https://en.wikipedia.org/wiki/Unified_Modeling_Language#Diag...

Note "executable UML" is not just about diagramming classes and filling in the behaviors but also activity diagrams, state diagrams, etc.

My frustration with it is that the standards don't quite give you enough to make the revolution happen. For instance, you should be able to construct a model for all of UML 2 based on EMOF, which would let you build a rather simple and compact foundation, but there is a strange misalignment between MOF and UML 2 (roughly, MOF is based on UML 1). It's the kind of problem I think I could solve if I had two months to chew on it. However, I know (a) I struggle to make this kind of thing turn a profit in any sense of the word, despite (b) there probably being somebody out there wanting to make this happen and struggling.


Sometime in the early 2000s they generated all the flight code for the James Webb Space Telescope from UML diagrams with Rational Rose. Over a decade later they were still trying to unfuck all of it.


Any link for that? I think it's a good real-life example


Just what I remember from an internship. A quick google search just shows a few puff pieces.


I just want an IDE that abstracts my code just enough so that I can work with tokens, not individual characters. I spend way too much time fixing syntax and formatting when moving things around.


Maybe give Cursorless a try. Although they mostly show off the voice recognition, it has a keyboard interface too.

It lets you edit with high-level commands like "swap argument a with b", or "move function x before function y".

https://www.cursorless.org/


You're using the wrong IDE then - any serious IDE will do exactly that (have an AST representation that it uses for refactoring). E.g., JetBrains IDEs do.


It's a problem I think about more generally quite a bit. In general, the problem is optimizing the efficiency of translating thought into machine code. Our thoughts move much faster than our hands. Our thoughts don't execute linearly - they're able to seek to many random ideas and back without breaking a sweat. I don't think visual vs. text is going to make much of a difference, because both use the same interface device. Honestly, if you think about it that way, text-based workflows are going to win every time because the keyboard is much more precise and fast than a mouse. My point is that this will always be an interface-device problem, not a visual vs. text (mouse vs. keyboard) problem. FWIW, I think KBM and text with modern tooling is incredibly efficient already, but my mind is open to some new interface device that changes everything. Not so sure that means VR, because VR is really just a pair of pointers (two mice).


More broadly it feels like there's a lot of potential alpha in the tooling space that just gets left on the table by the industry

I don't know what the reason is. Making tools is hard? Thankless? Tough to generalize? Devs are reluctant to adopt new tools?

Whatever the reason, I feel like workflows could be an order of magnitude more effective with better tooling. And I don't even mean the table stakes of "can install dependencies and reliably build and run on different laptops" (which some ecosystems still fail). There's huge untapped potential for "tools that tell you interesting things about your code and/or program"

My personal theory is that the design of most programming languages today makes static analysis much harder than it needs to be. And then I guess, runtime analysis is just fundamentally hard to tack onto an existing language no matter how it's designed


This article really hits home for me. I've been working on a tool called Code Charter, which aims to do exactly what you're advocating for: provide visual representations of code that make it easier to understand.

I'm taking a slightly different approach than some of the tools mentioned here. Code Charter uses successive LLM calls to distill the key patterns (i.e. business logic) in code, creating a semantic map for call graphs in a codebase. This is useful for understanding the architecture of a project and, during development, for moving between the high level and the implementation code.

It is still in development and will be released soon as an extension for VSCode.

Check it out if you're interested: https://github.com/CRJFisher/code-charter


I feel like one of the reaaally tricky bits of visual programming is avoiding bundling language and interface together.

Developers all have lots of (different) opinions about editors, but those are fortunately separated from the language they choose. People can work together on TypeScript using Emacs, VSCode, Zed, etc., and it all gels.

The second someone starts with "to use this language you have to use this editor" the bar is sooo high for developers not getting their backs up.

It would be immense to have a visual way of programming that was language agnostic, but you'd need a seriously intense protocol to make that possible.

It does make me think of things like Smalltalk and Visual Basic that were designed to support multiple approaches. Interesting how that research avenue has drifted off recently.


TBQH, I think developing bespoke visualization-to-code compilers for lots of different visualizations will probably lose to multimodal coding LLMs within the next year. Claude 3.5 Sonnet is already very good with text prompting — I'd expect another year of model releases to basically solve turning visualizations, diagrams, etc into workable code.

The bitter lesson of ML is that doing lots of bespoke things per-domain eventually loses to just using a better model. And each of those visualizations is very bespoke, and 3.5 Sonnet really feels like it's on the cusp of this stuff.

That being said, I think the core idea is right: use the visuals developers already use! This will help communicate more effectively to the models, too: there's already a large corpus of those kinds of visualizations.


I use both scripting and models to generate diagrams, and I think there's space for some simple balance. Basically, there will be things we repeat often enough, where we want a fast, detailed, repeatable solution. And there will be one-offs you will ask for, and if it's wrong, you'll fix it by hand.

There's space for both and we'll probably migrate the best ideas both ways. (can't wait for a local fine-tune which can do ad-hoc diagrams with a pleasant layout in excalidraw format) I don't think either way is going away soon.


EAGLE MODE

I'd kill for a modern version of eaglemode

https://eaglemode.sourceforge.net/


What do you mean by "modern", and in what way does the existing implementation of Eagle Mode not meet that criterion?


I feel like I ought to be able to plead the "porn definition" thing here.

YOU KNOW IT WHEN YOU SEE IT, COME ON. :)

(fair question, I'd have to think about it some more)


The point is that "modern" doesn't actually mean anything at all -- all you're saying is that something seems to you, subjectively, to be somehow newer in relation to older things, but you're (a) talking about yourself, not about the thing you're evaluating, and (b) not even describing any of the concrete qualities of the thing itself that may be informing that subjective impression.

When someone describes something as "modern", are they saying that it's large or small? Is it bright or dark? Simple or complex? Fast or slow? That term encompasses no answers to any of these questions.


Oh no. I think all I mostly mean here is "plays more nicely with current stuff."


Interoperability seems very "classic" to me -- the "modern" trend is to try to turn everything into a walled garden that diverges from common interface conventions and stops supporting standard protocols.


In some sense, all programming is visual. Our methods of programming are not designed to be listened to, or felt. They are designed to be looked at.

This has the interesting effect that we care more about how code "looks" more than necessary. We place an emphasis on code formatting, even though how the code is laid out doesn't affect how it gets executed. We often talk about "lines" of code or "blocks" of code, even in languages where lines or blocks don't carry any semantic meaning.

This is just my opinion man, but I suspect this is why Python is popular, while Lisp never caught on. Despite being grammatically simple, normal Lisp code looks like a bracketed mess. A lot of people can't get over that first impression.


Great article! I was afraid it was going to be trying to formalize a lot of visual ideas similar to what UML tried to do back in the day. Instead, this is a very nice catalog of many good visualizations that can be used as aids to the task of programming.


obviously not the point of the article but I really appreciate Unreal's visual programming flow.

I think an underrated idea from visual programming is that futzing around with linear files to try and connect a bunch of stuff together is not that fun in an event-based system. Stuff like Scratch unlocks the right idea: code structure is important at a micro level, but at a macro level it's... kind of busy-work, and it's hard to offer good structure in a way that's discoverable.

My main complaint with Unreal blueprints, though, is that sometimes you really do just want to write a bunch of code in a linear sequence. Give me a little textbox to just pop into within my diagram!


What we need is auto-generated diagrams. Not visual programming.


I really miss ObjecTime ROOM. We used it a lot for embedded software development, and it basically allowed you to model the "big picture" of your application in (extended) UML and add C/C++ for the details. The generated code for the rest was usually very efficient, and the runtime was pretty portable and slim as well.

It became IBM Rational Rose Realtime (! the last word makes a big difference) and then it kind of disappeared from what I know. I believe they tried to integrate it with Eclipse and maybe they did but the IBM website is such a jungle to navigate, I never found something comparable.


Have you used the QP framework (https://www.state-machine.com/)? It supports UML hierarchical state machines with code generation. One embedded dev I know shipped a couple of products with it and still likes it.


Thank you, never heard of it but I'll take a look at it. Looks promising on a first glance.


Great article. To circle back to the point, some of these visualisations also contain hints about how they could be used for actual programming.

E.g., changing the arrow type in the ownership diagram is a good example of a change that is very annoying to make manually in Rust but trivial in the diagram.

The challenge is to connect this action to the whole programming workflow, which is currently optimized for text - I'd even say overfitted to it. Rust especially is famously difficult for tools to process. I think we might need a language that is specifically designed to be processed by visual tools to make it all work.


You say several times that developers say they want visual programming, but I've never heard any developer ever say this. Is there some particular context where you've heard people say this?


I develop, and I'd like at least a visual of my codebase as it interfaces with other systems as part of automatic documentation. So n=1, I reckon.


Yeah, in fact, most interfaces - video editing, Airbnb, 3D modeling - are some sort of visual interface. I understand code doesn't resemble a physical object.

But books and paintings were our best approximation of reality, then technology allowed us to make movies and photos. I feel like code being lines of text isn’t the best abstraction. But finding a better one won’t be easy.


Check out Spring Modulith at https://docs.spring.io/spring-modulith/docs/current-SNAPSHOT.... Their pitch is that the application framework organizes your code in such a way that it makes the architecture of your application easier to visualize and to reason about. I haven't tried it, but I have seen some talks on YouTube that make me want to give it a shot in the future.


> Developers say they want "visual programming", which makes you think "oh, let's replace if and for". But nobody ever made a flow chart to read for (i in 0..10) if even?(i) print(i).

I'm not convinced by this particular example. Wouldn't a visual programming language just represent the logic here as a pipeline connecting two more-atomic operations? You'd have a visual representation where you pipe the (0..10) range through a function that filters for even values, and then pipe the result to a print function.
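
For reference, the textual analogue of that pipeline is itself just three chained operations; a quick TypeScript sketch:

    [...Array(10).keys()]              // the (0..10) range node
      .filter((i) => i % 2 === 0)      // the "filter even" node
      .forEach((i) => console.log(i)); // the "print" node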


I do not think they are saying that it is hard to visualise, but that it does not offer much utility to do so. A "for" loop like that is not that complicated to understand, and visualising it externally does not offer much. The examples the article gives are about more abstract and general overviews of higher-level aspects of a codebase or system, or about explaining some concept that may be less intuitive or complicated. In general it is less about trying to be formal and rigorous, and more about being explanatory and auxiliary to the code itself.


That's a good point. Functional programming is a much more appropriate foundation for visual coding. Not only because of functional operators in your example, but immutability and purity also makes things simpler when represented visually.

Circuit modeling (like in Max/MSP, Reaktor, Pd) is something that also works way better visually than imperative programming.


Dataflow paradigm


It is certainly possible and that is how most of these visual languages do it. But is that how _you_ want to program that logic?


That's how Smalltalk does it. I believe Scheme-family languages do it that way too: use a generator and then filter and select. Self as well.


Or just a good visualization for list comprehension.


I built a little game around the idea of a visual representation of s-expressions. It is meant to make it easier for newcomers to learn to program. The visualization not only allows you to edit programs more easily, but also to observe them as they run.

Heavily inspired by MIT's Lisp-based SICP course.

The game uses Rust+SDL, and is playable in the browser thanks to WASM. https://www.bittwiddlegames.com/lambda-spellcrafting-academy...


Love the idea.

Not sure if the demo represents the final state, but please add an option to adjust the text speed - importantly, including an option to show the text immediately. Yes, you can double-click, but that is unergonomic and adds a delay. As a fast reader, I skipped over most of the explanations because I just couldn't tolerate it. Also add the option to progress the dialogue via key press instead of the mouse. And the text needs a margin on the left and right.

This stuff might sound minor but can make a huge difference in player retention. Definitely looking forward to trying your game out once it has been polished a bit more.


For me this is a class of “super-linter” - and potentially a unit test

If (if!) you can extract the diagram (or better, the graphviz representation of the diagram) from the code, then you immediately have a test that says "hang on, you changed the code so it no longer reflects the agreed diagram - that seems like a test fail to me".
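
A sketch of what such a test could look like; the graph-extraction helpers here are hypothetical stand-ins you would have to supply for your own codebase:

    import { readFileSync } from "node:fs";
    import { strict as assert } from "node:assert";

    // hypothetical helpers: walk the source and render its call graph as DOT
    declare function extractCallGraph(srcDir: string): unknown;
    declare function toDot(graph: unknown): string;

    const current = toDot(extractCallGraph("src/"));
    const agreed = readFileSync("docs/architecture.dot", "utf8");
    assert.equal(current.trim(), agreed.trim(),
      "the code no longer reflects the agreed diagram - test fail");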

This is not to dismiss the article - it’s fantastic and I want all of those capabilities now. It’s just to emphasise (as the article does) that in almost no cases is this visual first - it’s code first


Having diagrams and code is bad: they are redundant, they will get out of sync, and you will have to figure out whether the code or the diagram is correct. That leaves us with code generation from the diagrams - or, more accurately, from the languages describing the diagrams - or creating diagrams from code. For understanding a code base it is the latter that you want; you do not want visual programming but code base visualization. There are tools for this - for example NDepend [1] for .NET - but they are not really what you want; they are not capable of abstracting enough. Summarizing a million lines of code in a handful of good diagrams is a very difficult task, for humans as well as for machines.

The other way around - code from diagrams - also does not really work. Those diagrams are domain-specific languages in disguise, built to concisely express a very specific programming task. This was tried in the 90s under the label of model-driven programming: you just drew all kinds of diagrams, which probably meant that under the hood you were producing XML documents; those could then be fed into code generators, and you were done without the need for any real coding. But as we all know, once you step beyond the complexity of hello world, essentially nothing fits any model perfectly; there is always this weird edge case or completely nonsensical requirement. And now you start messing with your code generators, or wrapping the generated code in another layer to get the edge cases adjusted. Or you are writing your own visual designers and code generators.

[1] https://www.ndepend.com


Most of the time it is hard to convince "normal" people that text is the end game.

Text is all-powerful, because you can encode any level of abstraction with text alone. If you need access to the nitty-gritty details, having text is a godsend; if you have to work at a high level, that's also fine - you can just wrap the lower-level stuff in descriptive abstractions.

We also already have tooling for dealing with text that goes as far back as computing itself - and, if we leave a bit of space for interpretation, even longer.

With Git/Vim/AWK/grep/sed you can be in charge of vast universes created in your mind and share/collaborate on those with others. While an image may be worth a thousand words, the options to manipulate/share/collaborate on text are so much better in every conceivable way.


I tried working on something like this for unit tests. This inspires me to get back to it.

My idea is to make a game out of every source code file. When you win the game, you will have 100% coverage.


Visual programming works best when it is used on a limited domain with a higher level of abstraction than programming languages such as C. For example: data transformation, image manipulation, sound processing, etc. More discussion of visual vs. code here:

https://successfulsoftware.net/2024/01/16/visual-vs-text-bas...


I'm making two parallel attempts at solving this problem. I have some time on my hands for the next six months.

One serious, as part of a long-term research project where I will be relying on some AI techniques to create the UI ( https://youtu.be/sqvHjXfbI8o?si=-PDXQes5i4JglBQj&t=411 ), and one as a game/exploration: a kind of multi-layered/multi-dimensional Redstone programming.

The first will be for tiny machine-generated programs linked together. The second is for an abstract physics game which will be for learning, fun, and hopefully some tiny profit on Steam. (Will appear here https://store.steampowered.com/search/?publisher=My64K when playable)

In both, I am adding severe constraints to the VP design, but the game one will be the most interesting. I'm looking to add a kind of cellular-automata-mediated physics that also provides gradual automated optimization. Think programming in Minecraft with Redstone, but with multiple dimensions and a regular polygon substrate. The key ideas I am exploring in both are:

1) Can we design a substrate that enforces some order that solves the tangle problem?

2) Within a substrate, can an algorithm be "crystalized" or "folded" into something recognisable by its shape?

Starting next week. Should be some fun coding.


I always like to make the distinction between models of architecture for developers (the solution description) and models describing functionality (the problem description). One is a result of the other. Both are sometimes not written down but exist in team members' heads. Both are used for communication. Both should be in sync.

If an analyst can create the "perfect functionality model", then a developer can parse this into the "perfect software", provided he created the "perfect architecture model" to assist this. See my other post on MDA and MBSE.

When developing (parts of) complex software, I always liked doing something akin to "Unit Testing" (plus a custom UI/report) that basically checked the boxes of an Architecture-Model I wanted to adhere to. With near-instant compilation (or hot-reloading), it becomes easier to see a live reflection of the data going through your pipeline.

A good End-To-End (E2E) test can do the same for a Feature-Model, e.g. using Test Driven Development (TDD) or any derivative of it. However, today's tooling is still behind on this, compared to the tooling developers have for architecture. Only a few solutions exist, and those are tech-giant level (IBM) or contained and highly specialized for one business sector / problem domain.


I seem to have a fascination with code visualization and exploration. Looking at a complex codebase through a variety of perspectives can have a lot of utility. I like the heatmap a lot but that's only of statistical benefit and less exploratory.

I created a rudimentary graphical Ruby programming environment https://x.com/RussTheMagic/status/1809091784946921670 - however, I reached conclusions similar to this article's; while I could do lazy evals, method parameters, and all sorts of song and dance... that's not actually what I needed - which was a way to better work with and tinker with code. I've used irb/pry extensively for this, but they've always had their limitations there.

I wanted to be able to see what the result of the ruby code was as it was executed line by line and be able to explore the return values, so I made another version which is a realtime, visual irb called revirb https://x.com/RussTheMagic/status/1811427507784315309


A unique project I saw recently that is developing a kind of visual programming environment with the goal of unifying design and code.

https://www.pax.dev/

> Pax: Build UIs visually in Rust

> A library for building web & native applications alongside visual creative tools

Meet Pax: https://www.youtube.com/watch?v=crI_raloHgo


> Excel Formula is the most popular programming language by a few orders of magnitude and it can look like this:=INDEX(A1:A4,SMALL(IF...

I'd argue Excel is actually a visual programming language. The formulas might look like messy text programming when written out of context, but in Excel each part is highlighted within a spatial and visual environment and can be constructed by clicking and selecting in that environment.


When you look at a visual model of a program, it should be trivially easy to understand what each visual element does. If that is the case, it becomes a great tool for debugging why your program is wrong, because in a visual model you could spot an element and ask: "Why is this element here?" Or perhaps something as simple as "Why is this element so much bigger than the others?"

If a tool provides just a view of some aspect of the code, it is easy. If you require that the model be editable and synchronize changes from code to visuals and vice versa, I think it is an order-of-magnitude bigger problem, less likely to scale.

Should you replace all the features of your favorite programming language with a tool for building visual models which don't really support the advanced language-features at all?

Consider that there are a myriad of different programming languages each claiming to be better than the others. But if you program by creating visuals first then of course you cannot do everything with it that you can in your favorite best programming language.


It's bizarre that the author re-invents names for common UML diagrams.


Two domain-focused examples:

Blender material nodes: I could imagine certain material nodes being highly annoying to script (or to represent as anything other than nodes), even something relatively simple like this: https://i.imgur.com/kETcJYE.png (nodes), https://i.imgur.com/eddAhcv.png (final render).

Fusion comp nodes: On the other hand, this seems like a lot of blocks for such a simple image: https://i.imgur.com/ftfHngt.png (a cool thing about Fusion is that any node can be copy/pasted to Notepad as text and vice versa).

Note: I'm neither a Blender nor a Fusion expert.


I always thought Excel could do this, and everyone already knows Excel. Toss out the VBA and replace it with Python; every cell is a function; have a regular view plus a relationship view that shows which cells connect to the currently selected cell; have run, stop/pause, and step/break buttons. Everyone would use this.


This is possible today with Google Sheets and JavaScript. The Apps Script integration is kind of amazing and effectively allows for these flows.
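
For example, a plain function in the Apps Script editor (Extensions -> Apps Script) becomes a formula callable from any cell. A minimal sketch - written as TypeScript here; drop the type annotations for the in-browser editor:

    // usable from a cell as =EVEN_SUM(A1:B10); a range arrives as a 2D array
    function EVEN_SUM(range: number[][]): number {
      return range
        .flat()
        .filter((n) => n % 2 === 0)
        .reduce((total, n) => total + n, 0);
    }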


I had no idea. This sounds amazing. Do you know of any companies using this?


I've designed a data analytics/flow processing thingy with what I call a visually assisted editor.

The idea was to let the user write the code (sometimes in IntelliJ's smart-template style, sometimes as plain code) and to show them the code represented as a graph, with some hints regarding type inference, possible scheduling, optimizations, etc. Then the user can run in-browser simulations on small sample datasets, either provided by the backend or typed ad hoc, and the graph will be coloured accordingly. And then the user can do real runs.

The language is Turing-incomplete, so the representations are really sleek. In fact, some of the graph nodes can carry Turing-complete snippets inside, but their contexts are isolated and purity is enforced.

Unfortunately, I don't think it'll ever become foss or even public.


Currently, I am studying with learning cards (like Anki) created by an official publisher. Together with the answer there was sometimes a "reason". What I found interesting, and what is actually related to this topic, is that this "reason" was often just a picture. It helped a lot, because the same picture was often the reason for multiple cards.

What I want to say is that pictures can hold a lot more information than text, and that creating a coherent image can increase the information density. However, pictures can also have even less information density than text.

I think that most visual programming tools fail to increase the information density in a helpful way.


Graphical interfaces for propagation networks seem like a great idea. Propagation networks are computational methods, like formulas, where you can input known values for any variable and the other variables are calculated on the fly, propagating "new" information and error values as they accumulate. Like dataflow or reactive programming, but instead of a DAG (Directed Acyclic Graph) structure for the flow of information, you have a true graph that can back-propagate information.

All the textual code I've seen to make these has been ugly, despite the concept being similar to formulas laid out like circuit diagrams with "cells" for the unknown values.
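
A minimal sketch of the idea (my own toy code, not from any existing system): cells hold at most one value, and a constraint re-fires whenever a cell learns something, so information can flow "backwards" through the formula:

    class Cell {
      value?: number;
      private watchers: Array<() => void> = [];
      watch(w: () => void) { this.watchers.push(w); }
      set(v: number) {
        if (this.value !== undefined) {              // already known:
          if (this.value !== v) throw new Error("contradiction");
          return;
        }
        this.value = v;
        this.watchers.forEach((w) => w());           // propagate the news
      }
    }

    // the constraint a + b = c: any two known values determine the third
    function adder(a: Cell, b: Cell, c: Cell) {
      const fire = () => {
        if (a.value !== undefined && b.value !== undefined) c.set(a.value + b.value);
        else if (c.value !== undefined && a.value !== undefined) b.set(c.value - a.value);
        else if (c.value !== undefined && b.value !== undefined) a.set(c.value - b.value);
      };
      [a, b, c].forEach((cell) => cell.watch(fire));
    }

    const a = new Cell(), b = new Cell(), c = new Cell();
    adder(a, b, c);
    c.set(10); a.set(3);
    console.log(b.value); // 7, computed by back-propagation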


> Transition diagrams for state machines, Swimlane diagrams for request / response protocols.

100%. Especially because these things also form the basis for static analysis; if your state/protocol diagrams let you run something like TLA+, you are doing really well.


I built one of these visual frameworks too (https://www.ondiagram.com/), and I agree completely with the author.

People think it's a good idea because it will save them time, and I get many messages with different questions, use cases, and support requests, but the reality is that you trade convenience for the risk that something will be missing. What most people really want is a boilerplate for their SaaS, some building blocks for their UI, and to not think about deployments.

VCs love this too, and many reached out, especially a couple of years ago. Maybe they know something I don't; maybe it was just a phase.


I'll tell it from my world of React/RN/Node/serverless - many of my colleagues have shared this concern as well: we can't have enough visualization of our codebases/states/data-flows.

There are three benefits: understanding the codebase (even just to plug in a new feature), finding issues more easily, and - I can't emphasize this enough - being able to reach into new levels of complexity where we previously thought twice about maintainability. In summary, we can reduce the expertise threshold for developers to onboard into complex codebases, freeing up developers on the other side to tackle far greater problems.


I once saw a Clojure implementation of red-black trees that had an inline, visual diagram of each tree pattern to match on, and how to mutate the graph (at one of the Strange Loop conferences). Extremely slick, since the code was generated from the diagrams (or the diagrams from the code, I can't recall which).

That's an extremely niche case, certainly not enough to justify including visualizer widgets in an IDE, but here's another: form builders. They've fallen out of favor recently, but UI layouts are extremely visual and lend themselves well to visual programming.

DrRacket is a neat little playground for experimenting with this kind of thing.


My take: in a few years, many "frameworks" for the front end and back end will be LLM-based. Areas where you document your tables in plain language - which tables are OK to be manipulated by what kind of user - and LLMs will make a REST/GQL service exist from all of that.

For the front end, I believe frameworks will rely on LLMs to customize the look and feel. They will also allow you to define "forms" that know how to call the right back-end service, as long as you give them the appropriate doc links.

I think the no-code revolution may get a huge boost now that LLMs exist. It will take 2-3 years before this is commonplace.


There are production-level visual programming systems; Mendix is one of them. I have a friend who works on an industrial system fully built in Mendix with just a small amount of custom Java code - processes, data handling, and UI are all made in it.


I genuinely have no idea how to read the first diagram presented. Maybe I'm not the target audience - I'm significantly NOT a visual learner, but that first diagram is far more confusing to me than code/pseudocode.


That system, Unit, was on the front page here about a week ago: https://news.ycombinator.com/item?id=40900029

It's got a certain aesthetic appeal but I don't find it self-explanatory either.


Thank you for the context!


I started calling this DataViz-driven development, but I am more interested in DataViz of the application state than in code statistics. I am a big debugger fan; however, it falls down when debugging things that are not well represented as assignments. For example, spatial problems are better visualised with maps (rendered as images), not lists of features with numeric coordinates. Here DataViz can bridge the legibility gap between the internal representation and something a programmer can quickly grasp to find bugs.

So in my opinion the best place for going beyond text in programming is by adding custom DataViz to express program state at runtime.


We need to see the code paths that were executed for a certain request/feature/transaction.

I created Call Stacking for this visualization.

https://callstacking.com/


Nice!


The sort of visual programming the author talks about seems like it could benefit from the concepts of presentations and semantic graphical output as seen in projects like CLIM (the Common Lisp Interface Manager).


I really liked SourceTrail when it was a thing, as mentioned in the article. It's surprising that we don't have something like that for every language as a mainstream tool, to explore how bits of code relate to one another in a visual way. There are dependency graphs, e.g. in JetBrains IDEs, but none are as easy to use as SourceTrail.

You know where visual programming really excels, though? In game development and when working on other types of visualizations.

In shader graphs in particular, like in Unity, where you can very quickly iterate on how things work and look. Writing shaders manually is quite the mess, honestly, so I am really glad that alternatives like that exist, with near-immediate preview to boot: https://unity.com/features/shader-graph and https://learn.unity.com/tutorial/introduction-to-shader-grap...

In addition, visualizing state machines is also really nice: https://docs.unity3d.com/Manual/StateMachineBasics.html and https://learn.unity.com/tutorial/finite-state-machines-1

Also, tools like Blender have node based logic, which is lovely too: https://www.youtube.com/watch?v=cQ0qtcSymDI

Some might also mention Blueprints in Unreal, but personally I think that most of the time traditional programming languages would be a better fit there, but something more approachable than C++, for example, how some engines use C# instead, or maybe GDScript in Godot - many just use Blueprints so they don't have to write C++: https://dev.epicgames.com/documentation/en-us/unreal-engine/...


I love the idea of SourceTrail, and there seem to be active forks of it. Do you happen to know which one is good/trustworthy?


This appears to be the only fork that's at least somewhat active: https://github.com/OpenSourceSourceTrail/Sourcetrail

But overall, there is little activity in any of the forks, without anyone necessarily spearheading the effort: https://github.com/CoatiSoftware/Sourcetrail/forks?include=a...


UE Blueprints are pretty nice, and full-on games are made with them. There are some perf downsides, but for the most part it's pretty sweet. Surprised this article didn't even once mention them.


Isn't "visual programming" exactly what a whole lot of 3D artists do every single day?

Whenever I watch people using Blender it sure looks like a whole lot of visual programming to me.


Blender's geometry node system is definitely visual programming.


Visual programming with connections often just becomes literal spaghetti code; we really lean a lot on linguistic abstractions to manage complexity. I played around with lots of ideas, the latest one being direct manipulation of visually represented abstractions. Fun and somewhat promising (works really well for expressing bubble and quick sort, less well for rebalancing a red-black tree after a delete), but I don't see anything panning out before AI writes all the code for us.


I just don't think quicksort is a good fit for visual programming (not that I've ever actually implemented quicksort...).

Visual is excellent for things like "At 7PM if the enable switch is on, turn on the sprinkler".

Stuff that's very simple, but you want no chance of it going wrong, and you might want to edit it from a phone.

When you want the least powerful programming model possible - one that isn't even Turing-complete, that's arguably not even programming, just configuration - it's great.


It really depends on your visual rep. For me, I took symbols related to CFG and RegEx representations, and then focused on direct manipulation of those representations. You can find a YouTube video of it probably, and I know I have a paper somewhere, but it’s been so long.

A conversational interface is already going to work well for simple things, but that isn’t very visual. Without abstraction, encapsulation, and generalization, are you even programming?


"Programming" is just giving a computer instructions. Visual programming is therefore just giving a computer instructions using a visual medium.

Instructions for a computer can be quite simple or quite complex. There are many ways we have to represent these instructions with text, but we also all recognize that most programs are far too complex to represent in text that a person can keep in their brain all at once.

If a person believes they remember how a program works because they read its source code in text form, they're actually mistaken. Their brain is tricking them: it has read lines of code and created abstract representations of groups of those lines, and it's the collection of those abstract representations that forms "the program" in their brain. Nobody thinks about a program as "line 27, operation 3". They think of that one module, with that one function, that has one input and two outputs, that does a certain thingy in the middle of it, etc.

This is true for every single aspect of a computer (hardware or software). The fact that people have imagination is the only reason anyone understands how a program works.

With text programming, you're slowly "crafting" the abstract representation, one line (or concept) at a time. A graphical interface for source code is the inverse: you're creating the abstract representations first and connecting them together, and the computer generates the lines/concepts to make it work.

This article is discussing different ways of representing that process, but the general concept is identical no matter how you use it. You come up with abstract concepts and connect them. The specific ways in which you do that, how you visualize it, what kind of concepts, at what layers, etc, is certainly useful, but not really the point. The point is to be able to get your ideas down as abstracts and let the computer figure out the nitty gritty details.

Why do that instead of with text? Because pictures let you explore many concepts in many ways without needing to keep them all in your head or look for references. The picture keeps everything "in memory" (on screen) so your brain doesn't have to keep all the concepts in your short term memory, and you don't have to think about unnecessary details. It's a brain hack: the picture is a buffer for your imagination.


Here is our attempt at Visual Programming: BuildShip[0]

The top-level layer is a no-code box, but you can dig a level deeper to look at the code. You can edit the workflow with no-code or low-code, and AI can help with codegen. Plus it connects to your database and tools like S3, Postgres, Firebase, Supabase, etc., and ships in one click to a cloud platform like Google Cloud.

[0]: https://buildship.com/


There's one kind of visual programming that I find interesting:

ProtoFlux, found in Resonite: https://www.youtube.com/watch?v=qxXRbHDaMKg

Part of the point of this is making it possible to write code that does something useful while wearing a VR headset, using VR controllers. The downside is that you can tangle it up in all 3 dimensions!


Perhaps some readers are traveled enough to have actually used this site owner's fortuitous product: Yahoo! Stores. You would pick commands from a button nav at the top and then enter values. I came to realize this was probably pretty heavily Lisp-based, but at the time the people I knew were creating these "visit data"-like recursive structures in Java. Good times.


> I have never seen any of these visual programming systems even be mentioned while trying to solve problems.

I'd say that's because drawing diagrams, especially in a non-fugly way, requires concentration on the action of drawing, and is slow, whilst typing requires almost no concentration, nor even looking at the keyboard, so it's much faster, even if less pretty.


I've fallen in absolute love with the visual programming of Rivet, which compiles to JavaScript and can be run with Node. It's absolutely amazing, and it incorporates AI logic directly into it. I highly recommend it.

https://rivet.ironcladapp.com/


For programming and debugging to be really visual and really succeed, it needs to be in three dimensions. Like a shop floor that you can roam around, pull things off the conveyor, inspect and tweak them with your hands, and then put them back where they were and watch them move to the next step in the process.


As so often, it starts with the wrong term. The kind of "visual programming" the author says they want is not programming at all, it's visualizing what has already been programmed. So this whole thing is a straw man based on obvious misinterpretation of what "visual programming" may mean. Unsatisfactory and disappointing.


The distinction you are making is right. All those examples are examples of visualization, not of direct programming.

But why can't we use what today are visual representations of programs as the actual programs? Can't we have the state transition diagram that today is a visualization of the code _be_ the definition of the state transitions? That is the question the post is asking.

In other words, elevate the visualizations that we already use into programming, instead of programming with visualizations we don't use.


The screenshot appears to be from https://unit.tools. I came across it recently on Twitter and loved how it looked. After going through the features list, I was compelled to try it, seeing how much thought and effort went into it. Unfortunately the site's been mostly offline though ;/


I've done my own "try to animate things with a Mermaid-like lang": https://github.com/dot-and-box/dot-and-box

and reached a conclusion similar to the article author's, as I understand it: a general-purpose visual programming language is hard or impossible.


My solution to this problem is to focus on structure and references only. Eliminate inheritance and other conventions. Have functions as first-class citizens, and procedures as first-class citizens too.

A procedure is essentially just a block of code. It takes no input and returns no output. In a lexically scoped language with block scope, procedures provide scope no differently than functions. If procedures can be called from a variable reference, then imagine how clear the code becomes.

Example

    if (something > something_else) procedure_here else other_procedure;
Another example

    let old = {whatever;};
    let young = {young_stuff;};
    if (age > 40) old else young;
The program code suddenly becomes more declarative to read than most declarative languages/conventions. Don't be fooled: this is still highly imperative.
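
In existing languages you can approximate this with zero-argument closures; a rough TypeScript analogue of the example above (the bodies are placeholders):

    const age = 45;
    const old = () => { /* whatever */ };
    const young = () => { /* young_stuff */ };
    (age > 40 ? old : young)(); // call the procedure selected by reference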

When reading code becomes more narrative and less about syntax, it becomes more visual, the same way reading books becomes more visual with practice. For some people, programming is already incredibly visual. It is for me, but not for most people.


Algorithms are graphs, data structures are graphs, networks are graphs, relationships are graphs.

Let’s use text to describe everything!


I feel like flow-based programming platforms like those you'll find in Unity and Unreal Engine are the future of "visual" programming languages and systems. These have adoption, and they greatly simplify some of these very complex systems - particularly where you have massive parallelism.


> I have never seen any of these visual programming systems even be mentioned while trying to solve problems.

Unreal Engine’s Blueprints and Material Graphs are visual programming tools used heavily in indie to AAA games. So that’s one visual programming language used to solve problems all the time.


The "swimlane diagram" (I've not heard that term, before) looks a lot like the classic bus timing diagrams that I've used since the 1980s.

I tend to use the same kind of diagram, whenever I'm illustrating a linear flow (or, more often, a set of linear flows).

One of my most useful tools is OmniGraffle.


Swimlane diagrams are from the 1940s[0]. iGrafx trademarked the term in 1996. They're often used to model processes that span people, roles, or security boundaries [1][2][3].

Sequence-diagram participants[4] and Gantt sections[5] are sometimes used to represent the same.

[0]: https://en.wikipedia.org/wiki/Swimlane

[1]: https://swimlanes.io/

[2]: https://www.lucidchart.com/pages/tutorial/swimlane-diagram

[3]: https://www.drawio.com/blog/swimlane-diagrams

[4]: https://mermaid.js.org/syntax/sequenceDiagram.html#participa...

[5]: https://mermaid.js.org/syntax/gantt.html#syntax


another good one is https://sequencediagram.org/


Thanks to both of you!

These are useful resources.

I have always illustrated my systems and interactions, but have seldom used formal methods. I would use them, if they are required, but usually find that my subsets are more useful.

I did use “Booch Blobs,” back in the day, followed by UML, but always ended up using a tiny subset of the spec.


After using Houdini for some hobby graphics, I've come to the conclusion that its mixture of graphical+textual programming is quite reasonable and might somehow be used in other development environments. But obviously, Houdini is modeling software and the feedback is quite fast.


Visual programming is more or less equivalent to node graphs. Node graphs are good at doing certain things that suck to do in text (e.g. defining a ton of wild connection graphs), but suck at other things, like loops.

That's why I would say: why not both?


Another example of good visualization is the Node Event Loop from this JS Conf talk - https://www.youtube.com/watch?v=8aGhZQkoFbQ


Interesting that neither the original article nor any of the comments so far mention MATLAB's Communications Toolbox, which is probably one of the most widely used visual programming languages.


Merge sort (the first example countering visual programming's value) is a great demonstration of something that doesn't make sense to convey visually. I strongly believe algorithms in general are best conveyed using good ol' textual programming.

However, most devs' work isn't writing merge sort, but gluing together databases, third-party vendors, and internal microservices to perform some business logic. The amount of "low-level" coding done is low, and getting even lower with the emergence of LLMs.

With that in mind, visual programming as a way to orchestrate different components together can shine!

That's why I built Flyde - https://www.flyde.dev. An open-source visual programming language that integrates seamlessly with existing code, and doesn't replace it.


I can see why you might want to explain something visually but not build it visually - that merge sort example IMHO being a great example of the latter.


I hope one day AI can transform UML into a codebase. It would be a big change for devs, because we spend a lot of time writing tech solutions, discussing pros and cons, designing... AI should help us finish the rest.


Where is the example of the "very nice visual programming language" he gave from?


It's no longer there - site is unreachable. There's nothing saved on archive.org either.


It's back.


Programming visually is a very hard topic and I am not qualified to talk about it. However, for me it is extremely weird that programming is still about entering some code / functions from top to bottom in a text file.

Please let me enter code on a 2D canvas where I can put a bunch of functions that are related to each other in one corner, etc. I don't need to scroll through code when I know exactly where I put it on a 2D canvas, because the thing has an actual shape instead of just running top to bottom. Give me different views based on what I am doing. Implementing a feature? Debugging? Developer tooling is abysmal in general, and I think the programming industry is doing an awful job of making any progress, as companies don't seem to properly fund these efforts. It is capitalism, baby. I have to say, I am not really a programmer, and I have mental limitations that would benefit a great deal from better tooling.


Weird… I would have called most of those diagrams “Design” not “Implementation”.

This is how I work - draw a Control Flow Diagram as the design, implement it in actual code, then test the code against the diagram.


That's not visual programming, because you can't influence its behavior by manipulating the diagram. They are useful diagrams generated from code, but it's fundamentally a different thing.


I really want to be able to program with swimlanes for concurrent systems. I think BPMN is close(ish), but I would like it to be typed in a way which can check the types of the code I write in a box.


We'll be back at IDEs that can convert to and from UML in no time :)


Maybe we need 'programmable visuals' instead of 'visual programming'? Why can't I write a simple one hundred line text file and produce a nice architectural diagram?


Have you seen PlantUML?


Unreal Engine and Unity have visual programming, and I think they're great. Complex enough to code very specific features, easy enough to learn when you put enough time into it.


Because flowcharts were never standardized and therefore never used in comp sci classes? And so we've lost the ability to cleanly visualize entire programs?


I've only seen this done well once. Armory with Blender will actually output Haxe code after you create a visual code block.

If I can't tweak the actual code I don't want it.


Are there any tools that can generate any of these sorts of visual aids using the existing code itself (perhaps also utilizing comments or annotations)?


What about Node-RED?

I’ve seen some pretty sophisticated stuff done with that.


We don't need visual programming; we need moldable development like Glamorous Toolkit, but implemented in a language that does not suck.


> But why do people keep coming back to visual programming?

Because real programming languages are free.

That's it. That's the main reason. Sure, there are hobby projects, yes, but almost every visual "programming language" in use in the industry is a proprietary product, being licensed or sold. It's a way to make money, and a pretty smart one to be honest: Once people invest time, resources, and training, and build actual stuff in my proprietary system, they either invest a lot more money to get rid of it, or keep using it. And inertia in businesses being what it is, they usually choose the latter. What better vendor lock-in than a programming language?

-----

IMHO, no, we don't need "visual programming". I have worked with several such systems in professional settings. There is one thing they all have in common: They suck. No exceptions. It doesn't matter what level they abstract at. They are all sold on the simplest of use cases, and to be fair: They can manage them very well. And some even look pretty and are a pleasure to work with.

At the start, that is.

Then the honeymoon is over, and you hit the first snag. The first thing the devs of it didn't anticipate, or anticipated but implemented badly. And you build around that. And then the next thing happens. And then next. And the next.

And very soon, you keep running for the escape hatch constantly, like calling into library code, or making HTTP requests to some server to do things that would have been a breeze in a real programming language. Congratulations, we are right back at "non-visual programming", only now our real code is tangled up in this mess, having to waste precious clock cycles pandering to the idiosyncrasies of a less capable system, for no better reason than to not get rid of said system. And god help you if you have more than one of these things having to talk to each other. Now you can write glue code between two pseudo-programming systems! FUN!

And, of course, these things are usually not accessible to any established tooling: There either is no version control at all, or some vendor's pet-project idea of what a cool (read: bad) git alternative should look like. There usually is no way to unit- or integration-test anything. The only "IDE" that can deal with them is the one the vendor provides. Also, `grep`, what's that? Never heard of it.

"But why do people keep coming back to visual programming?"

Allow me to ask an alternative question: Why do people stick with textual programming, despite decades spent on trying to make visual programming happen?


> The first thing the devs of it didn't anticipate, or anticipated but implemented badly. And you build around that. And then the next thing happens. And the next. And the next.

It's interesting to get a perspective from someone who actually has experience with these things. Do you think there is a middle ground where the flexibility can be kept, like allowing manual code edits, or using the visual part only for larger structural things like functions/classes?

> Why do people stick with textual programming, despite decades spent on trying to make visual programming happen?

One of the reasons is that it just happened to come first as technology progressed, and the tools you mentioned, like search, testing, and version control, were all developed to support text. Just achieving parity there will take a lot. But text has its own problems too: it's hard to understand large code bases, follow all the relations, design patterns, etc. There is a reason we draw diagrams during the development process, and they are pretty hard to map onto actual code.


> like allowing manual code edits

That's just another escape hatch tbh. Don't get me wrong, if the system actually allows me to define what some of them call "code blocks", "logic blocks", or whatever, where I can run actual, arbitrary code, I am the first who's gonna thank the developer, regardless of what it runs: Python, Lua, JS, Ruby, whathaveyou. (Not that it is all sunshine and rainbows with these; they usually have other limitations, like not being able to deal with actual packages, having weird ideas about how external libraries can be installed, or not allowing them at all...)

But while this is a really good escape hatch, it still is just that: an escape hatch making up for a fundamental limitation of the idea behind the whole thing, and that is inflexibility. If I have to write parts of the logic in arbitrary code anyway, then it's usually more work to do that and integrate it with the rest of the flow than to simply write the whole thing as a Python or Go service.

So as for an answer to your question: No, I don't think so. If someone makes it happen somehow, I'll be very interested in how they do it, because I don't see how, other than making exactly the visual proglangs the article talks about, where we put a GUI on an actual proglang, and at that point the question is "what's the point".

> One of the reasons is it just happened to come first with technology progress

Sure, but a long time has passed since visual programming was first tried as a concept, and IMHO, if it were possible to build that tooling, it would have been done by now.

The problem, I think, is that textual code, for all the differences between languages, still shares common properties between all languages, that make it much much easier to develop tooling for it.

For example: `git` works for every kind of plaintext. No exceptions. LSP needs to be implemented for different languages, sure, but it is a common concept, relying on commonalities between languages; e.g. all programming languages have some kind of identifiers, subroutines, types, etc. The same is true for a lot of other tools as well, e.g. patchfiles and diffing.

Visual Proglangs do not share these commonalities, not just with textual code, but with each other. Every one of them brings entirely new and different ways of doing things, and while that is beautiful on the one hand, it makes it really (and I actually think impossibly) hard to develop common tools to deal with them.


I am not sure that "We need visual programming". Just a couple of arguments against: visual programming struggles with scalability, introduces performance overhead, and lacks the flexibility of text-based programming. For me it's a no-no. Maybe it helps beginners, but for me it's just an additional layer of (unnecessary) complexity. The purpose of software engineering is to control complexity, not to create it.


I think the headline is talking about you...:)


No, not like that.


When something needed does not evolve, it helps to understand why, and address that.

Yes, visualizing the expected programming model helps with programming. Tech leads should convey the programming model somehow, and graphics can help. The highest-traffic use is onboarding people, but because that's rare (we hope), there's little investment in optimizing that, and it doesn't happen.

Yes, it would be nice to have live views of the system, so we can see if the actual system is what we expect. That's hard enough that it's only done for high-traffic operational issues (typically after they bite, hard).

But that hints at the real issue.

The question is really about decision support for code/design issues, graphical or otherwise.

And like all decision-support questions with non-trivial domains, what's needed depends primarily on what you're trying to decide, not the domain. So there's no one or even few diagrams that will support all decisions for a given code base or operational behavior. Performance tools show hot spots, not logic errors. However, knowing it's about decisions, you can start to enumerate stakeholders and use-cases, looking for common or critical features as the high-value targets.

Yes, the domain makes for different results, e.g., for react+js vs elixir. (I'd argue that the bulk of the benefit from type/memory-safe languages and devops comes from the issues they foreclose -- the degrees of freedom and risk they remove.)

But if you're trying to track some programming model, you end up needing recognizable/literate code, i.e., metadata for whatever your time slice (design, prototype, analysis, compile, runtime, first-failure data capture, historical trends...). And since the various levels of compilation/assembly often efface the original model, that problem becomes not only system-wide but stack-deep. It sounds intractable in the general case.

In my experience one quick strategy provides most of the benefit: an easy stakeholder-driven interface for experiments.

That means things like a REPL for the language, code navigation, a generated REST API web page, tests, Go's quick performance-test wrapper, a full-text search of the live model, span/tracing displays, etc. Reducing the cost of asking questions is the best thing you can do to support decisions of whatever type.

When people have different perspectives on the proverbial elephant, I wouldn't start by arguing how to draw different models but by adding more people/perspectives. Once the sources stabilize, then you can integrate them, so you'll want to make sure there are common touchpoints in each that make integration possible.


I just hope Epic someday puts a text scripting language in the Unreal Engine.


Visual programming is alive and well in industrial logic controllers.


Great blog. What framework did you use to setup your blog?


I love the return link from the footnote back to its reference.


I just want to be able to embed diagrams in my code files.


https://www.flyde.dev allows you to do that!


> Notice that the original code hasn't changed. The only information transmitted in the answer is the corrected diagram. That is because to the person asking the question, the diagram is a better representation of their mental model. As such, getting a corrected diagram has an effect on their mental model but looking at the code doesn't.

This argument (that he tries to make several times in the article) does not hold.

Almost every time, the diagram is a _lower level_ representation of the program than the code is. And then he says "look! you can't figure this out from the code" (so therefore diagrams are better), but if the code was similarly represented in a lower level, you totally could.

And similarly, if the diagram happens to _not_ contain this extra lower level information, you can't figure it out from the diagram either.

I'm not saying diagrams aren't good, they can be great, it's just the reasons in this article aren't particularly compelling. But maybe I'm missing the point.


[post author] You are right. Any "language", visual or otherwise, used for communication has to include the level of detail trying to be communicated. In the Rust memory layout example, Rust syntax doesn't spell out its memory layout in Rc<T> definitions.

The point though is that the two users of the language _decide_ to communicate in a visual representation! Why is that?

They could spell it out in text, adding that lower level to the text, and yet they don't. That is a sign the users are thinking about it visually and the visual representation maps better to what they hold in their head.


Most of his examples are derived from the code?

They're generally showing consequences of the code, like the layout in memory or the swimlane diagrams. This isn't quite the same thing as code.


This is a great article, thanks for sharing.


Visual programming isn't a programming paradigm - it's just a way of representing code. The underlying paradigm is the important thing. It should be decided upon first, and only then should the representation be chosen.

In my opinion, the only paradigm that visual programming makes sense for is dataflow. Unfortunately, although there were dataflow hardware research projects in the UK, USA, and Japan several decades ago which resulted in prototypes, there are no hardware dataflow machines (MIMD - multiple instruction, multiple data) today. In these, there would be multiple general purpose processors. Whenever a processor receives all its inputs, it executes an instruction, and sends its outputs to other processors. There is no flow of control, and the order of independent operations isn't determined until run time. So programs, at the lowest level, are directed graphs. The advantage of dataflow is that it maximizes concurrent operations.
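To make that firing rule concrete, here is a minimal TypeScript sketch of a software-simulated token machine (all names are hypothetical, and real MIMD hardware would fire ready nodes in parallel rather than draining a queue):

    // A node "fires" the moment all of its inputs have arrived; there is
    // no program counter, so independent nodes may fire in any order.
    type NodeId = string;

    interface DataflowNode {
      arity: number;                      // number of inputs it waits for
      op: (args: number[]) => number;     // the "instruction"
      outputs: NodeId[];                  // where the result is sent
    }

    function run(graph: Map<NodeId, DataflowNode>, tokens: Array<[NodeId, number]>) {
      const inbox = new Map<NodeId, number[]>();
      const queue = [...tokens];          // (target, value) tokens in flight
      while (queue.length > 0) {
        const [target, value] = queue.shift()!;
        const node = graph.get(target);
        if (!node) continue;              // sink: the value leaves the graph
        const args = inbox.get(target) ?? [];
        args.push(value);
        inbox.set(target, args);
        if (args.length === node.arity) { // firing rule: all inputs present
          inbox.set(target, []);
          const result = node.op(args);
          for (const out of node.outputs) queue.push([out, result]);
        }
      }
    }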

The few commercially successful visual dataflow languages (e.g. Prograph, LabVIEW) aren't pure dataflow. The way they handle conditionals and loops breaks the paradigm. In any case, they run on conventional hardware, instead of dataflow hardware, real or simulated. If you design your dataflow language to run on a MIMD dataflow machine, your language would be very different and, in particular, you'll need a way to handle conditionals and loops, which can no longer be control structures, to fit the paradigm.

There are a few languages which stand out as having been particularly well designed for their paradigm - Lisp (particularly the Scheme branch), Prolog, APL, and Smalltalk. All are about as simple as they can be - "It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to take away." This can and should be done for dataflow, where programs are directed graphs. You can either do this as a visual language with data flowing along edges between vertices, or textually. A visual representation seems more natural, and there's only one right way to do it. With text, you have less than satisfactory choices.

You're not choosing a visual representation because it's easier for non-programmers or anything like that. The decision is imposed on you by the paradigm. If you're dead set against visual programming, you effectively rule out dataflow programming.

Are there any advantages in dataflow programming if the underlying hardware doesn't support MIMD? I have found several: it permits a certain amount of liveness; it makes type inference more straightforward, as the types at each end of edges must match, and type checking can now be done in the editor; function and type definitions, as well as data, can be represented and manipulated as labelled directed graphs, and stored as the textual representation of graphs; and functions can be compiled by the functions themselves, by running them without executing the vertices.

Some of you might remember I've been working on my own dataflow language (https://www.fmjlang.co.uk/fmj/tutorials/TOC.html, https://www.fmjlang.co.uk/fmj/interpreter.pdf). At present I'm improving how the type inference works, and making the language fully homoiconic (programs are stored in the same data structure as other directed graph data structures). This involves a major rewrite of large parts of the code base, which will end up smaller and easier to maintain.


Imagine trying to handle a git merge in two dimensions.


My take:

- Visualizing logic: I don't find it useful most of the time.

- Visualizing "metadata": Yes please.

By metadata I mean visualizing inferred types, file/class member outlines, dependencies, lints, etc. Some of these visualizations are added directly to the text editor.


Programmable Logic Controllers (PLCs) that follow the IEC 61131-3 standard [0] utilize 5 different programming languages, three of which are visual/graphical, and two of which are text-based.

The graphical languages are well-suited to industrial programming contexts - and especially object-oriented modalities - as nearly everything that is being programmed is a representation of something in the physical world.

Plants have process lines which have equipment types (classes) with different variations (inheritance) which have sensors which have various configuration parameters and so on. Many of these types of equipment have common functionality or requests made of them (interfaces).

One of the IEC text-based languages - Instruction List (IL) - is deprecated, while the other - Structured Text (ST) - is Pascal-based, great for more complex math and logic functions, and likely runs a significant portion of the infrastructure you interact with on a daily basis. This is especially true if you live outside of North America, which tends to still rely heavily on ladder logic.

The three graphical languages have somewhat different ideal use cases and functionality, though for historical reasons ladder logic is frequently used to do 100% of the work when other languages may perhaps be more appropriate. The same may be said of some 100% structured text implementations.

Ladder logic (LD) was designed to represent banks of physical relay logic hardware. It is read left to right, and if a connection exists between the two rails, the right side is active. As such, it is great for simple controls that rely on boolean logic and simple functional logic. A good ladder program can be seen, understood, and debugged quickly by anyone who has any knowledge of the underlying physical system, which makes it very popular for basic on/off systems such as motors or pumps.

Function Block Diagrams (FBD) are exactly what the name implies, and like LD are read and processed left-to-right. I like them for certain kinds of math processing such as linear interpolations.

Sequential Function Chart (SFC) is, like FBD, exactly what it sounds like. These are ideal for describing state machines with a defined start and end point, and discrete steps in between. They are not expressly limited to a single active state at any one time, and can have multiple parallel active branches. Codesys-based systems extend on the base IEC specification by expanding upon the way that actions and transitions can be defined and worked with.

Codesys-based systems also have Continuous Function Chart (CFC) and Unified Modeling Language (UML) languages. CFC may be thought of as an extension of FBD and/or SFC, and is useful for creating higher-level equipment coordination diagrams and designs. UML has class and state diagrams, and I've used it not at all, so I won't comment on it further.

[0] https://en.wikipedia.org/wiki/IEC_61131-3


One thing to consider for visual programming is whether you are describing data flow or control flow.

Most of the examples where visual programming works well are for data flow: shader/material graphs in game engines, compositing and other node-based editors in various video, 3D, and VFX software. Other examples where visual programming is used are audio software and industrial automation logic - similar, although there it describes more of a continuously evaluated process that all happens at the same time, with some parts potentially being evaluated at different rates, instead of simply evaluating the whole thing in a loop.

All that is more similar to functional programming. Describing control flow implies imperative programming. You have to ask why imperative programming languages are more popular than functional ones. I don't have a good answer to this part. The same reason might also be at fault for why visual programming isn't more popular, assuming visual programming is better suited to functional programming.

But why is visual programming bad for describing control flow? It's not like we don't have a standard way of doing that: flow-chart-style diagrams are one of the basic tools used when teaching programming. I guess one of the factors might be goto vs structured programming. ~50 years ago programmers realized that arbitrary jumps across the code make a mess, and that it is beneficial to restrict the control flow to a few specific patterns. Flow-chart-style visual programming is more or less goto spaghetti. A disciplined programmer could restrict themselves to the patterns that match structured programming, but the same could be said about using goto in text-based programming languages. Reality shows that relying on individuals being always disciplined (or knowing the best practices) doesn't go very well unless they are forced. It's more effective to have first-class support for the preferred structures, and remove or strongly discourage the unstructured tools (even though in theory they are more powerful).

With that said, structured visual programming isn't impossible. Scratch is somewhat that, but it also shows why at that point it becomes somewhat pointless: you more or less get your usual imperative text program, but with additional blocks drawn around it. That's closer to structured code editing (a completely separate concept from structured programming) of a text-based language than to visual programming. There is still a bit of hope: all the examples of the DRAKON language I have seen looked mostly structured. I am not sure how it's enforced. It does have first-class loop structures, but supposedly branching-based loops like in flowcharts are also possible.

Most recommendations for best practices will encourage programmers to limit the control flow complexity within a function, as having too many nested loops and branches will still make the code hard to understand. This makes me wonder about cause and effect. Do we limit the control flow complexity because text-based languages are bad at expressing it, or do we use text-based languages as a tool to restrict complexity (which is always bad)? Or is it just a coincidence that we can use text-based languages, because the amount of local complexity we can comfortably work with fits the limits of text-based languages?

One more thought is that text-based programming isn't exactly purely text-based. We don't perceive the code as a stream of characters, like the computer does. Remove all whitespace, including newlines, and it becomes nearly impossible to work with code beyond a few expressions. Elements like line indentation and syntax highlighting rely on human visual pattern recognition. So maybe there is a way for more visual manipulation even in languages that strongly resemble our current text-based programming languages.


I am a tool creator (serious-tool count of one) who created a tool out of genuine personal need, in reference to the note linked from "serious contexts of use".

In my case it is in context to the state machine portion of the blog.

The link to Leslie Lamport and TLA+ is informative and the initial point of my tool making was to end up with an executable specification of a state machine that could be translated to TLA+ to allow validation by formal methods.

For functional safety (IEC 61508 and subsidiary standards) this is the Holy Grail for a significant number of applications. Think things like Rail Signaling, Burner Management Systems, Aircraft Landing Gear Hydraulic Sequencing, complex machinery automation and interlocking, to name a very few examples. It can also extend to Cyber Security and transactional systems like banking etc.

In my tool I haven't quite got to automatic translation to TLA+ yet, and maybe I personally won't, because as discussed in the blog and some of its links, the interface itself prompts a certain level of thinking that is highly beneficial. So I have done everything but the automatic translation to TLA+, but I can see no reason it is not possible; I already automatically translate the specification into controller code.

But the key point arising from the definition of state based behavior, which is what TLA+ is all about, is that implicit state is the enemy. And if all states and transitions are not defined explicitly, then the door is opened to implicit state.

The often-quoted example is the Apple FaceTime bug where you could follow a specific, unintended sequence to make a call and then listen to a callee's microphone before they answered.

https://medium.com/@DavidKPiano/the-facetime-bug-and-the-dan...

Also interesting is this one, where a bug involving implicit state was fixed by changing two characters, after some effort.

https://blog.scottlogic.com/2020/12/08/finite-state-machines...

For finite state machines the problem is the State Chart (and/or UML) - neither forces an exhaustive and unambiguous examination of every transition for every state.

In order to do this, you need a State Table, and ideally a dereferencing of the inputs by grouping them into Transition Input Conditions from raw inputs.

The State Table looks like columns headed by all possible "From" states, with the "To" state underneath. Where more than one "To" state is possible, a new column is created with the same "From" state.

The logic is "scanned" or "polled" at regular intervals; it generally cannot be event-driven. This ensures deterministic outcomes are easily known, because the precedence for a state that might solve two different transitions at the same time is established left to right on the table as it is scanned (it could be right to left, but it needs to be defined and repeatable).

Try being totally deterministic in your specification with a chart or UML: you end up having to assign order-of-execution numbers to each state bubble as a minimum, and it is hard to make the exact behavior that will emerge in edge cases intuitively clear.

But, straight away, any impossible state transitions are explicitly excluded, and all possible transitions are easily read off the first two rows.

The state columns are crossed with rows of transition conditions, which are logic conditions derived from raw inputs. Every square for a transition possibility for any state must have an entry, even if just an X to indicate not allowed or don't care.

Not only that, every possible transition has a unique square (with grid reference if you want) which you can attach a design or user comment to for explaining the reasoning for the selected transition, or for generating an alarm for that transition condition. So even a square with an X might get a comment to explain why that transition was elected not to be valid.

Outputs are driven as a feature of state, as a Moore machine is more intuitive, even though you likely end up with more states. The state explosion is often vastly improved with hierarchical state machines; the first thing to do is split out modes and states, which often gives a great improvement.

You then have the basis for an exhaustive and unambiguous definition of a state machine with no implicit state. It can be an executable specification; the tool I have created allows single-stepping through the "scans" of the logic, and generates things like state trace logs etc. for debugging while designing.
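For readers who want the flavor of such a scan, here is a minimal TypeScript sketch (hypothetical names and example states, not the actual tool):

    // One deterministic "scan": columns are (from, to) pairs in
    // left-to-right precedence order, each gated by a transition input condition.
    interface Column {
      from: string;
      to: string;
      condition: (inputs: Record<string, boolean>) => boolean;
    }

    function scan(table: Column[], state: string, inputs: Record<string, boolean>): string {
      for (const col of table) {     // left to right: the first match wins
        if (col.from === state && col.condition(inputs)) return col.to;
      }
      return state;                  // no transition solved: hold the current state
    }

    // Polled at a fixed interval rather than event-driven, so two conditions
    // becoming true in the same scan always resolve the same way.
    const table: Column[] = [
      { from: "Idle",    to: "Running", condition: i => i.start && !i.fault },
      { from: "Running", to: "Tripped", condition: i => i.fault },
      { from: "Running", to: "Idle",    condition: i => i.stop },
    ];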

If you want to specify known state-based behavior with no implicit state, this is by far the best available practice, only improved by a further examination by formal methods (TLA+ or similar) of the resulting specification. UML won't get you here, and no methodology of state charts I have seen or can imagine can get to the same place in a comprehensible way.

But, I find the process that the designer is forced to use to create the state machine/s in tabular form like this yields immense improvements over traditional specification methods, which usually focus almost exclusively on expected behavior only and are in the form of the "crappy narrative".

Because the target is industrial safety and control systems, I put an OPC server in so the logic can be simulated and hooked up to an HMI to test user interaction, if desired.

Final point: the tabular form then also leads to an extremely compact and succinct solving algorithm that reduces the logic required to a handful of bitwise logic operations. Given the maxim that errors are proportional to LOC, this means the chances of errors can be substantially reduced, by orders of magnitude, over any other state machine implementation I have seen - something highly desirable for a functional safety implementation.

I have spent years on this and on development of the tool, for my own use at this stage. But I am very confident there is no better practice (yet I still wait for the world to beat a path to my door), and it starts with the correct interface to describe the specification, and flows naturally from there. There are hints of similar approaches, some around for a long time, but nothing I have seen that condenses all the dimensions into one 2D table like this and allows it to be executable in its own format.

It is a little involved to explain, often needing a couple of hours of focused attention. But once people see it and get it, they usually don't want to work any other way.

But I have found a significant portion of my colleagues, who are experienced engineers, just aren't able to grasp the full extent of what it means, why and how to use it, or that it can be used to describe fully arbitrary behaviors. The first clue is when they do not believe that such a simple basis can describe all wanted behaviors, and/or they ask why they have to use the table and not a chart or UML. UML is just not going to get you there; this way will.

There is a lot more, but I would be surprised if many people have read this far.

But I take my chance to explain the surprising benefits of a particular UI/interface in solving a particular problem and how it flows on. If anyone is genuinely interested feel free to DM me, as I said I have a tool I made in PyQt to enable this workflow. It has been used on design of some large industrial safety systems, some projects with overall value in excess of $1B, but lends itself to any state based behavior.


All in all, this is a good post. I look forward to sharing it with coworkers tomorrow with regards to documentation improvements.

> But nobody ever made a flow chart to read

    for (i in 0..10) if even?(i) print(i).
I have. I do it last, because I don't know the tool that well and it's never asked for in the Jira ticket, but I do it if I have time. I 100% believe workflow diagrams should show this and should be automatically generated and attached to, e.g., Doxygen comments/markdown in the IDE and shown in Doxygen (or ReadTheDocs) sites.

> Developers familiar with code already like and understand textual representations to read and write business logic

Yes, but workflow diagrams help non-programmers even more. Product teams, hardware teams, and training/reference manuals can all be better served by all three (workflow diagrams, autogenerated comments, code snippets).

> let me ask you: do you know exactly how your data is laid out in memory? Bad memory layouts are one of the biggest contributors to poor performance. Yet it is very hard to "see" how a given piece of data is laid out and contrast it with the access patterns present in the codebase.

Yes. I would love to see a tool that builds on a C++ language server to show memory layouts of each concrete class at the instantiation of `new` or anything which calls it such as `make_unique` or `make_shared` or similar static functions or etc. Show me call trees where allocations occur, especially ephemeral allocations!

I would love to see how many of a given object might fit on typical cache lines or page sizes, to optimize reserve sizes of containers, especially memory pools. That can be done in code with sizeof() fairly easily, but it would be cool to have it shown graphically in the IDE.

> do you know all the external dependencies your code hits when responding to a given HTTP request?

No, but I use open source software stack up to and including the kernel, and can read the whole stack if I find any odd behavior.

> Are you sure?

Yup. The most recent complex issue was related to Address Sanitizer memory layout and kernel ASLR changes. The complex issue prior to that was the gRPC or protobuf library crashing before main() starts. That was never fixed; it was worked around multiple times. Good luck to anyone using protobuf in C++ for anything more complicated than the examples! protobuf and gRPC code is a f00kkin nightmare of bad practices!

> Didn't you notice that Bob just added a call to a rate limiter service in the middleware? Don't worry, you'll learn about it in the next outage.

Don't blame the outage on Bob, he did pretty good work. I even approved the unit tests for it. It's my fault I didn't catch that the rate limiter's O(n) cost quickly ramps up with connection count per instance; I forgot to ask for an automated benchmark test for it.


Far more important is instant feedback, and that's getting worse all the time: with Lisp, Smalltalk, Delphi, and Forth, things were instant. With TypeScript or Rust etc., if the project is larger than hello world, the feedback is, well, far from instant. Bret Victor talked about feedback, not necessarily visual programming, and for some reason we are making things worse instead of better. When I show a Common Lisp dev session to young people, they think I am somehow cheating. I am, because I am not using the garbage people produce now, and we are always faster (SBCL is incredible; sure, you can get lower latency and more performance with Rust or C, but then you don't have the debugger and the feedback, while it takes a lot more code, aka bugs and work), have fewer bugs, and are not depressed while at work. We also don't have to hire 'devops' to waste our profits; I like profits and I like not needing VCs.


With strong, expressive type systems such as those offered by Haskell, Rust, TypeScript, etc... I find that you front-load all of your debugging to compile/typecheck time. Instead of needing to experiment with your code at runtime through either manual or automated (TDD) tests, you are instead having a conversation with the compiler/typechecker to statically guarantee its correctness. There's just as tight a feedback loop with a typechecker as there is with a test runner; you in fact get your feedback even sooner during compile time, instead of during the subsequent runtime.

Where static verification against a typechecker shines over runtime verification against a test suite is that a test suite can only demonstrate the presence of bugs; type systems demonstrate their absence (presuming of course that you understand how to encode certain invariants into your types and don't do some foolishness such as simply asserting all values as `any`).
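As a minimal TypeScript sketch of encoding such an invariant (the `NonEmpty` type here is illustrative, not from any particular library):

    // "This list is never empty" as a compile-time property: the empty-list
    // bug class is rejected by the checker instead of being found by a test.
    type NonEmpty<T> = [T, ...T[]];

    function head<T>(xs: NonEmpty<T>): T {
      return xs[0];               // safe: the type guarantees a first element
    }

    head([1, 2, 3]);              // OK
    // head([]);                  // compile error: [] is not a NonEmpty<number>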


While I prefer expressive type systems by a long shot, I would be much more careful about it "guaranteeing correctness".

Types can act as good documentation and as a safeguard for stupid mistakes. But the worst bugs are due to logic mistakes, wrong assumptions or non-foreseen corner cases. Here, either types do not help, or designing the type system is so difficult it is not worth the effort, and makes many future changes more difficult.

In my previous company we used Scala (with and without Spark) for everything, and this setup pretty much allows you both extremes. There was always a middle ground to be found, where types were expressive enough to be useful, but not so much that they got in the way.


> While I prefer expressive type systems by a long shot, I would be much more careful about it "guaranteeing correctness".

Yeah, you're not guaranteeing correctness. There's a quote from automated testing discussions that applies here...

> You're not proving the system works. You're proving the system isn't broken in specific ways

Likewise, for a type system, it's guaranteeing the system is correct for the specific subset of "ways it can be incorrect" that the type system covers.
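A tiny TypeScript illustration of that subset point (hypothetical function):

    // Type-checks perfectly, yet is wrong in a way this kind of type
    // system cannot see: it claims to sort but never sorts.
    function sorted(xs: number[]): number[] {
      return xs.slice();          // correct type, broken logic
    }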


Just encode your business logic in types first! Coq, Idris, or F* will certainly get the job done for you!

/s


Yes, you do if you want to make money on decades timescales instead of some grifter vc 2 year thingy.


Unfortunately you end up selling your stuff to people building missiles and bombs that way—witness CompCert and Frama-C.


Not us, but yes, you have a point.


It guarantees the particular kinds of correctness it is having conversations with you about - and that way it is more correct.


Type systems in languages like Haskell or Rust are very very very far from being able to "guarantee correctness". They can only realistically be used to specify extremely basic properties of your program ("doesn't have side effects", "doesn't write memory concurrently", this sort of thing).

For any more interesting properties (say "this function returns a sorted version of the input list", or "this function finds the smallest element in the set", or "this transaction is atomic"), you need something like dependent types, and that comes with a hell of a lot more work.


I would like to see improvements in the speed of feedback - particularly from language servers - but the value of those 'basic' guarantees is more than worth the current cost. Unexpected side effects are responsible for almost every trip I've taken with a debugger in any large Java or C++ project I've ever worked on.


I can remember about 20 years ago a colleague getting quite frustrated that a bug he had been looking at for quite a long time came down to someone doing something bizarre in an overloaded assignment operator in C++.


I've seen methods with names like "get_value()" have extensive side effects.

No type system can fix bad programming.


Of course I think we have all seen horrors like that - what I remember was his completely exasperated response not the technical details of the bug.


Complexity is mostly exponentially worse in the unknowns, and you cannot graph what you already know.

The point in the article is that when we read code we need another visualization to change our mental model. I can scan code and find most bugs fast, but when you are stuck, a complexity-by-row/column view sure would be handy to find overloaded assignments.


You're missing the most basic utility they provide... that of making sure other code is calling the function with the right types of arguments. That's a lot of coverage over a language without a compile-time-checked type system.


That's not a utility in itself; it depends on what the types represent whether this is a useful property or not. For example, a C function declared as "void foo(int a)" does ensure that it's called with an int, but if its body then does "100/a", calling it as foo(0) is allowed by the compiler but will fail at runtime. It's true that the equivalent Python function (def foo(a)) can fail at runtime when called as foo(0), but also as foo("ABC"); it's a matter of degree, not kind.


Fair.

However, most people are using stuff like JS and Python. For them even the non-dependent type systems are an improvement.


I agree that one should refrain from ever using "guarantee correctness" in the context of type systems outside of Coq & co. But "extremely basic properties" is IMO similarly exaggerating in the other direction.

Take the "basic" property "cannot be null" for example - Considering the issues and costs the lack of that one incurred over the decades, I'd call that one damn interesting.

And Rust? C'mon, its affine type system is its biggest raison d'etre.


I consider neither TDD nor tests to be about finding or solving bugs. They are about regression and refactoring safety. They are my guardrails for when I must change or add things, or need to discover how something works.

The rest of your comment I found to be a really good point in terms of feedback justification: the IDE checking your code before compile time or runtime is faster than both. Good point.


Tests for me also help me write better code. When writing tests, I'm forced to switch from "how do I implement the behavior I want" to "how can this fail to do the right thing". Looking at the code from _both_ of those mindsets helps me end up with better code.


Where I see this fall down is when you aren't able to learn from the partial code along the way. The sooner you get an end-to-end setup running where input to the system causes a change to the output from it, the better off you are for this sort of feedback. Note: not the soonest you get code producing output, but the soonest you get from users giving input to users getting output.

If you are able to internalize everything, you are constantly simulating expectations in your head on what you are coding. Seeing where your expectations fall down on outputs is a valuable thing.

So, yes. If you fully understand everything already, the "paying it upfront" cost of exhaustive types is good. Amazing, even. Until you get there, you are almost certainly pushing off the feedback of learning where you do not fully understand the system.


I feel like you can partly get around this by slowly increasing type specificity over time. With strong type checking, the risk of refactoring is low.
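A minimal TypeScript sketch of that ratchet (the event shapes are made up):

    // Ship loose, tighten as the shape stabilizes; the checker then
    // carries you through each refactor.
    // v1: function handle(event: any) { ... }
    // v2: function handle(event: { kind: string }) { ... }
    // v3: a full discriminated union the checker can reason about:
    type UiEvent =
      | { kind: "click"; x: number; y: number }
      | { kind: "key"; code: string };

    function handle(event: UiEvent) {
      if (event.kind === "click") return `${event.x},${event.y}`;
      return event.code;          // the checker knows this branch is "key"
    }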


Certainly, but that goes a bit against the idea of incredibly strong types that people often visualize in their mind.

Irony being what it is, most strongly typed programs that I have been introduced to were incredibly tight knots that were not easy to refactor. Many of the restrictions in the types were far stronger than what the program needed, and the refactors grow in difficult-to-explain ways.

This is all to say, the discourse here is fraught with nobody acknowledging that "well done" programs of near any paradigm/style are well done and work. Evidence is often used that languages that allow looser ideas are more numerous than those that don't. This ignores that lack of existing programs in the strongly typed world could also be lack of ability for many people to deliver using those practices at all. Which, in turn, ignores that that may be a trade off that is worthwhile in some industries. (I suspect it goes on.)


CL is a strongly and often statically typed language; there are more expressive, even Haskell-like, implementations (https://coalton-lang.github.io/20211010-introducing-coalton/) with instant feedback, and robust ones (old; it's underestimated how robust things get when some kid didn't roll your npm dependency yesterday). And yep, an expressive type system, like TS's, is often Turing complete so it can hang, but that's not what I am talking about; trivial TS is incredibly slow even with type checking off, and non-trivial TS is a joke. Why so many fans, while no one can show even one example that is not slow?


Eventual or gradual typing could leave everyone happy.

On the premise of the article, maybe the key to representing a program visually is a very expressive (and strong) type system. There could be a way to derive some Visual Types from good old regular types, and diagram the visual types in any level of granularity one desires.


Instead, gradual typing seems to always make everybody as unhappy as they can get.

Just like visual programming, it looks like we are doing gradual typing very wrongly.


If TypeScript counts as gradual typing, which I think it does, then many seem to be very happy using it. I have skimmed more than 1000 blog posts via HN and didn't see any about disliking TypeScript, and many people use it.

Doesn't Typescript offer seamless interoperability with vanilla Javascript?


I am very much for eventual static typing, and even proofs for some parts; CL is pretty good, and we have a gradual type system in our company for CL. But we found it's just faster and easier to build it first and add types later (our company is almost 40 years old now).


If you use ocaml you get near instant compile times and types, which is excellent for quick feedback.


As projects get bigger, things might get sadder. I worked at a certain large SF company that uses a lot of ruby, so most development was repl-based too. But this wasn't a boon but a curse, as the total lack of data format guarantees on the very large, critical monorepo meant a lot of uncertainty. What does this method really do? I guess we have to run it! It worked for this specific input... but will it work for any and all inputs that get to this data path? Let's hope so, because we aren't sure! The company spent massive amounts of money on servers for parallel testing, just so that the suite could run in less than a few weeks. And when you need a large test suite to have a chance, most of the advantages of the REPL vs a compiler have been lost.

Eventually they did the same thing you can do in common lisp: Add so much metaprogramming that doublechecks invariants, it might as well be a compiled language.


Common Lisp has always allowed you to specify types anywhere you want. It's not a statically typed language but it supports types (and SBCL and other impls do static checking where possible).


So, paraphrasing:

Any sufficiently complicated Ruby program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Java


I think that OOP done right can address this problem through encapsulation of the logic.

One can make the case that a function can do the same. I agree, but a class is just a container for a set of related functions and state.

The problem with OOP seems to be that as a whole, devs are not that great at encapsulation and isolating domain logic.


The number of codebases I've worked on where developers automatically added getters and setters for all class members is too damn high.


I guess the upside is that you end up with a "compiler" that checks the invariants of your project and that domain rather than strict type safety. I love static typing, but I did try Elixir w/ Phoenix recently and was impressed that they were able to implement compile time checks like verifying that all redirects are valid routes, etc. Depending on what you're building, having a few small domain specific checks like that could be more valuable than strict type safety.


I don't know what TypeScript projects you have worked on, but every one I have worked on is instant reload all the way down. Rust, on the other hand, is pretty miserable.


I don’t know; I am a programmer but more of a troubleshooter (it pays far more), and projects passing 100k LoC in TS, of which I see hundreds a year, are not instant in any way. I would love to see one, but I contribute to open source projects, and it’s all slow, very very slow.


I have a 1.5M LOC game codebase where both the server and client build and start nearly instantly. Probably < 3 sec total iteration time, and that's just because of typing stuff in the terminal. The build system is just a casual ccache + mold. But you have to do a lot of stuff in the background during startup.


Let’s see it. To not be a total gobshite: go download these things and see what ‘instant’ really doesn’t mean. Instant means not waiting 1 second; it means that when I type code, it has results in milliseconds. None of these have that.

Or show me a non-trivial open source TS project that’s instant; it doesn’t exist, and I have literally no clue why people keep defending this stuff. You didn’t make it, right? I would be embarrassed, but definitely not defending it.

https://www.kirandev.com/open-source-nextjs-projects-open-fo...


While I totally agree with you, I'm looking for a counter-example. The only one that comes to my mind which is non-trivial and quite fast for JS/TS is this: https://github.com/tsoding/raycasting - an implementation of raycasting on a 2D canvas. The dev tried to implement hot reloading, and you can see the whole process here: https://www.youtube.com/playlist?list=PLpM-Dvs8t0VZ08cYW6yqN...

Even though I didn't measure the loading time, an effort has been made to accelerate everything that could be accelerated.


Even the simplest type inference can cause TypeScript to stall for minutes. The problem is in the inference; it goes away when you actually specify the types. Tooling should add typing automatically (not implicitly).


Minutes? There’s no way you’ve seen a minutes-long stall caused by TypeScript inference. I need to see some evidence on this one.


I'll look into it... but yeah, I've had that multiple times (older versions though). Specifying the type solves it, and I was not the only one, after some googling.


I'm working on a 250k LoC TS project. It's instant during dev. We don't bundle during dev, so the server just has to compile whichever files changed.


Well, I would love to learn how it’s possible: you have a blog or YouTube or something?


Our front end is ~200k LOC of TypeScript and all changes are instant (<1s).

TypeScript compiler is too slow for hot module replacement so it’s used only for IDEs. During development, all transformation happens via esbuild/swc and is abstracted away with Vite https://vitejs.dev/
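A minimal sketch of that split, assuming a standard Vite setup (port and defaults here are placeholders):

    // vite.config.ts - the dev server serves modules transformed by esbuild,
    // with no type checking in the hot path; tsc --noEmit runs in CI/IDE instead.
    import { defineConfig } from "vite";

    export default defineConfig({
      server: { port: 3000 },   // placeholder; TS type stripping is the default
    });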


esbuild does not do type checking. You must invoke tsc explicitly to do that.


Type-checking is helpful in your IDE (for developer hints) and in your CI (for verification), but you don't want type-checking in your hot-reloading dev loop.


I pointed that out because your previous comment could be misinterpreted to mean you do full type checking on your dev cycle, which you probably don't.


On what hardware? I have an M3, and yeah, it's terrible with TS. Instant (milliseconds) with CL (or even elisp). Go is not terrible.


Same, M3. The DX within a modern frontend stack is indistinguishable from Bret Victor’s ideas (even if a decade late).


Ok, when can we meet? I have never seen it work, and, as said, I review 100s of projects a year; everything TS is super slow so far. Maybe you have something.


Their hot reload cycle is fast because esbuild doesn't type check the code, it just removes types from Typescript so it turns into JS (it may do minification and tree shaking but in dev, they probably remove that). I've written some esbuild plugins and can confirm that on incremental builds, esbuild will probably never take more than a few ms even on larger projects (because it doesn't matter how big your project is, esbuild will only rebuild what changed, which usually is a few files only).
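For flavor, a minimal sketch using esbuild's context API (assuming esbuild >= 0.17; entry and output paths are placeholders):

    // build.mjs-style sketch: rebuilds touch only changed files and do no
    // type checking, which is why saves land in milliseconds.
    import * as esbuild from "esbuild";

    const ctx = await esbuild.context({
      entryPoints: ["src/main.ts"],   // placeholder entry
      bundle: true,
      outfile: "dist/main.js",        // placeholder output
    });

    await ctx.watch();                // incremental rebuild on every save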


No one wants to show me though. Is that not weird? Fanbois say it is, but not one person even sends an open source project that demonstrates it. I don't understand that. Please show me a non-trivial project on GitHub that does this in ms like you say. All I try are slow af. Emacs is notoriously slow as a Lisp; it is always faster, for me, than anything TS, and it's definitely not trivial. SBCL/CL blows it all away. Please, an example in TS, or it didn't happen.

Download some horror show like Unkey and show a video of its millisecond hot reload…


You are asking for someone else to do a demo of something trivial that takes 15 minutes to setup yourself (with Vite). Nobody is biting because it’s a strange (and, frankly, lazy) request.

If you tried and still think such a setup is not possible, send me an email (in profile) and I can do a 10-15 minute show&tell.


Typescript is pretty fast no? I've worked on the VSCode codebase a fair bit and the intellisense seems pretty fast. Like under a second, which is fine.

You don't need to resort to Lisp to get instant feedback. Try Dart - it's basically instant.

Rust I will give you...


Agreed, I just prefer lisp. We use flutter a lot, but we do it in cl -> dart; it makes everyone here (again taste, not gospel) happier.


What do you use to compile CL to Dart? Or you compile to JS?


As a fellow ancient person my only consolation is that the browser-based development experience most people now learn on is (with some path-dependent ugliness) basically the experience Smalltalk promised us in the 1980s and never quite delivered.


I'm confused? Smalltalk promised image based development where you could change things on the fly. That is far from what I typically see in browser based development.


I think that's also true, but not the thing they're writing about.

My experience as an iOS developer has been mixed between places that use Interface Builder and those who create the UI in code. Something like Interface Builder is obviously a great idea for UI creation, which is why it survived so long and why Figma exists, but the actual tool itself (IB) isn't really useful for designers who want one thing that works on all platforms. (Complaints like "xib and storyboard are hard to use with version control" miss the point; a UI designer shouldn't be seeing git-style text diffs anyway, they need a graphical diff).

Interface Builder is at least 5x faster than making the UI in code. At one place in particular, I was making a native app by myself while a whole team was making a web app, and I wasn't simply keeping up with them - I also caught up the head start they had built while I'd been assigned to a different project. At the next place, I joined a team whose app was a coded UI, and development was correspondingly slow. (Though how much of this is "teams are slower than solo developers" vs. my hypothesis is unclear to me).

My first coding job was before iOS existed; I was an industrial placement student* in an academic research lab, and for that, my guess would be the best option might have been a mathematical formula WYSIWYG editor that output both LaTeX and IDL.

* does this term get used outside the UK? What's the overlap between this and intern?


Haven't worked with the iOS interface builder, but I worked with Delphi ages ago.

Question: how well does the interface builder thingy mix with interface in code?

Can you easily lay out the basics of an UI graphically and then add code where the builder is too limited for what you need?

Or it's the kind where if you start with the graphical builder you're stuck with just what the graphical builder can do?


It mixes well.

Apple has two UI frameworks, UIKit and SwiftUI, and Xcode's Interface Builder handles each differently. The Interface Builder for each is built into Xcode, but the UI they present when editing is quite different — I'm unclear if they're both officially called "Interface Builder" or if people like me stuck with the same name for the new one because it's a thing for building interfaces.

The older system is UIKit, where Interface Builder produces some XML files — .xib or .storyboard — and once those are loaded, they result in objects which are fully manipulable in code.

The newer system, SwiftUI, the source code is the source-of-truth for a WYSIWYG editor — any change made in the editor immediately updates the code, any change in code immediately updates the editor. That said, in my experience at least, this editor falls over quite often if you do that.


"Mixes well" is an understatement. I couldn't imagine writing anything more than a simple toy app without it. I've experimented with building an iOS UI purely in code, and while it is possible, it is a painful, masochistic, and slow way to develop.


The idea is to do everything trivial via the interface builder thing of the platform but not get limited by it when you want to customize the last 10%

> any change made in the editor immediately updates the code, any change in code immediately updates the editor.

Delphi memories :)

> That said, in my experience at least, this editor falls over quite often if you do that.

Not-Delphi memories.


A quick overview of Interface Builder is perhaps Steve Jobs' demo for NeXT:

https://www.youtube.com/watch?v=dl0CbKYUFTY

where they discuss how dragging/drawing allows one to make 80% of the app, and the balance of 20% is one's own code.


> What we found a long time ago was, the line of code that a developer can write the fastest, can maintain the cheapest, that never breaks for the user, is a line of code the developer never had to write.

> The goal here is to literally eliminate 80% of the code that every developer has to write for their app - because it's in common with every other app. And let them focus on just the 20% of their code that's unique and value-add to their app. That's what this is all about.

---

A video of historical interest - OpenStep's Interface Builder in 1997, the year Steve Jobs returned to Apple.

It shows how forward-thinking NeXT was. Many of its innovative user interface concepts are relevant and in use today in different guises and interpretations.

> Every single app on NeXTSTEP was built using Interface Builder. It is the frosting on top of this object-oriented cake that we have called NeXTSTEP.

In the demo, Steve seems to use the word "object" with a depth of meaning closer to what Alan Kay explained, like independent (or rather interdependent) "cells" of software that communicate with each other via messages.

> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.

On the Meaning of “Object-Oriented Programming” - http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay...

It's interesting that the Interface Builder was considered a "frosting" or visible representation of the underlying objects, which the users more or less directly created, edited, and connected together - without writing a line of code.

That ideal of user experience still hasn't been fully achieved to satisfaction, it feels. The ease and naturalness of creating software visually, as well as with other modalities, senses, medium of expression beyond text.


I'd give a lot to have a graphical development environment which:

- allowed drawing a user interface as naturally as I used to use Altsys Virtuoso (or Macromedia Freehand which I moved to when my Cube stopped working)

- allowed programming the UI as naturally as HyperCard (and to a lesser extent Lisp) "clicked" for me

- was as visual as Google's Blockly (which as BlockSCAD: https://www.blockscad3d.com/editor/ I've used a fair bit)

- exposed variables in a mechanism like to OpenSCAD's Customizer: https://en.wikibooks.org/wiki/OpenSCAD_User_Manual/Customize...

Currently plugging away with OpenSCAD Graph Editor: https://github.com/derkork/openscad-graph-editor but hoping that: http://nodezator.com/ will become a viable option (still a bit bummed that I rolled and crashed w/ https://ryven.org/ though in retrospect, maybe I should try to tie that latter in to: https://pythonscad.org/ )


If you haven't seen it, the Interface Builder got its inspiration from this Interface Builder written in Lisp: https://vimeo.com/62618532 The version in the video actually ran on a TI MicroExplorer Lisp Machine NuBus board in a Mac. There were other ports which ran directly on the Mac in Lisp.

The thing was shown to Steve Jobs, and he hired its main developer, Jean-Marie Hullot.


Ah yes, I remember the funny guy facing various dramatic software difficulties in the first part of the video.

This builder does look like a precursor to NeXT's and HyperCard too. I like that the "toolbox" appears next to the mouse, instead of having to go to the top menu or sidebar to select a new tool and back to position to use it. I missed that it ran on a Lisp processor, very cool!

And that the main developer later worked at NeXT and Apple.

> Jean-Marie Hullot authored important programs for the original Macintosh, NeXTSTEP and Mac OS X platforms. These include SOS Interface for the Mac, which later became Interface Builder for NeXTSTEP (1985), and later still evolved into an important part of Mac OS X.

> He also came up with the idea of the iPhone and led the iCal and iSync development teams for Mac OS X (2002).

https://en.wikipedia.org/wiki/Jean-Marie_Hullot

Searching for "SOS Interface for the Mac", I guess SOS is the Sophisticated Operating System, "the primary operating system of the Apple III computer", released in 1980.

https://en.wikipedia.org/wiki/Apple_SOS


Let's not do the usual thing of allowing Jobs to get away with claiming other people's advancements. VB and Delphi by '97 already had that sort of interface builder - arguably much better ones. NeXT was trying to catch up with what Microsoft had already done on Windows by then.


That's great that VB and Delphi had one in '97, but the Interface Builder for NeXT was released in 1988.


Oh, definitely - NeXT was standing on the shoulders of giants, prior art, and historical precursors. He did present it well, though, with a quotable and catchy demo.


Honest question as a Java lover: do Rust and TS have slow compile times or something?

Because Java has a robust type system, and yet I've never had any issue with its compile times. And the parsing in the IDEs is so fast and good that I pretty much get all the feedback in real time.

Ah, that's the luxury of using a battle-tested language that other people have invested billions of dollars of effort into optimizing.

Do you guys not have that in Rust and TS? Bummer, man. Hope you get there someday. This is exactly why I abandoned Scala, btw.


Yes - Rust has pretty slow compile times. It is perhaps my biggest gripe with it.

You might describe Java as having a robust type system, but I would rate Rust's as significantly better. Several things in Rust I miss when working in Java:

* Monomorphized types

* Sum and product types. Think sealed classes in Java, but with better ergonomics (sketched after this list)

* A really clever type system that prevents things like ConcurrentModificationException and data races (though not deadlocks or race conditions in general)
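
To make the sum-type point concrete, here's a minimal Rust sketch (my own example, not the parent's): an enum with exhaustive matching, roughly what Java's sealed interfaces plus records give you, and a taste of how the borrow checker turns a would-be ConcurrentModificationException into a compile error:

    // Sum type: a value is exactly one of these variants.
    enum Shape {
        Circle { radius: f64 },     // each variant is a product type
        Rect { w: f64, h: f64 },
    }

    fn area(s: &Shape) -> f64 {
        // `match` must handle every variant, or the compiler complains.
        match s {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }

    fn main() {
        let shapes = vec![
            Shape::Circle { radius: 1.0 },
            Shape::Rect { w: 2.0, h: 3.0 },
        ];
        for s in &shapes {
            println!("area = {}", area(s));
            // A `shapes.push(...)` here would not compile: the borrow
            // checker rejects mutating a collection while the loop
            // borrows it, where Java throws
            // ConcurrentModificationException at runtime.
        }
    }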

Though, IIUC, Rust made early decisions about module structure that have really hindered compile speed, not necessarily tied to the type system.

Another big factor that makes Rust slow is optimization passes and slow system linkers; it doesn't have a JVM that can warm up and optimize things at runtime.
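
On the linker point: a common mitigation (a sketch on my part, not anything the parent prescribed) is pointing Cargo at a faster linker such as lld via a .cargo/config.toml; the target triple below assumes x86-64 Linux:

    # .cargo/config.toml
    # Use LLVM's lld instead of the default system linker.
    # (Assumes lld is installed; adjust the triple for your platform.)
    [target.x86_64-unknown-linux-gnu]
    rustflags = ["-C", "link-arg=-fuse-ld=lld"]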

Source: I work with both Rust and Java on a daily basis.

Edit: The way the type system works out in general makes me far less worried about making sweeping changes in a Rust codebase than in a Java codebase, though there are still logic bugs that I miss occasionally. Still, it moves quite a bit of bug finding from "run the program and see" to "make it pass typechecking", which is quite a bit faster than compiling; you can typecheck Rust without compiling it.


I find using cargo watch really helps with my perceived compile times - it runs a check and then a build on every file save. By the time I open a new terminal to test things out, it is ready to go!
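
For reference, an invocation matching that workflow (my example; cargo-watch is a separate crate you install once):

    # install the watcher once
    cargo install cargo-watch
    # then re-run `cargo check` followed by `cargo build` on every save
    cargo watch -x check -x build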


In my experience, Rust still compiles as fast as I need it to (other than the very first fresh compilation), and using language servers / rust-analyzer I get literally instant feedback in VS Code as I hit save. Not to mention autocomplete...


> Ah, that's the luxury of using a battle tested language that other people have invested billions of dollars of effort into optimizing.

Making your compiler fast is easy when it doesn't do any work. The Go compiler seems to be even faster than javac.


Point well taken. Go feels painful to use compared to Java due to its weak type system. (To be fair, I have NOT used it since generics were added, so I don't know how much that helped -- generics were always my main pain point, since their absence made collections kinda useless. It's not a coincidence that Java added generics (1.5) soon after it added collections (1.2).)

Anyway, I'm not sure whether the extra power of Rust's type system inherently can't be made more performant, and it's just a natural tradeoff, or whether it's only a matter of time before Rust (or some competitor) catches up and we can have it all.


I've got a 500-file, 100k SLOC TypeScript project with instant feedback, which makes me wonder: what is a large project?

Are most people somehow working on giant repos?


So far, no examples of an instant TypeScript project, despite a lot of fans for some reason. Come on, people: one non-trivial (a few hundred kLoC) TS project on GitHub that hot-reloads instantly. If no one can produce even one, something is off, no? Making claims is easy.

Emacs (not even Common Lisp) is non-trivial and notoriously slow, yet still 'instant' compared to any TS project I've found, on the same hardware - not just hello world, but including hello world; even that is super slow. SBCL is Common Lisp and it's very fast, almost always instant, obviously depending on what you are doing. CLOG is a joy for frontend work, unlike TS.

But many people here say TS is instant (and we use the same tooling as people here), so: one non-trivial example, please. No one has produced one, and everything I try is dog slow. Not even comparatively slow - just spinning beach balls on a modern laptop. Some of the most popular devs make projects that 'hot reload' on my and my colleagues' laptops in 20 or so seconds.


Most of these solutions are based on a cognitive trap, which I don’t know a name for, so I’ll call it the “maker vs. taker” fallacy.

Person A (the maker) has a problem, and works to solve it by creating a tool. The tool is effective and person A applies it to many similar problems to good effect.

Person B (“taker”) has just such a problem and applies the tool. Unfortunately it doesn’t help nearly as much as person A thought it would. A long series of similar people with similar problems come along and fail.

What the “maker” doesn’t realize is that the tool is ineffective unless you also went through the learning process required to build such a tool, which forces you to understand the problem domain far more deeply than a “taker” ever will.

The tool actually hampers the learning process that the maker benefited from, by asking the taker to learn tool semantics instead of spending time on the actual problem.


That is an interesting framing; I think the "maker vs. taker" label is great. Creativity, Inc.[0] uses "suitcase handles" to describe something similar but more general.

[0] https://www.amazon.com/Creativity-Inc-Expanded-Overcoming-In...


My €0.02, from limited and outdated experience with visual programming tools:

1. Screens have limited size and resolution, and those limits get hit rather fast. The problem can be pushed away by zooming, by maybe an order of magnitude, but for a long-lived project growing in size and complexity, it will not be enough.

2. In text, nearly everything is just a grep (fzf, ...) away, with the power of regex if needed. Do the no-code folks nowadays implement equally powerful search functionality? I've had very bad experiences with this.

3. Debugging: although the limited ways of plugging graphical items together act like an enhanced form of strict type safety, I'm sure errors still happen somehow. How is debugging implemented in visual tools?

4. To store/restore the visual model, the tool developer needs to develop a binary/textual/SQL/... representation that serves as the unique source of truth. I think the step from that to a good textual DSL is smaller than the step to a GUI. And with a DSL, the user can more or less effortlessly use all the powerful tools already developed for shells, IDEs, editors, ...

So in my opinion, most visual programming efforts are wasted time and wasted effort.


Extending #2, we've developed incredibly flexible and powerful tools for editing plain text. I've found refactoring to be a breeze with Vim macros, and people swear by Sublime's multi-cursor editing. Even with a good set of hotkeys, I can't imagine a visual environment being as smooth to edit.


There are areas it's good for: beaten paths, modeling time-independent structures, and things that are naturally 2D. Not so great for the final solution, but handy when you need quick iterations, e.g. the interface builder in Xcode, the node system in Blender, sound synthesis…


Speaking of visual anything: I once ran across an experiment* where, instead of leaving non-programmers with the GUI, they were encouraged to experiment with a CLI for their tasks. Guess what: everyone got more productive and preferred the CLI once they figured it out.

So, do you have to waste resources on pretty graphs, or is it more efficient to stick a text-based DSL in there for your non-programmers?

Is the current fetish for meta-meta-meta-programming a feature or a bug?

Can't you give your business analysts a BASIC-like DSL that they can use to script most of their specific problems away? Like, you know, the stuff in MS Office?

Of course, to do that you need to unlearn your C++ and Rust and even Python while designing the DSL :)
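
Purely as an illustration (my own toy, not anything from this thread; the LET/ADD/PRINT verbs are invented), a line-oriented, BASIC-flavored interpreter like that can be remarkably small - a Rust sketch:

    use std::collections::HashMap;

    // A deliberately tiny, BASIC-flavored DSL: one verb per line.
    fn run(script: &str) {
        let mut vars: HashMap<&str, f64> = HashMap::new();
        for line in script.lines().filter(|l| !l.trim().is_empty()) {
            let tokens: Vec<&str> = line.split_whitespace().collect();
            match tokens.as_slice() {
                // LET name value  -> define a variable
                &["LET", name, value] => {
                    vars.insert(name, value.parse().expect("number"));
                }
                // ADD name amount -> increment it
                &["ADD", name, amount] => {
                    *vars.get_mut(name).expect("unknown variable") +=
                        amount.parse::<f64>().expect("number");
                }
                // PRINT name      -> show it
                &["PRINT", name] => println!("{name} = {}", vars[name]),
                other => eprintln!("can't parse line: {other:?}"),
            }
        }
    }

    fn main() {
        // prints "revenue = 1250"
        run("LET revenue 1000\nADD revenue 250\nPRINT revenue");
    }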

* With web search being what it is these days, I'm not even going to try to find a citation.

** Even what TFA is talking about can be considered a DSL - a DSL for debugging and software maintenance. They say it should be graphical, but a good part of what's mentioned would work just as well in text reports.



