I'm going to disagree on that. Every day I wish I could intermix textual and pictorial representations of logic in the programming I do. In particular, any series of computations that can be represented as a directed graph, e.g. a streaming data workflow, or state machine, is much more easily understood pictorially than textually.
Flowcharts and decision trees exist for a reason: to describe algorithms.
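To make that concrete: the textual form of such a graph is just a transition table. A minimal sketch in Python (the states and events below are invented for illustration) shows that every edge is there in the text, but you have to rebuild the picture in your head to follow it:

```python
# A tiny state machine as a transition table: (state, event) -> next state.
# States and events are invented for illustration.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    """Advance one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = step(state, event)

print(state)  # the machine ends back in "idle"
```

Drawn as a directed graph, the same four edges are visible at a glance; as text, the topology is implicit.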
> In particular, any series of computations that can be represented as a directed graph, e.g. a streaming data workflow, or state machine, is much more easily understood pictorially than textually.
As long as it is very simple. Electronics already has a highly developed visual language for describing function - but if what goes on inside every chip were illustrated the same way as what goes on between chips, the result would be entirely unintelligible. Instead, any visual representation works at a particular scale: well-known portions are drawn as blocks with cryptic textual notes next to each interface (ACK, EN, V0+, CLK, PT2, HVSD, WTFBBQ, etc.), labels to identify manufacturer or type, and an expectation that you know what they do or can find out on your own (not an expectation that you understand how they do it.)
Anything simple enough to be completely expressed in human-comprehensible pictures should be exposed to the user and modifiable (if not through pictures, then through forms.) I totally agree, if that's what you and this article are trying to say. My experience trying to encode actual human workflows in BPMN has taught me that pictures make it harder to express anything of sophistication than words do - because of words like "with," "each," "all," "if" and "when"; because of the ways things change over time; and because of separate but overlapping, interacting flows that languages express easily but pictures do not.
In pictures, expressing those relationships means looking all over the diagram for different things and trying to figure out how to draw lines to them; if a condition is once or twice removed from the object of the search, it means untangling massive knots with your eyes and memory.
Theoretically, that is. What it involves in practice is scrawling words all over your picture (just like in a circuit diagram.) Words that express the same relationships of time and type that the picture is trying to project onto a plane - words that could easily be expanded to cover those relationships and eliminate the 18 types of lines, the 25 types of shapes, the 12 types of shape borders, the 16 color schemes, and the long list of rules for connecting them that had to be invented to avoid coming up with a textual syntax.
Yes, and that's how I would use a language that allowed mixed picture-and-text logic flows. At a certain level of abstraction, block diagrams greatly assist understanding of program flow, and it is redundant that I have to write the code and then draw the block diagram afterwards for documentation.
Going back to electronics, I don't think anyone would argue that schematic block diagrams are inferior to reading the raw netlist. Similarly, I feel programming could be improved if IDEs for popular languages allowed connecting functions together in a streaming manner. Of course, I am aware this exists - Simulink, LabVIEW, the FPGA schematic workflow - but these are niche tools that I don't work in.
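For what it's worth, the "connecting functions in a streaming manner" part already has a decent textual shape in ordinary code. A minimal sketch in Python (the stages are invented for illustration) - generator chaining reads a lot like wiring blocks left to right:

```python
# A tiny streaming pipeline built from generator stages.
# Each stage consumes an iterator and yields results lazily,
# so the chain behaves like blocks wired together.

def source(n):
    """Emit the integers 0..n-1."""
    yield from range(n)

def square(items):
    """Transform stage: square each item."""
    for x in items:
        yield x * x

def keep_even(items):
    """Filter stage: pass through even values only."""
    for x in items:
        if x % 2 == 0:
            yield x

# Wiring the blocks together; an IDE could plausibly draw this chain.
pipeline = keep_even(square(source(6)))
print(list(pipeline))  # [0, 4, 16]
```

The block-diagram view of this is exactly the dataflow graph source -> square -> keep_even.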
"I don't think anyone would argue that schematic block diagrams are inferior to reading the raw netlist."
Well, no, but some may well argue that reading the HDL is better than a diagram. I have experience working with both HDL and schematic entry in the FPGA world, and in my estimation text-based HDL is far better than working with a diagram.
Of course, YMMV, my brain may just be more optimized for processing text instead of images.
Many times I have wished there were an HDL for PCB design input instead of schematic tools, now that a board often has very few discrete/analog parts: large chips include almost everything needed, so you mainly spend your time connecting them together, possibly with a bit of plumbing but not much, and the only remaining discrete components are very repetitive - a ton of similar decoupling capacitors, pull-up/down resistors, termination resistors, a couple of voltage-divider resistors, and a few other common functions.
That should be a great fit for a textual HDL, instead of labouring through schematics mainly linking pins to pins again and again. It would even be much more expressive, now that chips are often so big that they cannot be represented efficiently as a single symbol on a single sheet but are split into smaller blocks that look like HDL ports without the flexibility; and now that µC, SoC and other kinds of chips have pins so heavily multiplexed that they no longer have one clear, expressible function, meaning that grouping them into blocks is more a random choice than a good solution. That multiplexing also means you will often have to change, and change again, how your wires connect in the schematic - and that would be much easier to do in an HDL.
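A rough sketch, in plain Python rather than any real HDL or EDA tool, of what such textual PCB input could look like (every part name, net and pin below is invented). The point is that a netlist is just data, and the repetitive parts become loops instead of symbols dragged onto a sheet:

```python
# A netlist as plain data: net name -> list of (part, pin) connections.
# All parts, pins and nets are invented for illustration.
netlist = {
    "VCC_3V3": [("U1", "VDD"), ("U2", "VIN"), ("C1", "1"), ("C2", "1")],
    "GND":     [("U1", "VSS"), ("U2", "GND"), ("C1", "2"), ("C2", "2")],
    "I2C_SDA": [("U1", "PB7"), ("U2", "SDA"), ("R1", "2")],
    "I2C_SCL": [("U1", "PB6"), ("U2", "SCL"), ("R2", "2")],
}

def pins_of(part):
    """List every (net, pin) a part appears on - the textual 'symbol'."""
    return [(net, pin) for net, conns in netlist.items()
            for p, pin in conns if p == part]

# Repetitive fan-out (pull-ups, decoupling caps...) becomes a loop
# instead of yet another symbol on yet another sheet.
for r in ("R1", "R2"):
    netlist["VCC_3V3"].append((r, "1"))  # pull-ups to the 3.3 V rail

print(pins_of("U2"))
```

Re-muxing a pin is then a one-line edit to the data, instead of ripping up and redrawing wires.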
-----
That's why my mind was blown when a software job forced me to use a graphical tool like Scade. It felt like going 20 years backwards, to when HDLs were not yet popular in electronics and we had to design FPGAs and the like with schematics. And this was even worse, because the graphical representation looks parallel and concurrent, as an electronic schematic does, yet it matches nothing on the software side: the specification/design document you have to implement is generally sequential, not concurrent, and the generated code and the way the CPU/computer works are sequential as well. So you have this weird-looking graphical part in the middle, which looks parallel but isn't really, and it messes with your brain because you perpetually have to translate from the sequential specification into it, and from it back to what it really does sequentially.
It was appalling to take this job and discover that they considered it an improvement over C/Ada/whatever regular programming. And I haven't even mentioned the tooling: what could have been a simple textual diff turns into an epic nightmare where you are never sure you can trust the result - if you manage to get a result at all.
> I'm going to disagree on that. Every day I wish I could intermix textual and pictorial representations of logic in the programming I do. In particular, any series of computations that can be represented as a directed graph, e.g. a streaming data workflow, or state machine, is much more easily understood pictorially than textually.
I've done this. It doesn't work. You need more details than can be cleanly represented on a diagram. How do you do namespacing for example? Which database schema will that box connect to? How will it reconnect?
All 'visual programming languages' fall back to text boxes constantly. Inevitably the contents of those text boxes are needed to understand or execute the visual representation of the program.
Sure, the contents of the text box are necessary. But treating those contents as a black box is nothing new, and I don't see it as a problem - that's pretty much every function call ever: all I'm interested in is the calling signature. A combination of pictures and text would suit me far better than what we have today, which is text everywhere, and diagrams/flowcharts afterwards if you ever get around to writing documentation.