Visual Programming and Why It Sucks (davor.se)
56 points by davorb on Sept 9, 2012 | 62 comments



For a powerful counter-example, look at the industry-standard visual effects system Houdini, made by SideFX. Houdini programs (actually referred to as 'scenes') are composed of lazily-evaluated pure functions operating on a variety of data structures representing geometry, textures, shaders and so on. The system provides encapsulation, and an embedded expression language for cases where mathematical notation is more concise.

The curious thing about the world's most talented Houdini programmers is that they don't consider themselves programmers at all; their job titles normally include the term 'artist'. I've worked with people who wrote whole procedural simulations, image processing algorithms and shaders, yet swore that they could not program.

I was a seasoned coder by the time I first encountered Houdini, yet it took me months to get my head around it. It took me days to put together systems that artists were able to build in hours. The lesson: my knowledge of text-based, imperative programming was not directly transferable to the node-based, procedural world.

Therefore, don't trust the judgement of a text-based programmer who declares that the tools of a visual programmer suck - it's not just a different dialect or a different language, but a different medium entirely. Effective visual programming systems are out there, but they don't have 'programming' written on them, because 'programmers' are not the target market.


Yesterday I discovered that a friend of mine is well versed in many ideas from the functional programming world. She is an accountant, using Excel.

If you squint your eyes a little, Excel can be seen as functional programming: no mutable data; very short, pure functions that transform individual variables... data abstraction using intermediary tables... filter/map using pivot tables... reduce using ranges...
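
To make the analogy concrete, here is a rough Python sketch (my own illustration; the names and numbers are invented, and Excel exposes none of this directly) of how formulas over ranges line up with map/filter/reduce:

    sales = [120, 340, 75, 410]                 # the range A1:A4
    vat = [round(s * 0.25, 2) for s in sales]   # =A1*0.25 dragged down a column: a map
    big = [s for s in sales if s > 100]         # an auto-filter on the range: a filter
    total = sum(sales)                          # =SUM(A1:A4): a reduce over a range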

It is funny how much the advice of Excel wizards resembles common programming practices.


I'd be curious to know: does your friend make much use of names in Excel (i.e. assigning names to ranges and then using the names as variables in formulas)? and does she write VB code to do things that can't be done in cells?

The analogy to FP is striking, but not quite that complete. Excel doesn't let you define functions (i.e. it doesn't let you lay out formulas in cells and then call them repeatedly with different data) or types (i.e. it doesn't let you make multiple instances of a common model). It has variables, but the naming mechanism is primitive and not well integrated, so most people just work with raw cell addresses. And it arguably does have mutable data, because you can change what's in any cell at any time. I say "arguably" because one can argue the opposite: a spreadsheet defines one big pure function and if you change a cell then you're really just calling that function with a different input or changing the definition of the function. But I don't think anybody "feels" (or, for that matter, implements) spreadsheets that way. The cell contents feel like state and editing them feels like altering a machine as it runs.
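
To illustrate the "arguably": a minimal Python sketch (cell names and formulas invented) of the spreadsheet-as-one-big-pure-function reading, where editing a cell is just calling the same function with different inputs:

    def sheet(a1, a2):                  # the whole spreadsheet as one pure function
        b1 = a1 + a2                    # cell B1: =A1+A2
        b2 = b1 * 2                     # cell B2: =B1*2
        return {"B1": b1, "B2": b2}

    print(sheet(1, 2))   # {'B1': 3, 'B2': 6}
    print(sheet(5, 2))   # "editing A1" is just a new call: {'B1': 7, 'B2': 14}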

To some extent it's a matter of how you frame it. If one expanded the identity of, say, a Haskell program to include both its code as it changes over time in the editor and the data that it gets applied to over time, it would seem mutable. No one looks at a Haskell program that way because we draw sharp lines between editing it, compiling it, and running it. But in spreadsheets those lines don't really exist. The program, the data it's applied to, and the editor are all together and the whole thing is live.


She is using names a lot, but no VB.

You define your own functions with IF/ELSE and the like and then replicate them by dragging to other cells, I suppose. So that is kind of an anonymous function that you map over a vector of data.

So instead of mutable data, let's call it side effects: there are certainly no side effects. One sheet defines a complete set of data that does not change at run time, and nothing besides that sheet determines the outcome of the calculation.

Funnily enough, she immediately understood the concept of mutable data and said that she does not think of her Excel sheets as mutable -- she mostly does data analysis and reports, so she considers her inputs very much immutable.

Well, the analogy is not perfect. But the similarity is striking.


Although the thread is old I want to add something:

"There are certainly no side effects. One sheet defines a complete set of data that does not change at run time"

-- this is not true. Just changing an input somewhere (and triggering any related recalc) counts as a side effect. Your view requires "freezing" the spreadsheet at a given point in time. This is misleading, because spreadsheets are so interactive. Even your friend, who doesn't change the inputs after entering them, still has to enter them in the first place. There are a whole lot of side effects happening while she does that.

The Achilles' heel of pure functional programming, I/O, is not a problem for spreadsheets, which are a kind of REPL. Their way of doing I/O is highly specialized (you can edit a cell but reading from a file is not so easy). But that's fine: the fact that spreadsheets aren't general-purpose computing environments is part of their strength.

It's a mistake to look at spreadsheets qua programming language without taking the UI into account, and once you do take it into account, the FP analogy breaks down. In my view, a more reliable way to understand spreadsheets is as belonging to the class of interactive computing environments populated by Lisp, Smalltalk, APL, and Forth.


I use Excel a ton and do quite a bit of VBA programming in my workbooks. I don't know what I'd do without names. I can't recall the last time I used a raw cell address. It is a nightmare trying to keep track of the ranges, and it gets exponentially worse when you have a multi-sheet workbook with many references to cells on other sheets.


What's the maximum number of named ranges you've used in a spreadsheet? Don't you find the interface for defining names, or viewing them, inconvenient? It's completely divorced from the rest of the program IIRC.


It's a question of problem domain. Would I rather design a shader using a visual graph system or text? Probably the graph system. Would I rather build a user interface using Interface Builder or pure code? Probably Interface Builder. But there are plenty of cases where a visual tool will be more cumbersome than text, perhaps because no visual tool has yet been built to deal with a given problem domain, perhaps because it's not really amenable to visual representation.


Another one: http://www.mevislab.de/home/about-mevislab/

I think one of the advantages of a visual paradigm (at least for MevisLab) is that it forces modularity of all components. To avoid the wire-frame spaghetti, functionality can be encapsulated in containers with well-defined input/output ports, which makes that functionality trivially reusable in other contexts.

The productivity increase over competing frameworks is remarkable - I've seen a feature-complete demo delivered to a physician in 3 days. An experienced MS-level engineer took 3 months to implement comparable functionality in a Python/C++ framework.


Visual programming is excellent for certain niches, such as audio/video signal processing and the associated automation logic. Examples include Blender's and Maya's compositing nodes systems, virtual modular synthesizers, so-called "open architecture" sound processors, Pure Data, Audiomulch, and many others (including my own Palace Designer for my startup Nitrogen Logic).

When your problem consists mainly of taking some kind of input data, slicing it, processing it, altering it in separate branches, and recombining it, a dataflow-based visual programming system is an incredibly useful tool.


Yep, for this "pipeline/processing" type of workflow, I've seen several good examples of visual editors. It feels like it should be possible to implement most of the standard unix command-line tools in this kind of environment. With a system like that, you'd let users create what are essentially shell scripts, but with a lower barrier to entry.

I'd really like to see that project. Time to stop procrastinating, I guess :)

Edit: This kind of system would also facilitate multiple inputs/outputs and typed data - stuff that's fairly complex to get right on the actual command line.
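
To sketch the idea (in Python, with all node names invented): command-line-style tools become composable pipeline nodes that pass typed data instead of raw byte streams:

    def cat(lines):            # source node: emits a list of lines
        return list(lines)

    def grep(pattern, lines):  # filter node
        return [l for l in lines if pattern in l]

    def sort_lines(lines):     # transform node
        return sorted(lines)

    # wiring the nodes: cat | grep "err" | sort
    out = sort_lines(grep("err", cat(["err: disk", "ok", "err: net"])))
    print(out)   # ['err: disk', 'err: net']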


A pity that it's just another opinion on the topic. The main reason "method X" doesn't feel as comfortable as coding is that you spend most of your time coding and have trained your brain to think that way. If you taught your brain to work with other methods of generating programs, you might actually feel more comfortable using them than coding.

I think the two main reasons we still write text to talk to our computers are a) the training most creators of new programming languages have in text I/O, and b) that it's much easier, at least for today's computers and software, to parse text than to parse pictures. In fact I'm 100% sure that all visual programming software these days translates your graphical programs into text or byte strings underneath. So you still interact textually; it's just hidden.

And since we're just trading opinions here, let me add this: I believe that a way to program with speech or visual paradigms (maybe not the ones we have today) will probably improve the usability of programming, because untrained people relate more easily to things they can see and "touch". That is, I believe, the reason the mouse is used far more often than the keyboard, even though it is often less efficient on current computers.


Yes, I don't think we are close to anything revolutionary yet, but, at least intuitively, the combination of visual programming, speech, AR, and ILP (and other AI-automated programming) could result in something more efficient and/or less error-prone in the future. Nothing we have now in visual programming achieves that, but we need to go somewhere from here, and textual languages are not bringing us advantages fast enough to curb the millions of software bugs produced every year.


History is littered with people who pompously proclaimed that X is impossible to achieve since XXX attempts have been made that failed.

http://en.wikiquote.org/wiki/Incorrect_predictions

http://www.lhup.edu/~dsimanek/neverwrk.htm


The difference is that Frederick Brooks didn't say X is impossible to achieve since XXX attempts have been made that failed.

He said that X is most likely not possible because of Y (where Y is a function of X).


Actually, he oversaw the worst-managed large-scale programming project in history, and instead of saying "I/we screwed up", he decided to write a book proclaiming that no one could have possibly succeeded, implying he was not to blame.

A pretty stupid conclusion after exactly one attempt at large-scale programming, and IMO at least, it's set the industry back horribly. Everyone now just assumes we can't fly, and focuses on making horses run faster.

</rant>


Wow. First of all, the guy wrote a book over 30 years ago that is still one of the most relevant and truthful books in the industry. Secondly, I've met the guy; he's extremely humble. So, not likely.

Finally, you obviously haven't actually read any of his work, if you think he derives his experience from just one project.


I wasn't around then, but my understanding is that IBM OS/360 was hugely successful and its successors are still around like 40 years later. It took a huge amount of time, but was ultimately successful. I don't think you have the story quite right.


> he oversaw the worst-managed large-scale programming project in history

Really? Have you been following news about gigantic failing IT projects lately?

OS/360 shipped, and later versions are currently running mission-critical apps and making piles of money large enough to sustain a line of high-performance custom CPUs and computers exclusively for running that OS.


Do you care to go into more detail?


Industrial control systems are often excellent candidates for visual programming, particularly because much of the logic is conditional and combinational in nature (that is, IF X AND Y OR (P OR Z AND NOT Y) AND NOT ALARM THEN OPEN VALVE), so they are often programmed in a language called "ladder logic" [1] or "relay logic". Such an approach lets you see very quickly which condition is "breaking" the chain that enables the output to actuate.

After some Googling, here's a screen capture [2] of what it looks like in practice (from the vendor of RSLogix).

On the other hand, if your logic is highly non-combinational (requires latches or states), then such a language can indeed become incredibly annoying to both write and debug, in which case you can often elect to use a traditional programming paradigm (as shown in the top right of [2]).
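
For non-PLC programmers, here is the example rung above written as an ordinary boolean expression -- a sketch using one plausible parenthesization (standard AND-binds-tighter-than-OR precedence assumed), in Python:

    def rung(x, y, p, z, alarm):
        # IF X AND Y OR (P OR Z AND NOT Y) AND NOT ALARM THEN OPEN VALVE
        return (x and y) or ((p or (z and not y)) and not alarm)

    if rung(x=True, y=True, p=False, z=False, alarm=False):
        print("OPEN VALVE")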

[1] http://en.wikipedia.org/wiki/Ladder_logic

[2] http://www.rockwellautomation.com/rockwellsoftware/design/rs... The picture for the main sequencer might be read as "If you're not out of Ingredient A, and you're not out of Ingredient B, and you're not out of Vanilla, ...". There needs to be a "lighted green" path from the left to the output for the device to turn on; if there isn't, the non-green element(s) are the reasons the output is blocked.


Unreal's Kismet (http://www.unrealengine.com/features/kismet/) has been quite successful. There are a lot of little bits of scripting that go into a game where this kind of language makes perfect sense.

Make a whole game with a visual language? No thanks! Give game designers the ability to just drag stuff around when they need to tweak behaviors? Yes, please!

A note about "game designers" might be in order: a game designer can be someone with little technical experience. "Game designer" is a broad spectrum of people from almost-artists (some kinds of level design for example) through full fledged programmers (sometimes titled "Technical designers").


I use Labview.

For anything reasonably complex, it is by far the slowest language I've worked in.

It's still 100% worth it for its use case, and 90% of what you'd want it for is not all that complex.


Labview was also the first thing that came to my mind. In my experience, apart from complex number-crunching (for which it does have a stripped-down textual-code vi), I found it easier to work with than Java. I think that is mostly down to my hatred of unnecessary state (which an OOP-only model encourages). In Labview, the easiest way to store state is to pass the output of a vi (function) into its input the next time it is called. I still don't enjoy using Labview, but I haven't felt there was something I couldn't do in it (in terms of the resulting program).
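
In text form, that "wire the output back into the input" pattern is just state threaded through a pure function. A minimal sketch (names are mine, not LabVIEW's):

    def step(state, sample):
        # a pure "vi": returns its new state alongside its result
        total = state + sample
        return total, total

    state = 0
    for sample in [3, 1, 4]:
        state, result = step(state, sample)
        print(result)   # 3, 4, 8 -- a running sum with no mutable fields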


> I found [LabVIEW] easier to work with than java

This is what we call "damning with faint praise".

I agree that the de-emphasis of state and the stream-based programming model are wins for LabVIEW. (Though the real win is: It seems to support every piece of lab hardware by one means or another. This is a difficult feature to give up.) It's the mere mechanics of LabVIEW programming - the need to massage a layout of wires that were difficult to draw, read, print out, diff, or version-control - that used to drive me crazy.

Though the last time this came up on HN a National Instruments employee turned up to say that things have improved since last I used LabVIEW. Apparently they've lavished attention on the design problem of virtual-wire layout and tracking.


The new versions of Labview are pretty good about layout. The auto format command (^e I think) usually produces easy to read code. Diff is still pretty bad though.


Labview was the first thing that came to my mind as well. Everyone thought I would love it (I am an electrical engineer) and assumed I would relate to a schematic representation of code. Some may like it, but I find it far too verbose to draw code with wires and functions. I far prefer writing a for loop in text to putting a window in and hiding everything behind drop-downs, hidden windows and wires that go off the screen.


The perks come in where situations like "I want to set up an experiment in the lab, but I want to monitor it from home so I'm not stuck there all weekend" are only slightly more difficult than Hello World.

Academic classes that introduce Labview tend to focus on basic programming tasks. This is not a good way to showcase its benefits.


My experience with Labview is that it makes (relatively) difficult things easy (creating GUIs), and easy things difficult (complex number crunching).


I was forced to use it for some time. I used to bang my head in frustration working out how to wire something up that I'd know how to do in two lines of code.

That, and I was using the built-in fitter to fit a straight line, and it would randomly screw up the fit on data almost identical to data it had worked on in the previous iteration.


Oh god, the frustration.

The main problem, I think, is that it retains the vestigial features of being designed by electrical engineers who just refused to take any lessons from the 50+ years of experience we have had with programming languages.

---> Readability

You fall into the trap of thinking a flow diagram is an easier way to understand things. You are wrong. On any non-trivial project, it's impossible to look at things and grok what they are doing, what they are part of, or why that cable is going into that other system. You want to know what that cable does? Better follow it through the wormhole to see which variable it is! Realized you need a variable from somewhere else? Good luck bringing it over without a) creating a rat's nest or b) obscuring the fact that it comes from miles away. I could show you a simple for loop with an iteration variable, building up an array as it goes, and it'd take you a minute to get what it does. Readability sucks. Ignored lesson #1.

---> Refactoring

Fundamentally, the way you build code in Labview tends to mean that stuff gets added on all the time and it's all spaghetti. Granted, you can split things out into smaller modules initially (though interfacing them sucks). But there is no simple way to refactor things after you've written spaghetti. And because it's hard to manage external imports, you will write spaghetti. Ignored lesson #2.

---> Spatial Issues

The spatial paradigm is the culprit in many of these cases. Part of the problem is that it's hard to create spatial scope and visually import variables from elsewhere while keeping the imports clear and easy to inspect all at once.

Don't even think of keeping things tidy: wires will be everywhere, and the auto-arranger doesn't work well.

You want to add code? Well, you should have left space for it in the first place! Now you have to move all your code aside to make space for the new stuff. Are you fucking kidding me?

So, you want to copy and paste. Good luck. All your wires disappear, and since you haven't memorized what the code looks like, you have to understand the code again before rewiring everything. Just like having to rename variables after copy-paste, except dumber and much slower. This is why refactoring is hard.

---> Modern Tools

Version control, you say? Forget about it! No diffs, no patches, no merging, no collaborative coding. Do NOT EVER EVER use anything but The Master Copy of the Code (tm), or you'll end up with two versions and no one will know what is in which. Unit testing? Costs $1500 and is total junk regardless. Continuous integration? What is that, even?

---> The UI

The UI has its own shortcomings. Slow as balls even on a fast computer. Limited undo (are you shitting me?). Undo history used to disappear after you saved; maybe they've fixed that. The window management is a mess. Sometimes when you try to quit, it says you made changes but you have no idea what changed. Good luck if you accidentally dragged some fucking wire somewhere so that now your robot flips out when reading gyro input. You can't know.

---> The Libraries

Extremely verbose and difficult to use (especially math and strings).

Labview is just an all-round bad experience. Like I said, they've learned NOTHING from modern programming. And yet they tout this ignorance to you (and unsuspecting electrical engineers) with pride. This is just what I have off the top of my head. I don't even have my hate-list with me (yes, I compiled one to let off steam every time something was frustrating).


Thank you for saying all this so I don't have to. I also have a hate list, which I don't have available at the moment, but I can easily add a few things to yours off the top of my head. Frustration really is the right word when working with Labview.

The lack of diff/merge alone is reason enough not to use it. Add to this the fact that you can hide wires and blocks under other blocks, often by mistake, and you are left with a mess that can be diffed neither by machine nor by visual inspection.

Imagine a programming language where you could remove code by putting Tipp-Ex over it and then writing new code on top of the Tipp-Ex -- and both the new code and the old code would run.

If/switch statements are blocks where you can only see one case at a time. Again, visual inspection takes forever, since you have to manually click through every single case. Imagine a code editor where code folding is always on and you can only expand one section at a time. Add nested switch statements and you will have a lot of fun.

The refactoring is, as you say, a nightmare: you start using a few blocks for a task, then later realize you need it in 10 places, so you make a subvi for it (Labview's notion of a function). Now you have to manually rewire everything! Ahhh. Find and replace? Forget it.

The only way to use it is the mouse, and you have to use it a lot. Just accept the fact that you will get premature RSI.

Dataflow programming. Haha. Every program of scale I've seen uses the "error-in-error-out" pattern, connecting every block into a fixed sequence of execution. For those of you who haven't used Labview: the error-in-error-out pattern is basically a big "master wire" going through every single block to force the execution to resemble a sequence; if there is an error in any block, the output will contain an error. With dataflow programming, the program is supposed to choose the execution path automatically by lazy evaluation so you don't have to, but with error-in-error-out you lose all that benefit.
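
For those who haven't seen it, a rough sketch of what error-in/error-out amounts to in text form (function names invented): every block takes and returns an error alongside its value, which both serializes execution and short-circuits on failure:

    def block_a(err, x):
        if err: return err, None       # pass any upstream error straight through
        return None, x + 1

    def block_b(err, x):
        if err: return err, None
        if x > 10: return "overflow", None
        return None, x * 2

    err, v = block_a(None, 4)          # the "master wire" threads through every block
    err, v = block_b(err, v)
    print(err, v)                      # None 10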

Readability... yes. For any function that is not a simple mathematical operation or some standard block from control theory, it is impossible to figure out how it works without opening it up. Have you ever seen bad function names? Wait until you see badly drawn 32x32px icons trying to explain 1. what they do and 2. what the different inputs/outputs mean. Try to draw array_map(), substring(), implode() or any other favorite function from your standard lib as a 32x32px icon. You will probably end up with a few boxes pointing at each other. And EVERY function will look like this! I tell you, putting Chinese characters in the boxes would be easier to understand.

Another funny thing is opening a vi that someone created on a 32-inch, 6400x4000px monitor when you yourself are on a shitty 15" laptop. You have to scroll back and forth, both vertically and horizontally, like a madman. The minimum requirement for being anywhere near productive is 2x 24" monitors.

-----------

It does, however, have some bright sides I haven't seen in other languages. One thing I like is that if-statements always work like the ternary operator (disregarding the ternary operator's ugly syntax in most languages). As soon as you assign to an output in one case, you are forced to assign to it in all cases. This makes it impossible to miss an assignment in an odd case, and you can always trace the dependencies and follow the source of the variable. Basically, you create a new variable every time you go through an if-box that assigns inside the box, and after that you don't have to worry that your old variable is still used anywhere.

Another nice thing is that once you realize wires = variables, you can quickly determine how much state your application uses -- not for performance reasons, but for logical reasoning.

The type system is also interesting. The best way to describe it is type-safe duck typing. Say you have a function sum(x,y) { return x+y }. If you wire an integer to the x input, the output automatically becomes an integer; if you wire a float to the input, the output becomes a float. And this is determined at compile time! If you wire the float output to an int input, you will get a compile-time error -- actually even before compile time, because you see the error as soon as you connect the wires.
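
That behaviour resembles type inference with parametric polymorphism in text languages. A rough sketch using Python type hints (checked statically by a tool like mypy rather than by wiring):

    from typing import TypeVar

    T = TypeVar("T", int, float)

    def sum2(x: T, y: T) -> T:
        return x + y

    n: int = sum2(1, 2)          # output type follows the inputs: int
    f: float = sum2(1.5, 2.5)    # ...and here: float
    # bad: int = sum2(1.5, 2.5)  # rejected by the checker before the code runs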


Wow. Great commiseration. We should share notes sometime to co-write a comprehensive labview nags blog post :)


I've seen a few good examples of visual programming that work.

(1) Scratch. Scratch is just so much fun. When I have weeks where I'm totally burned out from coding, I can really enjoy playing around with Scratch.

(2) There are quite a few proprietary systems for ETL and data analysis that support the creation of processing pipelines. Often you're snapping together relational operators, much in the style of programming Apache Pig. I know data analysts who love working with these systems, and when I've needed to make small changes to their analysis pipelines I've found it easy.


If you like Scratch, check out BYOB (http://byob.berkeley.edu/). It is just like Scratch except you can "build your own blocks" - i.e. abstractions.


At work, we had a short experimental phase using a visual programming tool to create web applications [1]. For simple CRUD applications it was actually really nifty. But as soon as we got to more complex logic, it turned into a nightmare.

I think part of the reason is that a chart or a graph is great for giving you an overview, conveying the gist of a message, carrying informal information. Programming, on the other hand, is all about hard, precise logic. Text is a much better medium for presenting this. And it's much easier to divide and conquer in text form, with meaningful function and variable names.

For example, how would you represent the following simple code snippet graphically?

    if (!customer.hasOrderedSince(DateTime.Now.SubtractYear(1))) { offerCoupon(10); }

In the aforementioned platform, this would be one rhombus (representing the if statement). You'd have to click it to see what the condition actually is. If you have a complex flow, you have to click all the nodes to see what's actually going on. In text form, it ideally fits on one page and is much easier to grasp and debug.

Someone brought up that GUIs and mouse input have been successful, and that's true. But there is a big difference between using a computer to do small atomic tasks and programming one.

[1] http://www.outsystems.com


I am an "enterprise" programmer currently working on an invoice automation project. Since invoice approval processes vary across clients, we provide a (3rd party) workflow designer to personalize/customize the process.

And it sucks.

It sucks exactly for the reasons in the article. It sounds great and looks great for simple processes. But that isn't the reality of approval processes and integration with existing systems. The complexity of these "visual" workflows grows exponentially with the complexity of the real process -- complex to the point where it can no longer be easily tweaked and maintained by the original designers/maintainers, much less the client.

Yes, there are some systems that apparently work great: visual effects (which sounds much more functional, in the programming sense, than real-world programming), and Scratch, which helps kids make games. Scratch is inherently simple, and I bet if you asked someone with a history in programming what it is like to scale a program up in Scratch, you'd find the same thing: that it starts off great, but as the complexity of the requirements increases linearly, the complexity of the "program" increases exponentially -- and that they wish they could just port it to something like Python or, at the very least, embed a language in it.

(Side note: I wonder if this says something about (natural) language as well. No natural language looks like a visual workflow -- not even hieroglyphics. You gain much more expressiveness and flexibility from the simplicity of the concepts. It doesn't get much simpler than a single linear string of words, and I think programming languages gain the same expressiveness and flexibility by having those characteristics.)


Just because it hasn't been done well doesn't mean it can't be. And as other comments have pointed out, visual programming works extremely well in domain-specific contexts; the challenge is to translate that to general-purpose programming.

The core of the problem is twofold: first, visualisation of higher-order functions in a way that doesn't suck; second, an easy way to move back and forth between details and the big picture. Those are two things at which text is already very, very effective.

Though the project is still very young, my horse in this race is a statically typed concatenative programming language. Visual programming is naturally point-free, and visualisation of higher-order functions is actually rather easy if you have static typing. Most importantly, the program can be edited textually or graphically.
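
For the curious, a toy sketch (entirely my own, not the poster's language) of the point-free, concatenative style being described: a program is a pipeline of "words", each of which transforms a stack:

    def word(f):  # lift a unary function into a stack transformer
        return lambda stack: stack[:-1] + [f(stack[-1])]

    def run(program, stack):
        for w in program:     # concatenation of words = composition of functions
            stack = w(stack)
        return stack

    double = word(lambda x: x * 2)
    incr = word(lambda x: x + 1)

    print(run([double, incr], [5]))   # [11] -- the program "5 double incr"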

I regularly see comments and articles here that indicate a desire for such a tool. They give me the resolve to keep working to fill that gap.


I'm pretty sure Bret Victor ("Inventing on Principle") has a visual programming pet project that deals with HOFs. Have you seen it?


I've seen some of his work; didn't think it could handle anything higher-order, at least not in the way I mean to. I'll have a demo soon enough.


I find myself wondering if the preference, in drawings, for planar graphs is more restrictive than what is readily expressed linguistically.


" But let’s be honest: very little software in the real world outside of a spreadsheet document is actually that simple."

http://scratch.mit.edu/

More than 2.7 million pieces of software, all created with a visual programming tool, and mostly by little kids.

Yes, most of them are simple (though there are some impressively complex projects on there). So what?


Here's Tower of Hanoi written in brainfck. http://www.clifford.at/bfcpu/hanoi.bf

Scratch is awesome and, as I mention in the article, one of the strengths of visual programming is that it requires much less training before you are able to actually "do" stuff. However, just because you *can* do something doesn't mean it's the most optimal or the easiest way to do it.


It can be the most optimal and easiest way if you just want to solve your problem without spending several years learning how to program.


Since when does it take several years learning to program to solve a problem?


http://scratch.mit.edu/channel/featured

1) Pick one of these at random. 2) Assign a person who knows nothing about programming to write it in C. 3) See how long it takes.


Also 4) the person you're assigning to the job is a third grader.


Why does it need to be C?

Why not python or javascript?


Feel free to run the experiment. No matter what text-based language you use, it's going to take a helluva lot longer than Scratch. That is the actual point here.


>Yes, most of them are simple (though there are some impressively complex projects on there). So what?

So you just repeated the author's complaint (that it only works for simple stuff), and yet you act like you challenged it.


I'm pointing out that the author's claim that "very little software is actually that simple" is bogus. Which it is.

I would bet that in terms of sheer number of procedures, the majority of software written today consists of spreadsheet and word processor macros.


Playmaker is not designed to replace all text-based programming in Unity, but to replace it in the use cases where it excels. For what most people want to do most of the time in this context, a visual language is easier to write in, debug in and maintain.


I agree with you that Playmaker isn't designed to replace all text-based programming and I believe that the tool works well for what it was designed for, namely to allow artists to do a lot of stuff without having to wait for the programmers and to allow people to do all of that, for lack of a better word, "brainless" programming that most programming actually is.

But when it comes to debugging and maintenance, I'm not so sure. Especially when it comes to debugging; I see no reason why that should be easier with a visual language.


It's easier to debug because this kind of logic is asynchronous over human-perceptible lengths of time. With a visual language you get at-a-glance information about which step is currently playing, and it's much easier to see what's going wrong and why.

It's also not about "brainless" programming or just for artists - I find that quite an elitist attitude. It's about getting to the core of the problem without the unnecessary boilerplate or engineering requirements that complicate solving the immediate problem in the quickest, safest, cleanest way.

At some point of complexity you end up fighting the limits of the system. When you hit that point then visual programming really slows you down. Being able to use text-based programming as a fallback is a real advantage there.

It's the same debate that people have about DSLs. Your DSL is Turing complete. So is C++. C++ is more flexible, faster and more powerful. It has powerful, mature compilers and debugging tools. So clearly you should always use C++ instead of any DSL.



Textual and alternative graphical representations of programs can be used interchangeably; text is just parsed into trees of symbols, after all.
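
Python's own ast module is a handy demonstration of the point: source text round-trips through a tree of symbols that a graphical view could just as well edit (ast.unparse needs Python 3.9+):

    import ast

    tree = ast.parse("total = price * qty")
    print(ast.dump(tree, indent=2))   # the symbol tree behind the text
    print(ast.unparse(tree))          # back to text: total = price * qty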

This feels like a pretty obvious thing to do: are there programming environments out there that let you switch the view back and forth between textual and nontextual representations, designed to have good aesthetics/readability/editability in both forms?


On the other hand, there are a lot of concepts that can't be articulated clearly in a programming language but are pretty clear if you draw a picture -- for example, specifying the boundaries of critical sections in a system with many classes. This applies to pen & paper diagrams, though; no idea whether there's a sane way to turn such diagrams into a visual programming tool.


I agree that text is better in the long run, the way TeX is better than Word and the command line is better than a GUI. Yes, I mean "in the long run"; otherwise it may not be true.


I've never used uBot Studio, but a lot of people seem to love its point-and-click programming.


I've been kicking around the idea of building a visual programming language for about a week, so this article is well timed for me.

Let me throw out a counter-hypothesis; please feel free to tell me where I'm wrong.

I can imagine that, when you have to build a complex program out of a bunch of boxes, and it gets to the point where a simple for-loop requires a bunch of boxes, the whole program would be way, way too complicated.

This is the same problem you'd have if you wrote a Java app that all fit in a single file, with every class of the Java frameworks also defined in that file -- it would be a massive file and damn hard to understand.

So, I think the same solution should work for a visual programming language: Modules (as they're called in erlang) or Classes (in Java etc.)

So, at the highest level, your program would be a small collection of boxes with lines between them. Maybe these are the major features of your app.

If you clicked on a box, you'd be drilling down into that level and you'd see how that feature was built. And it would be composed of a series of boxes- each of which was probably akin to a module or a class.

And if you clicked on one of those boxes, you'd be down into that module or class, and it is also composed of modules or classes.

You could drill down this way, until you ultimately got to the lowest level boxes which are akin to lines of code or components of a line of code in a text based language.

I think that would be visually easy to manage -- you'd only be dealing with a relatively small number of boxes and pipes at a time, and conceptually drilling down and pulling back is something that I think non-programmers would be able to understand.

(Plus drilling all the way down would be something that programmers want to do, but non-programmers (the presumed target of a visual programming language) might not care about.)

Second hypothesis: if it makes no sense to drill all the way down, then at some point, when you drill down, you're dropped into text-based code. Thus the higher levels of abstraction could be done visually, but the components themselves could be done in text -- a hybrid approach that might give the best of both worlds.
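
A minimal sketch (all names invented) of how the drill-down-with-text-at-the-leaves idea might be modelled: the program is a tree of boxes; inner boxes are drawn visually, and leaf boxes bottom out in ordinary text code:

    class Box:
        def __init__(self, name, children=None, code=None):
            self.name = name
            self.children = children or []   # sub-boxes revealed by drilling down
            self.code = code                 # leaf boxes hold plain text code

        def show(self, depth=0):
            print("  " * depth + self.name + (f": {self.code}" if self.code else ""))
            for child in self.children:
                child.show(depth + 1)

    app = Box("Invoicing", [
        Box("Import", code="rows = read_csv(path)"),
        Box("Approve", [Box("Notify", code="send_email(approver)")]),
    ])
    app.show()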

Thoughts? (and I'd love pointers if anyone knows of a system like this that is open source.)

Also, if I did try to build this thing, it would be my first ever language. I'd love to have a checklist of the basic components of a computer language that I could work from... otherwise I'm afraid I'll implement only what I need and probably miss something, or do it wrong. But I haven't found any good guides so far.


I've been working on a side project in this area for a while now. I think what you say about the boxes is pretty sensible and similar to how I see it. All I'll add is that it's not just the "bottom" that needs to be text-based. Some things are just best represented by text, even when you are looking at them at a high level or abstracted (especially names of things, numbers, human-use things like addresses, etc.), whilst other things really need no textual representation at all, even if we are familiar with them that way (loops and other control structures, as well as graphics [say no to hex-based RGB!], etc.).

My advice, if you want to make it, is: just start. I farted around for a while thinking "someone must have done something similar before and have some recommendations", but never really found much that helped (just a lot of noisy people pointing to 1970s visual programming systems that failed for reasons that aren't really applicable to a modern project).


Have a look at the DRAKON language: http://drakon-editor.sourceforge.net/

It is an algorithmic visual programming language developed for Buran, the Russian space shuttle project.



