You are right. The diagrams are used as explanations, not as the source of the program. But wouldn't it be neat if, when you sketch out the state transitions in a diagram (which is how I think about them anyway), _that diagram_ was the source of truth for the program?
That is the implied point: let's go to the places where we already draw diagrams and check whether we can elevate them into the program.
This can be really tricky to do. I reached the limit of my brain's working capacity designing a priority inheritance system, so I sketched the state machine out in a dot file and visualized it with Graphviz. That worked really well for reasoning through the correctness of the algorithm and for explaining it to others. I tried to structure the implementation code to match it, and I was able to get pretty close; but the actual states were a bunch of bit-packing and duplicated control flow, there to get optimal assembly output on the hottest paths. Each one of those changes was easy to reason about as an isolated, correct transformation of the original structure in code, but would have been a mess visually.
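To give a flavor of the bit-packing (a made-up layout for illustration, not the real lock word), a single clean state in the diagram ends up smeared across several fields of one atomic word:

```c
#include <stdint.h>

/* Illustrative only: a 32-bit lock word standing in for several
   diagram states. The field names and layout are invented here. */
#define OWNER_MASK   0x0000FFFFu  /* thread id of the holder, 0 = free  */
#define WAITERS_BIT  0x00010000u  /* at least one thread is queued      */
#define BOOSTED_BIT  0x00020000u  /* holder's priority has been raised  */

/* What the diagram draws as one state, "contended", is really a
   conjunction of bits at the implementation level. */
static inline int is_contended(uint32_t word) {
    return (word & OWNER_MASK) != 0 && (word & WAITERS_BIT) != 0;
}
```

Drawing every such bit pattern as its own node is exactly what would have obfuscated the diagram.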
Did I understand correctly that the additional complexity came from needing to emit optimal assembly? Or was implementing the logic from the state machine complicated enough on its own?
Designing the state machine was hard. Implementing it was not that bad, because I'd spent so much time thinking through the algorithm that I was able to write it pretty quickly. The implementation difficulty was in optimizing the uncontended case: I had to do things like duplicate code outside the main CAS loop so it could be inlined separately from the main body, structure functions so that the unlock path used the same or fewer stack bytes than the lock path, and so on. Each of those code changes was straightforward, but if I had faithfully copied all those little tweaks into the state machine diagram, it would be so obfuscated that it would hide any bugs in the actual core logic.
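The fast-path/slow-path split looks roughly like this (a minimal sketch of the pattern with invented names, not my actual code):

```c
#include <stdatomic.h>
#include <stdint.h>

typedef struct { _Atomic uint32_t word; } plock_t;

static void plock_acquire_slow(plock_t *l, uint32_t self);

/* Fast path: a single CAS with no loop, small enough that the
   compiler can inline it at every call site. `self` must be a
   nonzero thread id. */
static inline void plock_acquire(plock_t *l, uint32_t self) {
    uint32_t expected = 0;
    if (atomic_compare_exchange_strong_explicit(
            &l->word, &expected, self,
            memory_order_acquire, memory_order_relaxed))
        return;                   /* uncontended: straight-line code */
    plock_acquire_slow(l, self);  /* contended: fall into the big body */
}

/* Out-of-line slow path: the retry loop lives here (and, in the real
   thing, waiter bookkeeping and priority boosting), so it never
   bloats the inlined fast path. This sketch just spins. */
static void plock_acquire_slow(plock_t *l, uint32_t self) {
    uint32_t expected = 0;
    while (!atomic_compare_exchange_weak_explicit(
               &l->word, &expected, self,
               memory_order_acquire, memory_order_relaxed))
        expected = 0;  /* failed CAS overwrote expected; reset and retry */
}
```

None of that duplication changes the abstract transitions; it only changes which machine code executes them, which is why it has no sensible home in the diagram.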
So I decided that the diagram was most useful for someone looking to understand the algorithm in the abstract, and only once they had been convinced of its correctness should they proceed to review the implementation code. The code was a terrible way to understand the algorithm, and the visualization was a terrible way to understand the implementation.
From what I've seen, when code is generated from formal specs it ends up being inflexible. However, do you think it would be valuable to be able to verify an implementation against a formal spec?
People do that, and they find very tricky bugs. One person did it by translating C code line by line into TLA+; another represented the state machine in Coq and checked predicates validating it in the source. But I don't think a visual representation of the state machine would have diagnosed the bugs the formal checkers did.
I just realized my previous comment left out what I was trying to say, my bad! What I was trying to ask was: would it be possible to generate a formal specification from a graphical representation, and then use that specification to verify the source?
Also thank you for those links! I'll definitely give them a read.
I'm far from an expert in formal verification; I probably should be doing more of it than I do. From my two links, the way I've seen formal verification work is either to translate the code line by line, automatically or by hand, into a formal language and then check for possible orderings that violate properties you care about; or to define a formal model of the state machine and insert validation of all the transitions into the code.
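The second approach can be as humble as a transition table compiled into the binary. A toy version (states invented for the example, nothing like a real lock's):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: three states, named for the example only. */
typedef enum { UNLOCKED, LOCKED, CONTENDED, NSTATES } state_t;

/* Legal transitions, lifted straight from the formal model/diagram. */
static const bool legal[NSTATES][NSTATES] = {
    [UNLOCKED][LOCKED]    = true,
    [LOCKED][UNLOCKED]    = true,
    [LOCKED][CONTENDED]   = true,
    [CONTENDED][LOCKED]   = true,
    [CONTENDED][UNLOCKED] = true,
};

/* Called at every state change; a debug build aborts on any
   transition the model does not allow. */
static inline void check_transition(state_t from, state_t to) {
    assert(legal[from][to]);
}
```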
If you were going to do formal verification from the graphical representation, it would be verification of the algorithm: does it always converge, does it ever deadlock, does it ever fail mutual exclusion? If the goal is for a computer to analyze it, the representation can be precisely as complex as the source code, so yes, it's possible. But at that point it's not useful for a human to read visually.
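Even a property like "does it ever deadlock" can be phrased mechanically over that toy table, though real checkers explore the full interleaved state space rather than three enums:

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { UNLOCKED, LOCKED, CONTENDED, NSTATES } state_t;

static const bool legal[NSTATES][NSTATES] = {  /* same toy table as above */
    [UNLOCKED][LOCKED]    = true,
    [LOCKED][UNLOCKED]    = true,
    [LOCKED][CONTENDED]   = true,
    [CONTENDED][LOCKED]   = true,
    [CONTENDED][UNLOCKED] = true,
};

/* "No deadlock" here just means no state is a dead end: every state
   has at least one legal outgoing transition. */
int main(void) {
    for (int s = 0; s < NSTATES; s++) {
        bool has_exit = false;
        for (int t = 0; t < NSTATES; t++)
            has_exit |= legal[s][t];
        if (!has_exit)
            printf("state %d is a dead end\n", s);
    }
    return 0;
}
```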
Virtually all diagrams are representations of declarations, and the declarations are the source of truth. Visual editors just help you edit them, but rarely come close to validating the underlying structure.
For things like programming languages and markdown, users switch between the modes. For something like SVG, users rarely learn or solve problems at the declaration level.
The real questions come with declaration re-usability and comparison. Two PDFs can look exactly the same but be vastly different declarations, which makes comparison and component re-use essentially impossible.
It turns out that much of the benefit of visual editors is built on an underlying declaration model that supports the user's editing and inspection goals.
So I think the point is not to have the visual be the source of truth, but to have more visualization and visual editors for the sources we have.
There are/were excellent visual editors for Java and Apple GUIs that supported round-tripping (a mode-dependent source of truth). But we seem to have abandoned them, not just because they're old, but because of the extra scaffolding required to do the round-tripping.
So my takeaway has been that any visualization must be source-first and source-mainly, with no or minimal extra metadata/scaffolding, as markdown is now. That would mean the implied point is that we should visualize (or otherwise abstract, to aid understanding) the source we have, instead of using diagrams as the source of truth.
> Schematix provides diagrams as a dynamic resource using its API. They aren't images you export, they don't end up in My Documents. This isn't Corel Draw. In Schematix, you specify part of your model using a graph expression, and the system automatically generates a diagram of the objects and relations that match. As your Schematix model changes, the results of the graph expression may change, and thus the visual diagram will also change. But the system doesn't need you to point and click for it. Once you've told it what you want, you're done.
What an interesting tool! It's rare to see robust data models, flexible UX abstractions for dev + ops, lightweight process notations, programmatic inventory, live API dashboards and a multi-browser web client in one product.
Do you have commercial competitors? If not, it might be worth doing a blog post and/or Show HN on OSS tooling (e.g. Netbox inventory, netflow analysis of service dependencies) that offers a subset of Schematix, to help potential customers understand what you've accomplished.
Schematix is somewhat unique. Direct competitors? Not exactly, but IT asset managers, DCIM, BC/DR tools, and CMDBs are all competitors to some degree.
Some of our best users are professional consultants who use us on a project, which often introduces us to a new customer.
A Show HN would certainly be in order. Thanks for the thoughts!