I agree, but what's even better than a specification is an executable specification. If we see a decision table as data (see discussion below), then it can be both -- no chance for the implementation to go out of sync with the spec, and "don't repeat yourself" for free.
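For instance, here's a minimal sketch of what I mean (the rule names and the shipping logic are invented for illustration; a real off-the-shelf engine would be richer): the table is plain data, and one generic interpreter executes whatever the table says, so the table is simultaneously the spec and the behavior.

```java
import java.util.*;

public class DecisionTable {
    // One row of the table: the condition values it requires, plus the resulting action.
    // A condition omitted from a row is treated as "don't care".
    record Rule(Map<String, Boolean> conditions, String action) {}

    // Hypothetical shipping rules, purely for illustration.
    static final List<Rule> RULES = List.of(
        new Rule(Map.of("member", true), "free shipping"),
        new Rule(Map.of("member", false, "largeOrder", true), "free shipping"),
        new Rule(Map.of("member", false, "largeOrder", false), "standard rate")
    );

    // Generic interpreter: the first row whose conditions all hold wins.
    static String decide(Map<String, Boolean> facts) {
        for (Rule rule : RULES) {
            boolean matches = rule.conditions().entrySet().stream()
                .allMatch(c -> Objects.equals(facts.get(c.getKey()), c.getValue()));
            if (matches) return rule.action();
        }
        throw new IllegalArgumentException("no rule covers " + facts); // a gap in the table is a gap in the spec
    }

    public static void main(String[] args) {
        System.out.println(decide(Map.of("member", false, "largeOrder", true))); // free shipping
    }
}
```

Change the table and the behavior changes with it; there is no second artifact to keep in sync.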
Good specifications are often written at a level where generating executable code from them is undecidable in general and almost always intractable in practice. What would really be nice -- although we have not been able to achieve even that at scale yet -- is a formal refinement relation between implementation and specification (i.e., you can't execute the specification, but you can ensure the implementation stays in sync with it). There are, however, various techniques that can do so approximately [1]. But I believe that even a good spec without any formal tie to the code (which we certainly can do) gives you 90% of the value of a spec that is formally tied to the code (which we don't know how to do just yet).
> Good specifications are often written at a level where generating executable code from them is undecidable in general and almost always intractable in practice.
What about modern dependently-typed meta languages like Idris and program synthesizers like Nadia Polikarpova's work on Synquid?
We don't know how to scale those [1]. It's very easy to specify a simple sorting function in Idris, or in Java with JML, and prove it correct. It's a whole other matter to specify the global properties of an entire system and verify them all the way down to the small bits of code. Various code synthesis tools are even more limited, as they tend to rely heavily on fragile automated reasoning tools like SMT solvers; in any case, both are so many orders of magnitude away from being able to handle real applications that I think comparing them to one another is a bit beside the point.
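For concreteness, this is roughly what such a contract looks like in Java with JML (only a sketch: the permutation clause, which is the part that makes the proof real work, is omitted, and no particular prover is assumed):

```java
public class SortingSpec {
    //@ requires a != null;
    //@ ensures (\forall int i; 0 < i && i < a.length; a[i-1] <= a[i]);
    // (a full contract would also require the output to be a permutation of the
    //  input -- that is the clause that turns this into real proof work)
    public static void sort(int[] a) {
        java.util.Arrays.sort(a); // any implementation; the obligation is to prove it meets the contract above
    }
}
```

Specifying one such method is easy; the claim above is that doing this for every piece of a large system, with specs that compose into the global properties you actually care about, is what nobody knows how to scale.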
The biggest programs ever written and verified with deductive proofs -- whether in dependently typed languages or with any other deduction-based verification -- correspond to at most ~10,000 lines of C, took years of work by academic experts, and had to be significantly simplified to make the proofs easier (i.e., they had to use only the simplest data structures, etc.). In contrast, business applications are 2-3 orders of magnitude bigger.
[1]: And when I say "we don't know", I mean that it is possible that some time in the next 2-5 decades we will be able to do something, but it's also possible that there are fundamental complexity limitations which mean that this particular path will never lead to anything like what we imagine it could.
I've been thinking a bit about model-driven development. I think what you say is true, but at a bare minimum the specification should be machine readable.
Take the IRC RFC, for instance. A lot of it is amenable to code generation, but machine-parsing the spec is very hard (I know... I tried, many moons ago). I feel like a big chunk of that spec could be written in such a way that a big part of an IRC client or server could be generated from it. For other parts, a generator could at least create boilerplate to be filled in by a human, and maybe some automatic tests.
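To make the "generate a big part of it" idea concrete: the RFC 1459 message shape ("[':' prefix SPACE] command params [SPACE ':' trailing]") is mechanical enough that a parser along these lines could in principle be emitted from a machine-readable form of the grammar. This is only an illustrative hand-written sketch, not generated code:

```java
import java.util.*;

public class IrcMessage {
    final String prefix;        // may be null
    final String command;       // e.g. "PRIVMSG" or a three-digit numeric
    final List<String> params;

    IrcMessage(String prefix, String command, List<String> params) {
        this.prefix = prefix; this.command = command; this.params = params;
    }

    static IrcMessage parse(String line) {
        String rest = line;
        String prefix = null;
        if (rest.startsWith(":")) {              // optional prefix
            int sp = rest.indexOf(' ');
            prefix = rest.substring(1, sp);
            rest = rest.substring(sp + 1);
        }
        String trailing = null;
        int colon = rest.indexOf(" :");
        if (colon >= 0) {                        // optional trailing parameter
            trailing = rest.substring(colon + 2);
            rest = rest.substring(0, colon);
        }
        String[] tokens = rest.split(" +");
        List<String> params = new ArrayList<>(Arrays.asList(tokens).subList(1, tokens.length));
        if (trailing != null) params.add(trailing);
        return new IrcMessage(prefix, tokens[0], params);
    }

    public static void main(String[] args) {
        IrcMessage m = parse(":nick!user@host PRIVMSG #chan :hello there");
        System.out.println(m.command + " " + m.params); // PRIVMSG [#chan, hello there]
    }
}
```

Much of the rest of the RFC is prose about when a server may send what, and that is the part that resists machine parsing.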
> but at a bare minimum the specification should be machine readable.
I don't think it's the bare minimum, as an informal specification is far better than none at all, but I agree that a formal specification is better -- at least in the more complicated cases -- as it forces precision and also allows the use of tools that could check it in various ways.
> Good specifications are often written at a level where generating executable code from them is undecidable in general and almost always intractable in practice [...] (which we don't know how to do just yet).
Sure, but I'm talking about this article, which is about decision tables. We don't have to spec the entire program in an executable way. We can just take part of the logic, encode it as a decision table, and declare that this is the spec. Then we read it into the program and execute it. Nothing intractable about that! (Or if there is, then no self-consistent spec is possible in the first place.)
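A sketch of that (the file name and line format here are invented): keep the table in a file that whoever owns the business rules can read and edit, load it at startup, and hand the rows to a generic first-match interpreter, so the spec and the logic are literally one artifact.

```java
import java.nio.file.*;
import java.util.*;

public class TableAsSpec {
    // Parse one made-up line format, e.g. "member=false;largeOrder=true -> free shipping".
    // The program never hard-codes the rules; it only interprets whatever the file says.
    static List<Map.Entry<Map<String, Boolean>, String>> load(Path file) throws Exception {
        List<Map.Entry<Map<String, Boolean>, String>> rules = new ArrayList<>();
        for (String line : Files.readAllLines(file)) {
            if (line.isBlank() || line.startsWith("#")) continue;   // skip comments and blank lines
            String[] rowAndAction = line.split("->");
            Map<String, Boolean> conditions = new HashMap<>();
            for (String pair : rowAndAction[0].split(";")) {
                String[] kv = pair.split("=");
                conditions.put(kv[0].trim(), Boolean.parseBoolean(kv[1].trim()));
            }
            rules.add(Map.entry(conditions, rowAndAction[1].trim()));
        }
        return rules; // feed these rows to a generic first-match interpreter
    }
}
```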
Encoding a decision table is the boring, easy part, even if you do it in assembly. But that's not what the post is about. It's about the hard part: figuring out what the decisions should be. In many, many cases (most cases, I would say), writing the code is the easiest, least interesting part of developing software.
But I totally agree! We shouldn't even write code for a decision table, because it's generic code which doesn't need to be adapted at all. Just use an off-the-shelf decision table implementation.
But I don't agree that encoding the decision table is boring and easy -- not if the business logic is complex enough to require a spec. The good news is that now the table itself is all we need -- no hand-written code, and no separate spec.
I once had an argument about that. A very smart colleague of mine claimed to have specified a very complete natural-language tense system in Mercury, sort of hard-coded into the language, if I understood him correctly. He claimed that it was way better than what general semanticists had come up with, and that the use of logic in semantics is totally superfluous.
My point was that it doesn't matter how good his system is: as long as he doesn't separate declaration from implementation, it will be of almost no use. Nobody will be able to run his Mercury program 100 years from now. Then again, it's not possible to run my Higher-Order Logic with Henkin models and Categorial Grammar out of the box either.
So I'm still not sure who had the better argument. Maybe if researchers could agree on a common programming language, we could indeed dispense with descriptive formalisms?