It's fun to think about useful languages that are not computationally complete. I am writing my PhD thesis about Datalog used as a general purpose programming language, specifically for robots and microcontrollers. And you know what? You don't need to be able to simulate a Turing machine for a lot of tasks. On the other hand, you have guaranteed termination.
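The termination guarantee is easy to see in miniature. Here's a sketch (not from the thesis, just an illustration) of naive bottom-up evaluation of a two-rule Datalog program for transitive closure. Because every derivable fact is built from a finite set of constants, the fixpoint loop must stop after finitely many iterations:

```python
# Naive bottom-up evaluation of a tiny Datalog program:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- edge(X, Y), path(Y, Z).
# Termination is guaranteed: all facts are drawn from a finite
# set of constants, so the fixpoint is reached in finitely many steps.

edge = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(edge):
    path = set(edge)  # path(X,Y) :- edge(X,Y).
    while True:
        # Apply the recursive rule to everything derived so far.
        new = {(x, z) for (x, y) in edge for (y2, z) in path if y == y2}
        if new <= path:  # fixpoint: nothing new is derivable
            return path
        path |= new

print(sorted(transitive_closure(edge)))
```

No such argument exists for a Turing-complete language, where the analogous loop may simply never reach a fixpoint.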
Excel (without scripting) uses a terminating (and therefore incomplete) model of computation and it still manages to do a lot of heavy lifting in the real world.
I wholeheartedly agree with the author that declarative languages and domain-specific languages are great, and even if they do not have a (hidden) embedded Lisp, they can still be useful.
Once you are in a computationally complete language (and you don't just use a subset of language features that is "easy"), most problems are generally undecidable. Turns out, that's a bad property for static analysis.
If you don't use recursive CTEs in SQL you have a guaranteed-terminating subset. In practice, though, that guarantee doesn't help much, because time to terminate can be effectively unlimited depending on (in any combination):
- not having indexes
- a bad execution plan
- an overcomplex query (these tend to kill the optimiser)
- bad distribution stats (again leading to a bad plan)
- a large dataset
- and maybe others.
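For what it's worth, even the recursive escape hatch is tamer than it looks in some engines. A small sketch using Python's built-in sqlite3: with `UNION` the recursive CTE deduplicates rows, so reachability terminates even on a cyclic graph, whereas `UNION ALL` on the same data would loop forever. (The schema and data here are made up for illustration.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE edge(src TEXT, dst TEXT);
    INSERT INTO edge VALUES ('a','b'), ('b','c'), ('c','a');  -- note the cycle
""")

# UNION deduplicates, so the recursion reaches a fixpoint despite the
# cycle; swapping in UNION ALL would recurse without bound.
rows = con.execute("""
    WITH RECURSIVE reach(n) AS (
        SELECT 'a'
        UNION
        SELECT e.dst FROM edge e JOIN reach r ON e.src = r.n
    )
    SELECT n FROM reach ORDER BY n
""").fetchall()

print([n for (n,) in rows])
```

Which, of course, says nothing about how long a terminating query takes, which is the point above.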
Practically speaking... well, IDK, how do you reconcile that with your work? It's a real puzzler for me (language theory is an interest of mine, but SQL is what brings home the bacon at work), so your input will be very welcome.
>> It's fun to think about useful languages that are not computationally complete. I am writing my PhD thesis about Datalog used as a general purpose programming language, specifically for robots and microcontrollers.
Interesting. What is the general area of your thesis (your field of research, that is)? I study approaches that learn Datalog (or Prolog) programs from examples.
I have a variant of Datalog with an explicit model of time (like Statelog or Dedalus) and action atoms/external atoms from HEX (ASP). Then I extract partial programs that provably use finite memory and construct finite state machines. Combined with a bit of runtime framework, I can compile to C, which I actually run on Arduino or Lego EV3.
Usually something I call a "decision core" is embedded in every interactive program: the part that decides what to do with input. Sometimes this is just an if-else tree, sometimes a statistical model, sometimes a logic program. Execution is done in an imperative fashion.
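To make the "decision core" idea concrete, here is a hypothetical minimal sketch (not the thesis machinery, just the shape of the pattern): a pure decision function, here an if-else tree, separated from the imperative loop that carries out the chosen action. The sensor names and thresholds are invented for illustration.

```python
# Hypothetical decision core: a pure function from sensor readings to an
# action symbol. It could equally be a compiled logic program or a
# statistical model; the imperative loop around it stays the same.

def decide(distance_cm: float, battery_pct: float) -> str:
    if battery_pct < 10:
        return "dock"      # low battery overrides everything
    if distance_cm < 20:
        return "turn"      # obstacle ahead
    return "forward"

# Imperative execution around the core (stubbed sensor readings).
for reading in [(100, 80), (15, 80), (50, 5)]:
    action = decide(*reading)
    print(action)
```

Keeping the core pure is what makes it a candidate for replacement by a terminating logic program.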
Usually on microcontrollers you don't have enough space to have a control system and a logic program. So I extend LP with IO and compile efficient code.
Partial programs are basically extracted using the Datalog research on parallelism and distribution over components. Queries that can be evaluated in parallel/distributed can be compiled to separate programs. And if the program is bounded, through an abstract execution we can basically generate all possible extensions (parameterized) of the minimal model.
Where the proof cannot be obtained, we compile a small embedded deductive database.
I'll have to look at the stuff you've linked. Interesting and adjacent. Is the GitHub in your profile an appropriate way to contact you? I'll shoot you an email in a few days.