Hacker News

You haven't countered the parent's point at all. You could've just as easily said that the common subset of SQL could be implemented extremely differently in SQL Server, Oracle, Postgres, etc. Therefore, declarative SQL has no advantages over imperative C APIs for database engines. Funny stuff.

Let's try it then. The declarative build system has a formal spec with types, files, modules, ways of describing their connections, platform-specific definitions, and so on: enough to cover arbitrary systems while remaining decidable during analysis. There's also a defined ordering of operations on these things, kind of like how Prolog has unification or old expert systems had RETE. This spec could even be embodied in a reference implementation in a high-level language plus a test suite. Then each implementation you mention, from rdmake to hdmake, is coded and tested against that specification for functional equivalence. We now have a simple DSL for builds that checks them for many errors and automagically handles them on any platform. It might even include versioning with rollback in case anything breaks due to inevitable problems. A higher-assurance version of something like this:

https://nixos.org/nixos/about.html
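To make the "decidable during analysis" idea concrete, here's a toy sketch (the spec shape and all names are invented for illustration, not taken from Nix or any real tool): because the spec is plain data rather than arbitrary code, a checker can reject dangling dependencies before anything executes.

```python
# Hypothetical declarative build spec with platform-specific definitions,
# validated before any build step runs. All names are invented.

SPEC = {
    "platforms": {"linux": {"cc": "gcc"}, "darwin": {"cc": "clang"}},
    "modules": {"core": {"sources": ["core.c"], "deps": []},
                "app":  {"sources": ["app.c"],  "deps": ["core"]}},
}

def check(spec):
    """Statically reject dangling deps -- analysis stays decidable
    because the spec is data, not code to be executed."""
    errors = []
    for name, mod in spec["modules"].items():
        for dep in mod["deps"]:
            if dep not in spec["modules"]:
                errors.append(f"{name}: unknown dependency {dep!r}")
    return errors

print(check(SPEC))  # [] -- spec is consistent
```

The point of the sketch is only that every error class you can express over the data model can be checked up front, the way an RDBMS validates SQL before running it.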

Instead, we all use make. Make allows arbitrary code and configurations to be executed, so no matter what configuration problems you have, we can all use a familiar build tool that everybody knows. That's the power of generic, unsafe tools following the Worse is Better approach. Gives us great threads like this. :)




From the perspective of security, make is not great, but there's always a more complicated build requiring either generic tooling or very complex specific tooling. This is why the JS ecosystem is always reinventing the wheel. If you design your build tool around one abstraction, there will always be something that doesn't fit. What will happen if we build a tool like the one you described is that it will grow feature upon feature until it's a nightmarish mess that nobody completely understands.

>You could've just as easily said that the common subset of SQL could be implemented extremely differently in SQL Server, Oracle, Postgres, etc. Therefore, declarative SQL has no advantages over imperative C APIs for database engines. Funny stuff.

No, that's not my point; my point is that a build tool that meets the parent's requirements would necessarily be non-generic, and that such a tool would suffer as a result.

>Instead, we all use make. Make allows arbitrary code and configurations to be executed, so no matter what configuration problems you have, we can all use a familiar build tool that everybody knows. That's the power of generic, unsafe tools following the Worse is Better approach. Gives us great threads like this. :)

Worse is Better has nothing to do with this. Really. Make is very Worse is Better in its implementation, but the idea of generic vs. non-generic build systems, which is what we're discussing, is entirely orthogonal to Worse is Better. If you disagree, I'd recommend rereading Gabriel's essay ("Lisp: Good News, Bad News, How to Win Big", for the uninitiated). I'll never say that I'm 100% sure that I'm right, but I just reread it, and I'm pretty sure.


"No, that's not my point; my point is that a build tool that meets the parent's requirements would necessarily be non-generic, and that such a tool would suffer as a result."

A build system is essentially supposed to take a list of things, check dependencies, do any platform-specific substitutions, build them in a certain order with specific tools, and output the result. Declarative languages handle things more complicated than that. Here are some examples:

https://cs.nyu.edu/~soule/DQE_pt1.pdf

I also already listed one (Nix) that handles a Linux distro. So it's not a question of theory so much as of how much remains to be solved/improved, and whether methods like those in the link can cover it. What specific problems in building applications do you think an imperative approach can handle that something like Nix or the techniques in that PDF can't?
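The "check dependencies, build in a certain order" loop described above is essentially a topological sort over declared targets. A minimal sketch, assuming a hypothetical spec format of target -> list of dependencies:

```python
# Toy dependency resolver: returns a valid build order, rejecting
# cycles and undeclared targets. The spec format is hypothetical.

def topo_order(targets):
    """targets: dict mapping target name -> list of dependency names."""
    order, seen, in_progress = [], set(), set()

    def visit(name):
        if name in seen:
            return
        if name in in_progress:
            raise ValueError(f"dependency cycle at {name!r}")
        if name not in targets:
            raise ValueError(f"undeclared target {name!r}")
        in_progress.add(name)
        for dep in targets[name]:
            visit(dep)
        in_progress.discard(name)
        seen.add(name)
        order.append(name)

    for name in targets:
        visit(name)
    return order

spec = {"app": ["lib.o"], "lib.o": ["lib.c"], "lib.c": []}
print(topo_order(spec))  # ['lib.c', 'lib.o', 'app'] -- leaves first
```

Everything here is decidable analysis over data; only the per-target build actions themselves need any notion of execution.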


...Nix actually uses SHELL for builds. Just like make. It's fully generic.

http://nixos.org/nix/manual/#sec-build-script


Didn't know that. Interesting. It looks like an execution detail: something you could do with any imperative function, but why not use what's already there for this simple action? Nix also manages the execution of those scripts to integrate them with its overall approach. Makes practical sense.

"It's fully generic."

It might help if you define what you mean by "generic." You keep using that word. I believe declarative models handle... generic... builds, given that you can describe about any of them with a suitable language. I think imperative models also handle them. To me, it's irrelevant: the issue is that declarative has benefits and can work to replace existing build systems.

So, what's your definition of generic here? Why do declarative models not have it in this domain? And what else do declarative models with imperative plugins/IO-functions lack for building apps that a fully imperative model (incl. make) does better? Get to specific objections so I can decide whether to drop the declarative model for build systems or find answers/improvements to the stated deficiencies.


That wasn't what the original post by ashitlerferad was calling for. I have no problem with generic, declarative-model build systems that can be used for anything. However, the original call was for build systems which don't require arbitrary code execution. A generic build system must deal with many different tools and compilers, and thus REQUIRES arbitrary code execution: somewhere, there's got to be a piece of code telling the system how to build each file. And if you don't build that into the build system proper, you wind up either integrating everything into core, or adding an unwieldy plugin architecture and winding up like grunt/gulp and all the other Node build systems. Or you could just allow for arbitrary code execution and dodge the problem altogether. This is possible in a declarative system, but it's a lot harder to do, and it means at least part of your system must be imperative.


It seems some kind of arbitrary execution is necessary. I decided to come back to the problem out of curiosity, to see if I could push that toward declarative or logic to gain its benefits. This isn't another argument so much as a brainstorm pushing the envelope. I could speculate all day, but I came up with a cheat: it would be true if anyone had replaced make or other imperative/arbitrary pieces with Prolog/HOL equivalents, where the vast majority of the effort outside I/O calls and the runtime itself would be declarative. Found these:

http://www.cs.vu.nl//~kielmann/papers/THD-SP-1991-04.pdf

https://github.com/cmungall/plmake

Add to that Myreen et al's work extracting provers, machine code, and hardware from HOL specs, plus the FLINT team doing formal verification of OS components (incl. interrupts and I/O), plus seL4/Verisoft doing kernels/OSes, and you find the declarative, logic part could go from a Nix-style tool down to a logic-style make, down to the reactive kernel, drivers, machine code, and the CPU itself. The only thing doing arbitrary execution, as opposed to arbitrary specs/logic, in such a model is whatever runs the first tool extracting the CPU handed off to the fab (ignoring non-digital components or the PCB). Everything else is done in logic, with checks done automatically, configs/actions/code generated deterministically from declarative input, and final values extracted to checked data/code/transistors.

How's that? Am I getting closer to replacing arbitrary makes? ;)


...I'm not sure I totally understand. Here's how I'd solve the problem:

Each filetype is accepted by a program. That program is what we'll want to use to compile or otherwise munge that file. So, in a file somewhere in the build, we put:

  *.c:$CC %f %a:-Wall
  *.o:$CC %f %a:-Wall
And so on. The first field is a glob to match on filetype, %f is the filename, %a is the args, and the third field is default args, added to every call.

The actual DMakefile looks like this:

  foo:foo.c:-o foo
  bar.o:bar.c:-c
  baz.o:baz.c:-c
  quux:bar.o baz.o:-o quux
  all:foo quux
Target all is run if no target is specified. The first field is the target name. The second field is a list of files/targets of the same type, to be provided to the compiler on run. It is assumed the target and its resultant file have the same name. The last field is a list of additional args to pass to the compiler.

This is something I came up with on the spot, and there are certainly holes in it, but something like that could declarativise the build process. However, this doesn't cover things like cleaning the build environment. Although that could be achieved by removing the resultant files of all targets, which could be determined automatically...
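For what it's worth, a minimal interpreter for the rule syntax sketched above might look like this. It's a toy, since the DMakefile format was invented on the spot in this thread, and `$CC` is left unexpanded rather than resolved from the environment:

```python
# Toy interpreter for the hypothetical "glob:template:default-args"
# filetype rules described above.
import fnmatch

FILETYPES = [("*.c", "$CC %f %a", "-Wall"),
             ("*.o", "$CC %f %a", "-Wall")]

def command_for(target, inputs, extra_args):
    """Build the command line for one target by matching the first
    input against the filetype globs, then filling %f and %a."""
    for glob, template, default_args in FILETYPES:
        if fnmatch.fnmatch(inputs[0], glob):
            args = f"{default_args} {extra_args}".strip()
            return (template.replace("%f", " ".join(inputs))
                            .replace("%a", args))
    raise ValueError(f"no filetype rule for {inputs[0]!r}")

# Corresponds to the DMakefile lines "foo:foo.c:-o foo" and
# "quux:bar.o baz.o:-o quux".
print(command_for("foo", ["foo.c"], "-o foo"))
# $CC foo.c -Wall -o foo
print(command_for("quux", ["bar.o", "baz.o"], "-o quux"))
# $CC bar.o baz.o -Wall -o quux
```

Combined with dependency ordering over the target graph, even this much covers the examples in the DMakefile sketch without any arbitrary code in the spec itself.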


There you go! Nice thought experiment. Looks straightforward. Also, individual pieces, as far as parsing goes, could be auto-generated.

As far as what I was doing, I was just showing they'd done logical, correct-by-construction, generated code for everything in the stack up to the OS, plus someone had a Prolog make. That meant about the whole thing could be done declaratively and/or correct-by-construction, with the result extracted with basically no handwritten or arbitrary code. That's the theory, based on worked examples. A clean integration obviously doesn't exist. The Prolog make looked relatively easy, though. The Mercury language would make it even easier/safer.


Well, thanks for clarifying. That makes a lot more sense.




