Consider trying redo[0]. It's an idea of D. J. Bernstein (a.k.a. djb), which by itself could already be a good advertisement.
Your problem can be solved with make, as others have pointed out, but this is a wonderful example where redo's target files describe quite clearly what redo can do.
redo's target files are usually shell scripts, but they can be anything the kernel can execute. `redo-ifchange file1` is a command that waits until file1 has been rebuilt; in other words, it waits until file1's target file has been executed, if a rebuild is required.
There are four target files that show how to solve your problem: downloading and merging two files. foo.json.do is:
curl "http://example.com/foo.json" # if this fails, redo won't update foo.json (unlike make, which can leave a partially written target behind).
bar.json.do is:
curl "http://example.com/bar.json"
After creating these files you can run `redo all` (or just `redo`); it builds the dependency graph and executes targets in parallel, so foo.json and bar.json are downloaded at the same time.
I'd recommend getting started with the Go implementation of redo, goredo[1] by stargrave. Its website also links to documentation, an FAQ, and other implementations.
I'll note that nq's author also has a redo implementation¹. Being generally redo-curious, I've wondered a few times why their other projects (nq/mblaze/etc.) don't use redo, but I never actually asked.
What you're looking for is in the class of tools known as batch schedulers. Most commonly these are used on HPC clusters, but you can use them on any size machine.
There are a number of tools in this category, and as others have mentioned, my first try would be Make, if that is an option for you. However, I normally work on HPC clusters, so submitting jobs is incredibly common for me. To keep that workflow without needing to install SLURM or SGE on my laptop (which I've done before!?!?), my entry into this mix is here: https://github.com/compgen-io/sbs. It is a single-file Python3 script.
My version is set up to run on only one node, but you can have as many worker threads as you need. For what you asked for, you'd run something like this:
$ sbs submit -- wget http://example.com/foo.json
1
$ sbs submit -- wget http://example.com/bar.json
2
$ sbs submit -afterok 1:2 -cmd jq -s '.[0] * .[1]' foo.json bar.json
$ sbs run -maxprocs 2
This isn't heavily tested code, but it works for the use case I had (having a single-file batch scheduler for when I'm not on an HPC cluster, and testing my pipeline definition language). Normally, it assumes the job parameters (CPUs, memory, etc.) are present as comments in a submitted bash script, as is the norm in HPC-land; there's a sketch of that convention below. However, I also added an option to directly submit a command. stdout/stderr are all captured and stored.
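For readers who haven't seen that convention, here's roughly what such a script looks like, using SLURM's #SBATCH directive style for illustration (sbs's own directive names may differ, and merge.sh and the resource values are made up):
#!/bin/bash
#SBATCH --cpus-per-task=1
#SBATCH --mem=512M
# merge the two downloads once both dependencies have finished
jq -s '.[0] * .[1]' foo.json bar.json > merged.json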
The job runner also has a daemon mode, so you can keep it running in the background if you'd like to have things running on demand.
Installation is as simple as copying the sbs script someplace in your $PATH (with Python3 installed). You should also set the environment variable $SBSHOME if you'd like to have a common place for job state. For example (the paths here are just placeholders):
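$ cp sbs ~/bin/           # any directory already in your $PATH
$ export SBSHOME=~/.sbs   # optional: a common place for job state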
The usage is very similar to many HPC schedulers...
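For comparison, the -afterok flag above is modeled on SLURM's dependency handling; the rough SLURM equivalent of the third submit would be something like this (merge.sh being a hypothetical wrapper script around the jq command):
$ sbatch --dependency=afterok:1:2 merge.sh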
I've used (and installed) PBS, SGE, and SLURM [1]. Most of the clusters I've used recently have all migrated to SLURM. Even though it's pretty feature-packed, I've found it "easy enough" to install for a cluster.
What is the sales pitch for OAR? Any particularly compelling features?
I imagine that, in theory, Snakemake, which handles dependency graph resolution, could be used to compute the dependencies, and its flexible scheduler could then call nq.
OTOH, if you're just working on one node, skip nq and use Snakemake as the scheduler as well.
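A minimal sketch of that one-node variant, reusing the two-file example from this thread (this is my illustration, not a tested pipeline):
rule all:
    input: "merged.json"

rule foo:
    output: "foo.json"
    shell: "curl -o {output} http://example.com/foo.json"

rule bar:
    output: "bar.json"
    shell: "curl -o {output} http://example.com/bar.json"

rule merge:
    input: "foo.json", "bar.json"
    output: "merged.json"
    shell: "jq -s '.[0] * .[1]' {input} > {output}"
Running `snakemake -j 2` would then fetch both files in parallel before merging.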
I guess some slight tweaks for task persistence and a CLI wrapper around it could let you achieve this (although I don't leverage Ractors, so no true parallelism yet).
Anyway, it still does not have an "official" release, nor a stable API, although the code works well and is fully tested, as far as I can tell. I might consider providing such a wrapper myself in the future, as I can definitely see its utility, but time is short nowadays.
Something like