Any advice for learning to read the PL math notation? The PL papers I've seen normally take it for granted, so I don't even know what it's called to search for it!
If you're talking about the inference rules used to specify the type system of a programming language, one of Wikipedia's[0] references is[1], which from a quick skim seems adequate. The main takeaway is that the things above the line are the premises (assumptions), and the thing below the line is the conclusion. Γ is usually used for typing environments (a map from variables to their types), and the turnstile ⊢ can be read as "the context on the left entails the typing information on the right". As a quick example:
Γ ⊢ e1 : Int Γ ⊢ e2 : Int
----------------------------
Γ ⊢ e1 + e2 : Int
Which reads: if e1 has type Int in Γ, and e2 has type Int in Γ, then e1 + e2 also has type Int in Γ. These rules are really bidirectional, which is what lets you perform typechecking: if you need an expression (a + b) to have type Int, you can reduce that goal to two subproblems, typechecking a expecting an Int, and likewise for b.
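To make the "reduce the goal into subproblems" reading concrete, here's a toy checker in Python. It's a minimal sketch, assuming a made-up tuple encoding of expressions ("var", "lit", "add"); it implements exactly the addition rule above by recursing on the two premises:

```python
def check(gamma, expr, expected):
    """Return True iff `expr` has type `expected` under environment `gamma`."""
    kind = expr[0]
    if kind == "var":        # Γ ⊢ x : Γ(x)
        return gamma.get(expr[1]) == expected
    if kind == "lit":        # Γ ⊢ n : Int  (integer literals)
        return expected == "Int" and isinstance(expr[1], int)
    if kind == "add":        # the rule above: both operands must check as Int
        _, e1, e2 = expr
        return (expected == "Int"
                and check(gamma, e1, "Int")
                and check(gamma, e2, "Int"))
    return False

gamma = {"a": "Int", "b": "Int"}
print(check(gamma, ("add", ("var", "a"), ("var", "b")), "Int"))  # True
```

Note how the "add" case is a direct transcription of the inference rule: the conclusion becomes the function's goal, and the premises become recursive calls.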
It was a great refresher for someone who once liked math but hasn't done much of it in ~20 years :) I had seen the blog posts, but there was some "color" in the videos that helped. For example, I didn't realize that the fonts sometimes matter! Honestly, I still don't really read the notation, since I haven't had a strong reason to, but I expect it will be useful at some point.
----
For others, I also recommend this 2017 talk by Guy Steele, "It's Time for a New Old Language":
Because even people in the field seem to have problems with the notation. He was also asked about this work a few days ago here and said he was still working on it in the background (being a "completionist"):
FWIW as you know, Oil is more static than shell, and that was largely motivated by tools and static analysis (and negatively motivated by false positives in ShellCheck https://news.ycombinator.com/item?id=22213155)
I would like to go further in that direction, but getting the basic functionality and performance up to par has taken up essentially 100% of the time so far :-(
My use of Zephyr ASDL was also partly motivated by some vague desire to get the AST into OCaml. However, I haven't used OCaml in quite a while, and I get hung up on small things like writing a serializer and deserializer. I don't want to write one for every type/schema, so it requires some kind of reflection. My understanding is that there are a bunch of packages that do this, like sexplib, but I never got further than that.
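(In OCaml this niche is filled by derivers in the sexplib family; as a language-neutral illustration of the "reflection instead of per-type code" idea, here's a rough Python sketch. The AST node types are invented for the example; the serializer reflects over dataclass fields so no per-schema code is needed.)

```python
from dataclasses import dataclass, fields, is_dataclass

# Hypothetical AST node types for illustration.
@dataclass
class Var:
    name: str

@dataclass
class Add:
    left: object
    right: object

def to_sexp(node):
    """Serialize any dataclass-based AST node to an s-expression string,
    discovering the fields via reflection rather than per-type code."""
    if is_dataclass(node):
        parts = [type(node).__name__] + [
            to_sexp(getattr(node, f.name)) for f in fields(node)
        ]
        return "(" + " ".join(parts) + ")"
    return str(node)

print(to_sexp(Add(Var("a"), Var("b"))))  # (Add (Var a) (Var b))
```

The same trick generalizes to deserialization, which is the direction that actually matters for loading an AST into another system.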
Formulog sounds very nice, so I wonder if there is a recommended way of bridging the gap? For example, imagine you want to load enormous Clang or TypeScript ASTs into Formulog. The parsers alone are 10K-30K lines of code, i.e. it's essentially infeasible to reproduce them in another language in a reasonable time. And even just duplicating the schema is a big engineering issue, since there are so many node types! I could generate the schemas from Zephyr ASDL, but other projects can't. I wonder if you have any thoughts on that, i.e. how to make the work more accessible on codebases "in the wild"?
-----
Also FWIW I mentioned this "microgrammars" work a few days ago because I'm always looking for ways to make things less work in practice :)
Thanks! :) We should be very clear that the bulk of the work is Aaron Bembenek's.
I think Formulog would work great for analyzing the shell, as would any other Datalog, though SMT-based string reasoning will certainly come in handy. I don't think it will help you with parsing issues, though. The general approach to static analysis with Datalog avoids parsing in Datalog itself, relying instead on an EDB ("extensional database": think of it as ground facts about the world, which your program generalizes) to tell you things about the program. See, e.g., https://github.com/plast-lab/cclyzer/tree/master/tools/fact-... for an example of a program that generates EDB facts from LLVM. Just like real-world parsers, these are complicated artifacts.
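(To illustrate the "reuse an existing parser, emit flat facts" pattern: here's a toy fact extractor in Python. The AST encoding and relation names, node_kind/child/node_text, are invented for the sketch; real extractors like cclyzer's are of course far more involved.)

```python
def emit_facts(node, node_id=0, facts=None):
    """Walk a tuple-encoded AST, assign each node a numeric id, and record
    its kind plus parent/child edges as flat EDB tuples."""
    if facts is None:
        facts = []
    kind, *children = node
    facts.append(("node_kind", node_id, kind))
    next_id = node_id + 1
    for i, child in enumerate(children):
        if isinstance(child, tuple):          # interior node: record the edge, recurse
            facts.append(("child", node_id, i, next_id))
            _, next_id = emit_facts(child, next_id, facts)
        else:                                  # leaf payload (e.g. a variable name)
            facts.append(("node_text", node_id, child))
    return facts, next_id

facts, _ = emit_facts(("add", ("var", "a"), ("var", "b")))
for f in facts:
    print("\t".join(map(str, f)))   # tab-separated, the shape Datalog engines ingest
```

The Datalog side then never sees concrete syntax at all, only these relations.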
Ah OK thanks for the link. Since it depends on commercial software, I don't see a path to trying it (which is fine, because I probably don't have time anyway :-/ )
So are you saying that it's more conventional to serialize relations from C++ or Python, rather than serialize an AST as I was suggesting?
Your blog post mentions ASTs too, so I'm not quite clear on that point. I don't have much experience writing such analyzers, and I'd be interested in any wisdom / examples on serializing ASTs vs. relations, and whether the relations sit at the same level of abstraction as the AST or a higher one, etc.
-----
FWIW I read a bunch of the papers by Yannis because I'm interested in experiences of using high level languages in production:
I did get hung up on writing simple pure functions in Prolog. There seems to be a debate over whether unification "deserves" its own first-class language, or whether it should be a library in a bigger language, and after that experience I lean toward the latter. I never really saw the light in Prolog. Error messages were a problem, both for the user of the program and for its developer (me).
So while I haven't looked at Formulog yet, it definitely seems like a good idea to marry some "normal" programming conveniences with Datalog!
I'd say it's conventional to reuse an existing parser to generate facts.
The AST point is a subtle one. Classic Datalog (the fragment that characterizes PTIME computation) doesn't have "constructors" like the ADTs (algebraic data types) we use in Formulog to define ASTs. It doesn't even have records, which Soufflé does. So instead you'll get facts like:
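(For instance, with invented relation names, the expression a + b flattens into a pile of numbered facts, one relation per kind of information:)

```
node_kind(0, "add").
child(0, 0, 1).    node_kind(1, "var").    var_name(1, "a").
child(0, 1, 2).    node_kind(2, "var").    var_name(2, "b").
```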
I'm not sure if that's what you mean by serializing relations. But having ASTs in your language is a boon: rather than needing dozens of EDB relations to store information about your program, you can just say what it is:
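(As a rough sketch, with syntax approximate and the type/relation names invented: an ADT lets the whole expression live in a single term of a single relation.)

```
type exp =
  | e_var(string)
  | e_add(exp, exp)

program_exp(e_add(e_var("a"), e_var("b"))).
```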
As for your point about Prolog, it's a tricky thing: the interface between tools like compilers and the analyses they run is interesting, but not necessarily interesting enough to publish about. So folks just... don't work on that part, as far as I can tell. But I'm very curious about how to have an efficient EDB, what it looks like to send queries to an engine, and other modes of computation that might relax monotonicity (e.g., making multiple queries to a Datalog solver, where facts might start out true in one "round" of computation and then become false in a later "round"). Query-based compilers (e.g., https://ollef.github.io/blog/posts/query-based-compilers.htm...) could be a good place to connect the dots here, as could language servers.
And if you're fine with something much longer than others have suggested, Types and Programming Languages [0] covers the notation, in addition to most of the other knowledge PL papers take for granted.
EDIT: Thank you all for the references :)