I really like Z3 for showing that my weird bitwise hacks (e.g. [0]) are correct -- it's especially useful that I can show not only that a function is correct, but also that it approximates the correct answer to within whatever bounds I like. Plus, I can put it in a script, and shove it into the docs and tests easily. (Showing that the code actually corresponds to the Z3 equations needs symbolic execution, which I haven't set up yet though...)
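For a flavor of what that looks like, here's a minimal sketch in z3-python (not my actual script): prove a classic overflow-free average hack exactly, then bound a cheaper approximation to within 1 of the true answer.

    from z3 import BitVec, LShR, ZeroExt, UDiv, ULE, prove

    x, y = BitVec('x', 32), BitVec('y', 32)

    # Widen to 33 bits so the "true" unsigned average can't overflow.
    true_avg = UDiv(ZeroExt(1, x) + ZeroExt(1, y), 2)

    # Exact: (x & y) + ((x ^ y) >> 1) is the unsigned average of x and y.
    hack = (x & y) + LShR(x ^ y, 1)
    prove(ZeroExt(1, hack) == true_avg)           # prints "proved"

    # Approximate: (x >> 1) + (y >> 1) undershoots by at most 1.
    approx = LShR(x, 1) + LShR(y, 1)
    prove(ULE(true_avg - ZeroExt(1, approx), 1))  # prints "proved"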
I don't think it specifically states that this is z3-python (not a nitpick), which has an added bonus: you can handle the SSA requirements if you're allocating inside a finite loop in the higher-level language.
I was stuck on that at one point in the past, and then realized that not all of the Python code is in the scope of the solver, which is what lets you automate those state additions.
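To make the trick concrete, here's a sketch (my own toy example, not the parent's code): the Python loop runs outside the solver, so it can mint a fresh variable per iteration and chain them with constraints, SSA-style, instead of mutating a single symbolic variable.

    from z3 import Int, Solver

    s = Solver()
    acc = Int('acc_0')
    s.add(acc == 0)

    # Unroll a finite loop (acc += i for i in 1..5), one fresh name per step.
    for i in range(1, 6):
        nxt = Int(f'acc_{i}')
        s.add(nxt == acc + i)
        acc = nxt  # Python rebinding; the solver keeps every version

    s.add(acc == 15)  # 1+2+3+4+5
    print(s.check())  # sat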
To explore Z3 projects, I sometimes just browse the tags on Github [0]. I'm not aware of a better way to get an overview. However, I know Z3 is used a lot in industry for verification and resource-allocation tasks.
Personally, I have made extensive use of Z3 to find bugs in domain-specific language (DSL) compilers [1]. You can use Z3 to encode the source code before and after a compiler optimization as a logic formula and then check whether the two formulae are equivalent. If they are not, there is likely a semantic bug in the transformation pass of your compiler: a subtle logic mistake introduced by the optimization.
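A toy version of that check, assuming a (hypothetical) buggy pass that rewrites signed division by 2 into an arithmetic shift:

    from z3 import BitVec, Solver, sat

    x = BitVec('x', 32)
    s = Solver()

    # For BitVecs, / is signed division and >> is an arithmetic shift.
    # The rewrite x / 2 -> x >> 1 is wrong for negative odd x, because
    # division truncates toward zero while the shift rounds toward -inf.
    s.add(x / 2 != x >> 1)

    if s.check() == sat:
        # A counterexample: some negative odd x (printed as unsigned).
        print('not equivalent:', s.model())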
This style of verification really works best if your programming language is sufficiently restricted (not Turing-complete) or if you pick isolated segments in your code (e.g., functions). If that is the case, these methods can be extremely effective. We have found many semantic bugs in a short time period. In our case, Z3 was not even the bottleneck, performance-wise. It was our language interpreter. And because the language is sufficiently restricted, we can even run our tool as part of the compiler CI. So whenever a pull request is made, we can check whether it may lead to subtle issues in program translation.
Another useful aspect of using Z3 to represent a programming language is that you can also generate test cases for your programs. As far as I know, the Rosette [2] framework is intended to make all of this more approachable.
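Test-case generation falls out of the same encoding: assert the path condition for a branch and ask Z3 for a model. A tiny sketch with a made-up branch condition:

    from z3 import Int, Solver, sat

    x = Int('x')
    s = Solver()
    # Path condition for a branch like: if x > 10 and x % 7 == 3: ...
    s.add(x > 10, x % 7 == 3)
    if s.check() == sat:
        print('test input:', s.model()[x])  # e.g. 17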
Here [1] is a cool project that uses Z3 to make graphics specified by constraints.
I remember reading an article about someone using Z3 to optimize the placement of cutout drawings on a piece of paper (kind of like a spritesheet), minimizing the total area in order to reduce the use of paper/materials. However, I can't seem to find the article anywhere.
One more use of Z3 is as one of the available solvers for a tool called gnatprove [2], which is used to prove/verify the correctness of programs written in the SPARK subset of the Ada programming language. SPARK is used to program safety-critical aeronautical/military projects.
"practical uses" tends to mean whatever the speaker wants it to mean, so no guarantees, but i think it covers a number of small useful tasks, in addition to many of the common code golf type things.
angr (http://angr.io/) uses Z3 heavily to perform concolic execution, which lets you do things like load up a block of code or an entire program and ask, symbolically, questions like “to get to this point in the binary, what kind of input is necessary?” We use it a lot for CTFs (“the input is of the format ‘flag{[16 ASCII characters]}’, tell me which ones result in the output including the win text”) and security research (“we ran this program symbolically and can prove that this loop accesses only these addresses on all possible executions”).
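The workflow looks roughly like this (the binary name and addresses here are hypothetical, but the API calls are real angr):

    import angr

    proj = angr.Project('./challenge', auto_load_libs=False)
    state = proj.factory.entry_state(stdin=angr.SimFile('stdin', size=24))
    simgr = proj.factory.simulation_manager(state)

    # "To get to this point in the binary, what kind of input is necessary?"
    simgr.explore(find=0x401337, avoid=0x401360)
    if simgr.found:
        print(simgr.found[0].posix.dumps(0))  # a concrete input that gets there

Under the hood, the path constraints collected along the way are handed to Z3, which either produces a satisfying input or proves the target unreachable.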
The XData project[1] at IIT Bombay is an interesting use case. The problem is to figure out whether the query a student wrote is semantically equivalent to the one written by the professor. There is too much variation, and there are too many syntactic differences between databases, to do equivalence testing using just syntax analysis.
Instead, Z3 is used to generate a small test data set, given the schema and the query. The core idea is to make sure the data set smokes out any mutation to the query. That is, if I use a join when the prof intended an outer join, or '<' instead of '<=', Z3 is used to generate a data set that will yield different results for the original and mutated queries.
Once a sufficiently robust data set is created that covers (an extensive list of) common errors, it can be used to vet student queries automatically. Interestingly, the system can award partial marks as well.
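The '<' vs '<=' mutation is easy to reproduce in miniature with z3-python (a sketch, not XData's actual encoding): ask the solver for a row on which the two predicates disagree.

    from z3 import Int, Solver, sat

    age = Int('age')
    s = Solver()
    # Original predicate: age < 18; mutant: age <= 18.
    s.add((age < 18) != (age <= 18))
    if s.check() == sat:
        print('distinguishing row:', s.model())  # age = 18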
I used it to do some tax optimization around ISOs at a newly public company (https://gitlab.com/mbryant/taxoptimizer/-/blob/master/amt.py). It gets pretty slow when you try to look more than a few years out, so I'm guessing writing some optimization logic by hand would've made more sense. Using Z3 here definitely beat me trying to do this on paper like I've seen other people do!
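For anyone curious about the general shape (the linked script is the real thing; every number below is a made-up stand-in), z3's Optimize interface lets you state the constraints and then maximize an objective:

    from z3 import Int, Optimize

    exercised = Int('exercised')   # hypothetical: ISOs exercised this year
    opt = Optimize()
    opt.add(exercised >= 0, exercised <= 1000)
    amt_hit = exercised * 30       # toy stand-in for the AMT calculation
    opt.add(amt_hit <= 12000)      # stay under some AMT threshold
    opt.maximize(exercised)
    print(opt.check(), opt.model())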
It's more likely to be used by someone who then implements a practical tool on top of it. I used it to prove relational properties of WASM programs, which is useful to prove security properties like constant-time or differential privacy. You can also use it to prove program equivalence, which I think would be neat for testing a compiler with.
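A relational property in miniature (my own toy example, not the WASM work): run "two copies" of a function and prove the output never depends on the secret input, i.e. noninterference.

    from z3 import BitVec, prove

    def f(pub, sec):
        # Toy program whose result should depend only on pub.
        return (pub ^ sec) ^ sec

    pub = BitVec('pub', 32)
    s1, s2 = BitVec('s1', 32), BitVec('s2', 32)
    # Same public input, any two secrets => same output.
    prove(f(pub, s1) == f(pub, s2))  # prints "proved"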
Art of Computer Programming Fascicle 5 and Fascicle 6. Maybe Volume 4A (aka: Fascicles 1 through 4), but Volume 4A is more about "regular" combinatorics (Gray codes, LFSRs, etc. etc.), which don't need a complicated solver. Still, having Volume 4A as reference material is useful when reading Fascicles 5 and 6 (since combinations / permutations / partitions / trees / graphs come up with regularity in Fascicles 5 and 6).
Fascicle 5 discusses "backtrack programming" and has some practical applications of it. Yeah yeah, you get N-Queens (probably no practical application), but also comma-free codes (obvious applications to communications / framing). Overall, Fascicle 5 builds up to the covering problem (https://en.wikipedia.org/wiki/Covering_problems) and practical applications of "covering problems with color" (a new category of problems proposed by Knuth, combining coloring problems and covering problems in one specification, since his "Dancing Links + Algorithm X" seems to solve covering problems with color very efficiently).
Volume 4A (which contains what was formerly known as Fascicles 1 through 4) contains a huge chapter on bitwise hacks, which, while solved directly in Volume 4A, "could be solved" with an SMT / SAT solver like Z3.
Fascicle 6 is Knuth's dedicated section on SAT solvers. It's on my to-do list... I haven't really done much with the book yet. I'll probably get to it after I finish Fascicle 5 (which seems more applicable to what I'm trying to do).
--------
All of these "puzzles" are probably NP-complete, and therefore "translate" to each other very easily. Covering problems are provably NP-complete, for example, and therefore can be solved with backtracking (aka: Dancing Links + Algorithm X) or a SAT solver like Z3.
SAT specifications (which Z3 solves directly) can also be converted into a covering problem, and therefore solved with Dancing Links / Algorithm X.
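To make the correspondence concrete, here's a tiny exact-cover instance encoded as SAT for Z3 (a sketch, obviously not Dancing Links): pick subsets so that each element of {1, 2, 3, 4} is covered exactly once.

    from z3 import Bool, Solver, PbEq, sat

    subsets = {'A': {1, 2}, 'B': {3, 4}, 'C': {2, 3}, 'D': {1, 4}}
    pick = {name: Bool(name) for name in subsets}

    s = Solver()
    for elem in (1, 2, 3, 4):
        covers = [(pick[n], 1) for n, members in subsets.items() if elem in members]
        s.add(PbEq(covers, 1))  # exactly one chosen subset covers elem

    if s.check() == sat:
        m = s.model()
        print([n for n in subsets if m[pick[n]]])  # ['A', 'B'] or ['C', 'D']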
Having both techniques "in your pocket", so that you can pick and choose the right solver for the right problem (or experiment when the right choice isn't obvious), is probably best. Unless someone proves P = NP any time soon, this ad-hoc methodology of "trying different techniques until something works" is our best bet at practically solving "puzzles" like traveling salesman (aka: airline travel), covering problems (aka: scheduling), integer optimization / functional equivalence of bitwise hacks (probably used in theorem proving / Verilog synthesis of circuits), or whatever else pops up in the real world.
--------------
A lot of these SAT / backtracking problems are over graphs. Practical applications like graph drawing (think of the "dot" program: how to arrange a graph if it is planar, or, if it is non-planar, how to arrange it so that it has the fewest crossings) are combinatorics problems. So applications pop up in very unusual places: I'd argue that graph layout is a UI problem, for example (a planar graph is easier to comprehend, or at least a graph with the fewest crossings is).
--------
A lot of these problems can be solved with "less powerful" techniques: maximum flow, for example, is itself a solved problem with numerous applications. Maximum flow is a "less than NP-complete" problem, but one that you might reach for Z3 to solve unnecessarily.
Similarly, planarity checking is a solved problem and "less than NP-complete", IIRC. But you'll only really know that if you spend a lot of time researching graph theory.
You really want to use Z3 if you suspect that your problem is truly NP-complete and that an exhaustive search of all possibilities is your only option. Z3 is then useful because people have worked very, very, very hard on discovering "less than NP-complete" optimizations to subproblems, and it can automatically solve those subproblems efficiently.
Essentially, though, you can model fixed-width signed/unsigned integers using bitvectors and make assertions about things like integer overflow or changes in signedness.
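z3-python even ships predicates for exactly this; a quick sketch that finds the signed-overflow boundary:

    from z3 import BitVec, BitVecVal, BVAddNoOverflow, Not, Solver, sat

    x = BitVec('x', 32)
    s = Solver()
    # Ask for an x where signed x + 1 overflows.
    s.add(Not(BVAddNoOverflow(x, BitVecVal(1, 32), signed=True)))
    if s.check() == sat:
        print('overflows at:', s.model()[x])  # 2147483647, i.e. INT_MAX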
UWaterloo did some work to abstract strings on top of BitVec so you can include them in your solver state too; worth a look for people who like this content: https://z3string.github.io/
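z3 now has a built-in string theory as well, so in current z3-python you can state string constraints directly. A quick taste (my own example, not from the Z3str docs):

    from z3 import String, Length, Contains, Solver, sat

    t = String('t')
    s = Solver()
    s.add(Length(t) == 8, Contains(t, "z3"))
    if s.check() == sat:
        print(s.model()[t])  # some 8-character string containing "z3"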
This post reminds me that I've been wanting to try out ZetZ[0]. It incorporates Z3 into a high-level programming language, and seems to do a lot of what the post talks about automatically.
[0]: https://doc.stahlos.com/kernel/notebooks/hashtables.html#est...