At least since 1935 [1, 2] people have been trying to formalize all of mathematics with set theory. I always thought this was fairly interesting, similar to how the real numbers can be modeled in a dependently typed language such as Coq.
However, I always found Hoare logic, and its concurrent extension rely-guarantee from Jones, to be quite easy to understand. The more interesting part is how to do this automatically for a user. Abstract interpretation is one way to do this, but it necessarily requires mapping some programming language to your mathematical model. However, determining the formal semantics of mature programming languages, even C, is still open research (e.g., see the papers on semantics at PLDI 2015 [3]).
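To make the terminology concrete, here is a rough sketch of a Hoare-style specification of a tiny C function (the comment-based annotations are my own, not the syntax of any particular tool):

    /* Precondition  P: 0 <= n, and n + 1 does not overflow          */
    /* Postcondition Q: the return value equals n + 1, so it is >= 1 */
    int increment(int n) {
        return n + 1;
    }

A Hoare triple {P} S {Q} says: if P holds before S runs and S terminates, then Q holds afterwards. Rely-guarantee adds, per thread, a rely condition describing the interference the thread must tolerate from its environment and a guarantee condition bounding the interference it may cause, which is what makes the reasoning compositional for concurrent code.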
At least since 1935 people have been trying to formalize all of mathematics with set theory.
Cantor invented set theory for this purpose in the 1870s. By 1935, Goedel's incompleteness theorems had shown the limitations of this (and any other) foundation of mathematics.
Hoare logic is not a foundation of maths, but an approach to the specification and verification of programs. As such it is in the same space as dependently typed, Curry-Howard-based programming languages like Idris and Agda. The main differences between the two are that (1) Hoare logic is not restricted to constructive reasoning and (2) Hoare logic is not integrated with the programming language, so programming and proving are distinct activities. Depending on your philosophical outlook, either (1) or (2) or both can be seen as a major advantage or disadvantage.
how to do this automatically
What do you mean by "this"? Automatically specifying program semantics, or automatically proving program properties?
determining the formal semantics ... is still open research
This is somewhat misleading. Every compiler gives a perfectly formal semantics to the programming language it compiles. What you probably mean is something like nice abstract accounts of memory models for C that capture all the optimisations modern C compilers want to do, while still being implementable on existing CPUs. That is indeed an open problem [1].
What you probably mean is something like nice abstract accounts of memory models for C that capture all the optimisations modern C compilers want to do, while still being implementable on existing CPUs. That is indeed an open problem [1].
True. Although CompCert [1] is a nice effort toward that goal: a proven-correct C compiler that covers almost all of C99, with many optimizations implemented and proved sound.
CompCert uses tame optimisations and compiles to mainstream architectures, nothing exotic. Most of all, CompCert does not deal with concurrency, as far as I'm aware. Concurrency is where memory models matter.
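To illustrate, here is the classic store-buffering litmus test, sketched with C11 relaxed atomics (with plain non-atomic ints this would be a data race, i.e. undefined behaviour in C11, which is itself part of the problem):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    atomic_int x = 0, y = 0;
    int r1, r2;

    void *thread1(void *arg) {
        atomic_store_explicit(&x, 1, memory_order_relaxed);
        r1 = atomic_load_explicit(&y, memory_order_relaxed);
        return NULL;
    }

    void *thread2(void *arg) {
        atomic_store_explicit(&y, 1, memory_order_relaxed);
        r2 = atomic_load_explicit(&x, memory_order_relaxed);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, thread1, NULL);
        pthread_create(&b, NULL, thread2, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Sequential consistency forbids r1 == 0 && r2 == 0; the C11
           relaxed model allows it, and x86 store buffers produce it
           in practice for non-seq_cst accesses.                      */
        printf("r1=%d r2=%d\n", r1, r2);
        return 0;
    }

As far as I understand, the open part is writing down an abstract semantics that permits exactly the reorderings compilers and CPUs actually perform without also permitting out-of-thin-air results.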
The problem with memory models is not so much verifying them in a mechanised way, but inventing something suitable at all.
The problem with memory models is not so much verifying them in a mechanised way, but inventing something suitable at all.
Agreed.
I'm aware of some work done a few years ago by people working on weak memory models and extending CompCert with some concurrency primitives: http://www.cl.cam.ac.uk/~pes20/CompCertTSO/doc/
Thanks for the interesting history on set theory. However, if you read [1] or the first chapter of [2] in my reply, you'll see that the strides to formalize mathematics started in 1935, so something does not add up.
I did not intend to say that Hoare logic is a foundation of mathematics; this was unclear in my post since I abruptly changed topics. However, I would be delighted to read about someone attempting to do just that.
What I intended to say is that the posted paper is basically a formulation of Hoare logic using set theory.
By this I meant both specifying program semantics and proving program properties. As in, the automated construction of Hoare pre/post annotations or, for concurrency, rely-guarantee annotations. To me, this is bringing languages without dependent types closer to similar kinds of correctness guarantees.
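As a small sketch of what I mean (the invariant comments are the kind of thing an abstract interpreter could emit automatically; the function itself is just an illustration):

    /* Assumed precondition: n >= 0                                   */
    int sum_to(int n) {
        int s = 0;
        /* Invariants an abstract interpreter can infer on its own:
           an interval domain gives 0 <= i and 0 <= s (barring
           overflow); a relational domain such as octagons can add
           i <= n + 1.                                                 */
        for (int i = 0; i <= n; i++) {
            s += i;
        }
        /* The postcondition one would really like, s == n*(n+1)/2,
           needs a stronger (e.g. polynomial) domain or a hand-written
           Hoare-style proof.                                          */
        return s;
    }

The cheap invariants come for free; the properties people actually care about usually need richer domains or user-supplied annotations, which is where doing this "automatically for a user" gets hard.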
Thanks for the clarifications on semantics. You hit the nail on the head.
I went ahead and skimmed the first chapter of [2] (again; I own it) and don't see it justifying that this stuff began in 1935. Indeed, it suggests, e.g., that a major difficulty of the program to formalize all of math in set theory, Russell's Paradox, was discovered in 1901 and then immediately notes that set theory itself was begun by Cantor decades before.
It also then mentions how set theory was formalized and fixed through the introduction of formal classes, which addressed the size concerns discovered by Russell by outlawing them (in fact Russell himself proposed a system that did exactly this, his "ramified hierarchy", almost immediately after discovering his paradox; it is the foundational model of the Principia Mathematica, published in three volumes in 1910, 1912, and 1913), and that these were in use by the mid-1920s.
In fact, the first mention of dates in the 1930s occurs in the discussion of Hilbert's finitist attempts to prove consistency and Godel's destruction of that program.
What you're probably referring to is Bourbaki's work. This wasn't an initial attempt by any means, but instead a broad-reaching program to popularize the set-theoretic methods that had previously been established. This was the moment of catastrophic phase change, when all old mathematical concepts were suddenly suspect if they could not be axiomatized set-theoretically, so it is certainly when the bulk of the work occurred.
But all said, it is certainly the case that set-theoretic foundations of mathematics were an 1870-1930 affair.
However, I always found Hoare logic, and its concurrent extension rely-guarantee from Jones, to be quite easy to understand. The more interesting part is how to do this automatically for a user. Abstract interpretation is one way to do this, but it necessarily requires mapping some programming language to your mathematical model. However, determining the formal semantics of mature programming languages, even C, is still open research (e.g., see the papers on semantics at PLDI 2015 [3]).
TLDR: verification is hard.
[1] https://en.wikipedia.org/wiki/Nicolas_Bourbaki
[2] Goldblatt, Robert. Topoi: The Categorial Analysis of Logic. http://digital.library.cornell.edu/cgi/t/text/text-idx?c=mat...
[3] http://conf.researchr.org/track/pldi2015/pldi2015-papers#pro...