Principia was an attempt to derive mathematics from the ground up, starting from pure logic, motivated by ideas about the foundations of mathematics and logic (wrong ones, it turned out). Knuth is more like A Magical Mystery Tour of Algorithms, wherein Knuth is both the tour guide and the man behind the curtain.
"Wrong" is too strong. The fundamental bases it used are not generally used today, but it was the first of its kind and inspired much. Many of its details are still fine.
If you are interested in the underlying goal of Principia Mathematica, I urge you to check out the Metamath Proof Explorer (MPE):
https://us.metamath.org/mpeuni/mmset.html
By itself, Metamath doesn't have built-in axioms. MPE uses Metamath to first state a small set of widely-accepted axioms, namely classical logic and ZFC set theory, and then proves a tremendous number of things from them, building up a large body of mathematics from careful formal proofs in which every step is rigorously verified.
Some things cannot be proven, but that doesn't mean that proof is dead.
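If you want a feel for what that kind of fully machine-checked development looks like, here's a tiny sketch in Lean rather than Metamath (different system, same spirit): nothing is accepted unless the checker can verify every step. The two statements below are my own illustration, not taken from MPE.

    -- a propositional fact and a bit of arithmetic, both accepted only
    -- because the proof checker can verify every inference
    theorem and_swap (p q : Prop) : p ∧ q → q ∧ p := by
      intro h                       -- assume p ∧ q
      exact ⟨h.right, h.left⟩       -- rebuild it the other way around

    example : 2 + 2 = 4 := rfl      -- holds by computation on the naturals

The proofs on MPE are even more explicit than this: every substitution is spelled out, which is exactly what makes the database easy to verify independently.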
Yes, it was a tad tendentious, but I don’t think anyone really buys the logicist program anymore.
Don’t get me wrong, PM is marvelous and there’s no gainsaying its enormous historical impact.
If your characterization of Metamath is correct, I don’t think that’s in the spirit of PM at all. One of the major problems PM had was the rejection of (what later became) the Axiom of Choice in favor of Russell’s convoluted theory of types. Indeed, that set theory (ZFC) is needed to get the rest of the way R&W were trying to go is one of the signal failures of the logicist program behind PM.
If you believe the big advantage of Principia Mathematica was that it starts with a very few axioms and then manages to formally and exactly prove many things, then MPE is a worthy successor. I'm in that camp.
However, if you think the main point of Principia Mathematica was the very specific set of axioms that they chose, then that's different. The PM authors chose to use a "ramified" theory of types, which is complex. It does have sets, it's just not ZFC sets. Few like its complexities. Later on Quine found a simplification of their approach and explained it in "New Foundations for Mathematical Logic". There's a Metamath database for that "New Foundations" axiom set as well (it's not as popular, but it certainly exists):
https://us.metamath.org/index.html
Strictly speaking that's true for any system that can handle arithmetic (as proved by Goedel). You can show an inconsistency, but you can't prove consistency from within the system itself. No one's found an inconsistency.
Yes, I was too loose with my words again. I was gesturing at relative consistency (especially when compared to the lack of serious doubt about ZF/C’s consistency), as well as the work of folks like RB Jensen and Randall Holmes.
The biggest problem NF has is, as usual, a social one: there just ain’t a lot of people working on, or interested in working on, NF compared to other set theories.
Also check out the Principia Rewrite project, which aims to use the interactive theorem prover Coq to ensure each proof step is a valid step according to Principia’s axioms and that no steps are skipped, even by accident.
It's a bit extreme imo to call Principia wrong. It isn't wrong so much as it will never completely succeed. These are two different things. The logical perspective presented in Principia is still sound and a relatively useful framework for understanding most of mathematics, and it's a monumental and impressive piece of work. I find it hard to believe anyone who has read it wouldn't leave with a better overall comprehension of what exactly it is that we are doing when we do mathematics.
I didn’t say PM was wrong. I said the ideas behind it were — although I should’ve said idea (singular) insofar as I was thinking in particular of the strong form of logicism motivating Russell.
My understanding of the flaws of PM is in its attempts to avoid self-reference, which was sort of folly from the beginning as proven by Godel. I learned this from I Am A Strange Loop, and I'm not sure how accurate it is historically. But Godel's Incompleteness Theorem is one of the most interesting things I've ever read about.
The principal flaw of PM if you were to read it now is that it is an evolutionary dead end.
The elementary vernacular foundation of modern mathematics is (more or less) naive set theory; the starting tools of serious foundational work (as arcane as it is even within maths as a whole) are logic and more rigorous set theory, perhaps with some computability mixed in; the main tool of a mathematician who wants to reason in great generality is category theory (with some handwaving in the direction of the previous point about universe hierarchies and whatnot). There are some signs of convergence between these (and of mainstream mathematicians starting to once again take foundations seriously), but at the basic level those are what you’ll be dealing with.
None of them existed in their current form at the time PM was written, even logic (no Kripke semantics! no forcing! and no Gödel of course). Some did not yet exist at all. Some changed quite drastically in direct response to PM. And of course PM is the origin of (embryonic) type theory, which is the inspiration of the unified approach I referenced above. So as a historically important text, sure, if that’s what you want, but as a gateway to understanding more interesting maths it’d be terribly inefficient.
In that respect TAoCP was uniquely lucky. It was also a self-obsoleting book: it ceased to be comprehensive months after it was published, exactly because it told you everything there was to know about algorithms to date. Yet none of the stuff that’s in it is itself obsolete; there’s just immeasurably more stuff now. PM, on the other hand, was an attempt at “rationalization” in the 19th-century sense, and mostly a failed one, except for serving as fertilizer for all of the later ones.
It wasn't a folly from the beginning, because it seemed like it could be done when Russell and Whitehead started on PM. It was only after it was published that Gödel proved it to have been a folly.
In some ways, it's the opposite: Knuth is super practical and any theory is only in the service of whatever is actually relevant to writing better programs and getting answers. See e.g. this older comment of mine comparing CLRS and TAOCP: https://news.ycombinator.com/item?id=21924265 — where others may say O(lg n), Knuth will say that a trie search inspects an average of lg N + 1.33 bits for a successful search and lg N − 0.11 for an unsuccessful one.
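To make that concrete, here's a rough sketch (my own toy code, not Knuth's, and assuming idealized uniformly random keys) that builds a binary trie over N random 64-bit keys and measures how many bits an average successful search inspects, for comparison with lg N + 1.33:

    /* Toy binary trie over N random 64-bit keys: count the bits a successful
       search inspects on average and compare with lg N + 1.33. */
    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct node {                  /* a leaf stores a key; an inner node branches on one bit */
        struct node *child[2];
        uint64_t key;
        int is_leaf;
    };

    static uint64_t rng_state = 88172645463325252ULL;
    static uint64_t next_key(void)     /* small xorshift PRNG, full 64-bit output */
    {
        uint64_t x = rng_state;
        x ^= x << 13; x ^= x >> 7; x ^= x << 17;
        return rng_state = x;
    }

    static int bit(uint64_t key, int depth) { return (key >> (63 - depth)) & 1; }

    static struct node *make_leaf(uint64_t key)
    {
        struct node *n = calloc(1, sizeof *n);
        n->key = key;
        n->is_leaf = 1;
        return n;
    }

    static struct node *insert(struct node *n, uint64_t key, int depth)
    {
        if (n == NULL)
            return make_leaf(key);
        if (n->is_leaf) {
            if (n->key == key)                        /* ignore duplicates */
                return n;
            struct node *inner = calloc(1, sizeof *inner);
            inner->child[bit(n->key, depth)] = n;     /* push the old leaf one level down */
            return insert(inner, key, depth);
        }
        int b = bit(key, depth);
        n->child[b] = insert(n->child[b], key, depth + 1);
        return n;
    }

    static int bits_inspected(const struct node *n, uint64_t key, int depth)
    {
        if (n->is_leaf)
            return depth;            /* one bit was looked at per level on the way down */
        return bits_inspected(n->child[bit(key, depth)], key, depth + 1);
    }

    int main(void)
    {
        enum { N = 1 << 14 };
        static uint64_t keys[N];
        struct node *root = NULL;
        for (int i = 0; i < N; i++) {
            keys[i] = next_key();
            root = insert(root, keys[i], 0);
        }
        double total = 0;
        for (int i = 0; i < N; i++)
            total += bits_inspected(root, keys[i], 0);
        printf("measured: %.2f bits   lg N + 1.33: %.2f\n", total / N, log2(N) + 1.33);
        return 0;
    }

With N = 2^14 the measured average should land close to log2(N) + 1.33, which is exactly the kind of constant-level precision (as opposed to a bare O(lg n)) that I mean.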
But then again, as a result of all this work from him, pulling together and categorizing and cataloguing and analyzing all the disparate ad-hoc “tricks” that programmers had come up with, Knuth basically put (this area of) computer science on good foundations and made it a respectable academic discipline. So in that sense he successfully executed a Principia-like project.
To put it another way, the computer programming/algorithms world is generally divided into hackers who derive satisfaction from elegant ways of bending computers to their will, and theorists who are happy to talk asymptotics, treat their problems abstractly as a branch of mathematics, and often never write a single program. What sets Knuth apart is that he straddles both worlds: he is firmly a hacker at heart, but is able to bring prodigious theoretical skill to bear on the area.
It continues to surprise me that Principia Mathematica (PM) still gets mentioned so regularly in discussions related to computer science topics. As far as I can tell, PM was one of the least influential works in any branch of mathematics or logic or philosophy or computer science. It must be one of the least-read books (3 volumes, 1994 pages) ever written. PM's sole claim to fame is that it was mentioned in Kurt Goedel's famous paper "Ueber formal unentscheidbare Saetze der Principia Mathematica und verwandter Systeme I". The emphasis is strongly on "verwandter Systeme" - related systems, of which PM is just one example.
How can {deliberately doing harmful things to achieve a desired harmful outcome} and {doing whatever, with a lack of judgment and no regard for consequences at all} be the same thing? In what situations?
The Linux system calls used by cosmopolitan work fine on Android provided they aren't blocked by seccomp policy. Some of those blocks make sense, others not so much. Yesterday I discovered I could use all the stat variants but not statx.
I believe the features provided by standard C libraries are well covered by the set of allowed system calls.
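For anyone curious what coping with that looks like, here's a minimal sketch (my own illustration, not cosmopolitan's actual code; it assumes a libc that declares statx() and a seccomp filter that fails the blocked call with an errno such as EPERM or ENOSYS rather than killing the process with SIGSYS): probe statx() first, then fall back to the classic interface that the policy allows.

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>

    /* Try the modern statx() first; if the policy rejects it with an errno,
       retry with fstatat(), one of the stat variants that is still allowed. */
    static int file_size(const char *path, long long *out)
    {
    #ifdef STATX_SIZE
        struct statx stx;
        if (statx(AT_FDCWD, path, 0, STATX_SIZE, &stx) == 0) {
            *out = (long long)stx.stx_size;
            return 0;
        }
        /* A filtered or unimplemented syscall usually shows up as EPERM or
           ENOSYS; anything else is a genuine error worth reporting. */
        if (errno != EPERM && errno != ENOSYS)
            return -1;
    #endif
        struct stat st;
        if (fstatat(AT_FDCWD, path, &st, 0) != 0)
            return -1;
        *out = (long long)st.st_size;
        return 0;
    }

    int main(void)
    {
        long long size;
        if (file_size("/etc/hosts", &size) == 0)
            printf("%lld bytes\n", size);
        else
            perror("stat");
        return 0;
    }

That's roughly the shape of the fallback dance a portable libc has to do for every syscall a given platform might filter.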
> We're not quite at the level (yet) of feeding a whole codebase to an LLM and making it add features or make changes while understanding the big picture of the problem being solved
We are one step away from that. All we need is a more advanced form of fine tuning.
Personal experience building infrastructure and an autoscaler for these things (which performed horribly due to API lag) while having weekly meetings with our GitHub rep to get the --ephemeral flag in (took about a year longer than promised). Sometimes the exit code would be 0 when using `--once`, and sometimes it would not be. Sometimes it'd also be 0 if the token somehow didn't work and the worker couldn't even register, no matter how often you restarted it (of course with a cryptic Azure IAM error code). Either way, we eventually just decided that throwing away the machine if the runner exits for any reason was safest.
The linked issue is not about random exit codes though? And also doesn't really seem to support his assertion that the maintainers don't know how it works, but I'll admit I'm not very familiar with GitHub Actions.
she did not, the exit code story is a different thing, though one I told on HN before. basically i spent half a year or so writing infra and autoscaling for action runners on GCP, and there's absolutely nothing there i'm proud of. the pieces just didn't fit together. and weirdly enough, the majority of the blame isn't even with GCP and their broken image snapshotting, slow provisioning times and asinine API design.