They had me going in the Nock spec until "We should note that in Nock and Hoon, 0 (pronounced "yes") is true, and 1 ("no") is false. Why? It's fresh, it's different, it's new. And it's annoying. And it keeps you on your toes. And it's also just intuitively right."
There is one ground truth, origin, fixed point like the North Star: zero. There are an infinite number of possible falsehoods: all nonzero numbers. But Urbit chooses 1 as the canonical false.
But on POSIX systems there's a reason for that: there's only one kind of success (since "success" should do the same thing every time), but many types of errors, which can be distinguished by the return value. You could argue that 1 should be success and >1 should be failure, but that's a minor quibble.
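A minimal sketch of that convention in Python, relying only on the standard POSIX `true`, `false`, and `grep` utilities:

```python
import subprocess

# POSIX convention: exit status 0 is the single "success" value;
# any nonzero status (1-255) can encode a distinct kind of failure.
ok = subprocess.run(["true"])    # exits with status 0
bad = subprocess.run(["false"])  # exits with status 1
print(ok.returncode, bad.returncode)  # 0 1

# grep is a classic example of multiple meaningful statuses:
# 0 = match found, 1 = no match, 2 = error (per POSIX).
r = subprocess.run(["grep", "-q", "needle", "/dev/null"])
print(r.returncode)  # 1: the search ran fine, but found nothing
```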
Conversely, here it's just because "it's different". I feel that this is a bit of a shame: some of the other parts of the project appear quite interesting, but making fundamental decisions in downright wrong ways just to mess with expectations comes across as silly, to say the least. Why deliberately raise the learning barrier and drive people away?
I absolutely understand your reaction, but, believe it or not, I remember wondering as a child why it is the opposite. At the time it seemed completely intuitive and natural to me that 1 should be "false" and 0 "true".
However, years later I was introduced to Boolean algebra, where 1 has to be true and 0 false if we want multiplication to be "and" and addition "or". And it feels right, because the intersection of two sets is (intuitively) multiplication, and the union is addition.
So, yeah, after all these years it doesn't seem like a good idea to me either.
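A quick sketch of that identification, with 1 as true: multiplication behaves exactly like "and", and addition behaves like "or" once you cap the sum at 1 (since 1 + 1 = 2):

```python
# With 1 = true and 0 = false, * is "and" and (capped) + is "or".
for a in (0, 1):
    for b in (0, 1):
        assert a * b == (a and b)         # multiplication = conjunction
        assert min(a + b, 1) == (a or b)  # saturating addition = disjunction
```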
Compare to probability: the probability of an element being in one set AND in another, independent set is the product of the two probabilities (which are, in fact, themselves defined in terms of sets).
"Pairs" (and, correspondingly, the Cartesian product) are a bit more complex, and not as closely related to the deductive side of Boolean algebra.
x ∈ (A ∩ B) = (x ∈ A) * (x ∈ B)
if "x ∈ A" equals 1 when x is in the set A and 0 when x is not. Using "indicator functions" like this also gives you a nice formulation for probability, integration, etc. that falls apart if you use 1 to represent x ∉ A.
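A small sketch of that identity with hypothetical example sets A and B, where `ind` plays the role of the indicator function:

```python
# Indicator function: 1 if x is in the set, 0 otherwise.
def ind(x, s):
    return 1 if x in s else 0

A = {1, 2, 3}  # hypothetical example sets
B = {2, 3, 4}

for x in range(6):
    # x ∈ (A ∩ B) = (x ∈ A) * (x ∈ B)
    assert ind(x, A & B) == ind(x, A) * ind(x, B)

# With the inverted convention (0 = member), the product identity breaks:
# inv_A * inv_B is 0 whenever x is in A *or* B, so the straight product
# lines up with union, not intersection.
x = 1  # in A but not in B, hence not in A ∩ B
inv_A, inv_B = 1 - ind(x, A), 1 - ind(x, B)
print(inv_A * inv_B)      # 0 ("yes") under the inverted convention
print(1 - ind(x, A & B))  # 1 ("no"): the product disagrees with membership
```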
edit: I should add that I'm not claiming "you can't build measure-theoretic probability from this formulation of booleans" is a strike against the project. Just addressing the math question.
I don't see how you lose Boolean algebra; you just need to flip * and + in your equations:
x ∈ (A ∩ B) = (x ∈ A) + (x ∈ B) (and)
x ∈ (A ∪ B) = (x ∈ A) * (x ∈ B) (or)
(reading any nonzero sum as "no", since the sum hits 2 when x is in neither set)
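A quick check of the flipped identities, with 0 read as "yes" and any nonzero value as "no" (hypothetical example sets again; the intersection identity only holds up to zero-ness, because the sum can reach 2):

```python
# Inverted convention: 0 means "x is in the set", nonzero means it is not.
def inv(x, s):
    return 0 if x in s else 1

A = {1, 2, 3}  # hypothetical example sets
B = {2, 3, 4}

for x in range(6):
    # and: x ∈ (A ∩ B) = (x ∈ A) + (x ∈ B), reading nonzero as "no"
    assert (inv(x, A & B) == 0) == (inv(x, A) + inv(x, B) == 0)
    # or:  x ∈ (A ∪ B) = (x ∈ A) * (x ∈ B) holds exactly
    assert inv(x, A | B) == inv(x, A) * inv(x, B)
```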
which is useful when you define integrals and expectations:
E g(Y) = ∫ g(y) f(y) dy ≈ ∑ᵢ g(cᵢ) P(Y ∈ Aᵢ)
where Y is a random variable with density function f, the Aᵢ partition its range, and cᵢ ∈ Aᵢ. Any integrable function can be approximated as the limit of step functions, so this is a well-behaved way to get a general theory of integration.
Of course, one could replace (Y ∈ Aᵢ) with 1 - (Y ∈ Aᵢ) if one wanted "0" to represent the event (Y ∈ Aᵢ) and "1" to represent its complement, without affecting the truth of the math, but then there would be lots of terms floating around just to convert the notation into the form you need for the math.
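A numeric sketch of the step-function approximation under assumed toy inputs: Y uniform on [0, 1] (so f ≡ 1), g(y) = y², equal-width bins Aᵢ with midpoints cᵢ, and true expectation 1/3; the last lines hint at the 1 - (…) conversion terms the inverted convention would scatter around:

```python
# E g(Y) ≈ Σᵢ g(cᵢ) P(Y ∈ Aᵢ) for a step-function approximation of g.
# Toy assumptions: Y ~ Uniform(0, 1), g(y) = y², so E g(Y) = 1/3.
n = 1000
width = 1.0 / n  # each Aᵢ is an interval of this width
prob = width     # P(Y ∈ Aᵢ) under the uniform density
approx = sum(((i + 0.5) * width) ** 2 * prob for i in range(n))
print(approx)    # ≈ 0.333333 (exact value is 1/3)

# Under the inverted convention the event indicator is 0, so every
# appearance of (Y ∈ Aᵢ) needs a 1 - (…) conversion before use:
member = 0                    # "yes" in the inverted convention
usable_indicator = 1 - member # the conversion term, needed everywhere
```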