Yeah, you can look in the contributors' dashboard; they stopped around 2019. Like any documentation, its fate is to wither and decay. But hey, he got a book out of it.
That's a pretty narrow view of software, too. Building software systems is not always about being able to mass-produce them. What would be the point of copying and distributing 10K copies of my script that runs migrations for one legacy database in a very specific way?
It's complete nonsense. "The total number of moves in the game is equal to the number of stones initially in the pile, which is 5000."
Similarly, on the Martian question, "If we transform 1 red and 1 green Martian, we get 4 blue Martians. This changes the parity of red and green Martians from even to odd, and the parity of blue Martians from even to odd." is complete nonsense too.
"the sum of the digits of a number k modulo 2 is equivalent to k mod 2" - 12?
Basically all "solutions" are regurgitated techniques from math competitions which are used completely incorrectly but with a lot of confidence
So, "it is really mostly a matter of luck", and just do so much work that they decide to fire someone else... sigh. If only there was some way for the workers to negotiate collectively for working terms that avoid mass layoffs whenever the interest rates go the wrong way.
I think the anecdote highlights that there's no incremental way to approach gRPC: it's not a low-risk, small-footprint prototype that can be introduced slowly and integrated with existing systems and environments. Which, well, is a bit of a fault of gRPC.
I think that's not true. There are plenty of incremental ways to adopt gRPC. For example, there are packages that can facade/match your existing REST APIs[1][2].
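A sketch of the general pattern (not necessarily what [1] or [2] do): keep an existing REST endpoint and back just that one handler with a new gRPC service. The `users_pb2`/`users_pb2_grpc` modules and the `UserService` names below are hypothetical stand-ins for whatever protoc would generate from your .proto:

```python
import grpc
from flask import Flask, jsonify

import users_pb2        # hypothetical generated message classes
import users_pb2_grpc   # hypothetical generated client stubs

app = Flask(__name__)
channel = grpc.insecure_channel("localhost:50051")
stub = users_pb2_grpc.UserServiceStub(channel)

@app.route("/users/<int:user_id>")
def get_user(user_id):
    # Existing REST clients see no change; only this handler knows
    # the data now comes over gRPC.
    reply = stub.GetUser(users_pb2.GetUserRequest(id=user_id))
    return jsonify(id=reply.id, name=reply.name)
```

You can migrate one endpoint at a time this way, and drop the facade once enough clients speak gRPC directly.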
You haven't demonstrated most of these assertions. For example, if everyone gambled against everyone else every second, that system has a pretty good chance of staying close to equilibrium for a long time, whereas your model indicates the total utility would be depleted almost immediately. And if everyone were fully insured against absolutely every risk in their life and immediately received a replacement of the exact same value on any loss, the overall system would just steadily trend toward all the money ending up at the insurer, which doesn't seem like increased total utility either.
>>You haven't demonstrated most of these assertions. For example, if everyone gambled against everyone else every second, then that system has a pretty good chance of staying close to equilibrium for a long time
>>whereas your model indicates that the total utility would be depleted almost i
Can you describe specific assumptions about what that "everyone gambling against everyone else" setup would look like? I just don't see how my model could predict total utility being depleted very quickly while the system still has a good chance of staying close to equilibrium.
My model is very simple: apply a utility function to wealth. When you model people flipping coins against each other, you will see a lot of busted people and a lot of rich people pretty quickly, and that means a significant utility decrease.
Ok, take 100 people with $100 each, and have one round of $1 coin flips between each pair. That's a significant number of bets overall: 4950. Each person has wagered 1% of their net worth 99 times, something we would all agree sounds quite scary. And yet there will be no busted people; most will likely end up between $80 and $120. Repeat this 10 times, a ridiculous amount of gambling, and there are still most likely no bankruptcies, while the total utility, if we assume log utility, has barely dropped by 1%.
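Here's a quick sketch of that setup in Python, assuming log utility (the seed and the bet-eligibility rule are my choices; the ~1% figure is what it typically prints):

```python
import math
import random

# 100 people, $100 each; one $1 coin flip between every pair
# (4950 bets per round), repeated for 10 rounds, as above.
random.seed(0)
people = [100] * 100

def total_log_utility(wealth):
    # Floor at $1 so log stays defined; a busted player contributes log(1) = 0.
    return sum(math.log(max(w, 1)) for w in wealth)

start = total_log_utility(people)
for _ in range(10):
    for i in range(len(people)):
        for j in range(i + 1, len(people)):
            if people[i] >= 1 and people[j] >= 1:  # can't wager money you don't have
                winner, loser = (i, j) if random.random() < 0.5 else (j, i)
                people[winner] += 1
                people[loser] -= 1

print("bankruptcies:", sum(w < 1 for w in people))
print(f"utility drop: {(start - total_log_utility(people)) / start:.2%}")
# Typically prints 0 bankruptcies and a drop on the order of 1%.
```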
I simply do not believe that we are making such a subtle societal optimization by frowning upon gambling while encouraging all kinds of other risk-taking, like investments and property speculation.
And the other scenario, where insurance just acts as a drain on the overall system, seems to indicate that it is not inherently positive for utility either.
This is a bit of a stretch from what I said - a 1% drop after the entire population has gambled through 10x their net worth is not meaningful. I also pointed to other speculative activities which we encourage, presumably because they compensate by growing the economy. Insurance might preserve or increase equality, but it also might extract so much rent that the overall utility is lower. There is simply no cut-and-dried explanation: for some parameter choices things work the way you say, and for some they don't.
I think you are simply wrong about your assumptions. If the system is semi-stable, the total utility won't decrease that much; if it's not stable, it will.
I am not sure exactly what you are missing there, but maybe it's that the *expected* utility goes down, not utility going down with every outcome. For example, if people with $80 and $120 net worth flip a coin for $20, total utility might go up (both at $100) or down (at $60 and $140), but the expected value drops, because u(60) + u(140) < 2·u(100) for any concave utility function. Maybe that's why you think my utility model predicts total utility collapsing quickly, when in fact it predicts a slow decrease (if the whole setup is close to stable).
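A two-line check of that step, assuming log utility (u = ln):

```python
import math

u = math.log
before = u(80) + u(120)                                   # total utility before the flip
after = 0.5 * (u(60) + u(140)) + 0.5 * (u(100) + u(100))  # expected total utility after
print(f"{before:.4f} -> {after:.4f}")                     # 9.1695 -> 9.1232: expected drop
```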
My model was not meant to demonstrate that utility never goes down; it was to show that it can go down extremely slowly, which makes the utility argument for why we discourage gambling societally a bit weak - we're clearly not very good at discouraging other behaviors that result in long-term bad outcomes (for society or the planet), and we reward all sorts of risk-taking.
I think the simpler explanation is that gambling is seen as addictive and destructive on an individual level, and we don't need a total-utility argument to explain why that's undesirable.
Ah, a brilliant idea: design all languages so people who only know C can make small changes to existing Ruby code without getting confused. Definitely don't want to enable different abstractions or models of computation. Are the core principles in the codebases you mentioned mostly "how to write stuff using C paradigms"? After all, "the Real Programmer can write FORTRAN in any language".
Their "proof" that there are only 12 solutions to 8 Mutually Non-Attacking Queens on Chessboard is just "That there are only these 12 can be proved by brute force." :/
In my university math department's Putnam problem competition (the problems would just be posted on the wall; the prize was a $40 giant pizza each week), they would accept the most elegant solution, so if nobody else submitted something better, I'd get a pizza for just running a few lines of Python.
I'm not gatekeeping proofs here, and I'm glad you got math pizza :) If ProofWiki had exhaustively printed all possible arrangements, or the decision tree of constructing them, or if they had even included the code that does the checking (like, say, https://www.richard-towers.com/2023/03/11/typescripting-the-...), then I would agree it counts. But without even a rough ballpark estimate of the number of possible arrangements to check, asserting "brute force" does not make a proof. If I incorporate some understanding of the problem, I can see that we need to check at most 8! arrangements, which is reasonable. But if the constraints were not so simple, we might be dealing with 64-choose-8 cases instead (over 4 billion), which is heading into not-reasonable.
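For this particular problem the check is tiny; here's a sketch in Python of the 8! brute force (one queen per row and column, then filter on diagonals):

```python
from itertools import permutations

# cols[r] is the column of the queen in row r; iterating permutations
# already guarantees one queen per row and per column, so only the
# diagonal constraints remain to be checked.
solutions = [
    cols for cols in permutations(range(8))
    if len({r + c for r, c in enumerate(cols)}) == 8   # "/" diagonals all distinct
    and len({r - c for r, c in enumerate(cols)}) == 8  # "\" diagonals all distinct
]
print(len(solutions))  # 92 placements; ProofWiki's 12 are these up to rotation/reflection
```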
They could add the same sentence under every finite fact in their wiki, but then it wouldn't be a proof wiki; it would be a list of numeric facts they checked by brute force, which we can either trust or check ourselves.
The task of proving some statement and the task of finding the shortest, or a "reasonably short", proof are very different endeavours.
The first is about certainty that a statement is valid ("true"). The other is about simplifying the understanding of _why_ it is valid. Most of the time, you don't care much about the latter.
At current rates, whatever is done on a supercomputer today can be done on a cheap pocket-size device just decades later. So I'm not too worried about this case.
One of the first famous examples of this is the four-color theorem. I don't know any serious mathematician who is not certain of that result.