[Former member of that world, roommates with one of Ziz's friends for a while, so I feel reasonably qualified to speak on this.]
The problem with rationalists/EA as a group has never been the rationality, but the people practicing it and the cultural norms they endorse as a community.
As relevant here:
1) While following logical threads to their conclusions is a useful exercise, each logical step often involves some degree of rounding or unknown-unknowns. A -> B and B -> C means A -> C in a formal sense, but A -almostcertainly-> B and B -almostcertainly-> C does not mean A -almostcertainly-> C. Rationalists, tending toward overly formalist approaches, lose the thread of the messiness of the real world and follow these lossy implications as though they are lossless (a toy numeric sketch of how fast that confidence decays follows the TLDR below). That leads to...
2) Precision errors in utility calculations that are numerically-unstable. Any small chance of harm times infinity equals infinity. This framing shows up a lot in the context of AI risk, but it works in other settings too: infinity times a speck of dust in your eye >>> 1 times murder, so murder is "justified" to prevent a speck of dust in the eye of eternity. When the thing you're trying to create is infinitely good or the thing you're trying to prevent is infinitely bad, anything is justified to bring it about/prevent it respectively.
3) Its leadership - or some of it, anyway - is extremely egotistical and borderline cult-like to begin with. I think even people who like e.g. Eliezer would agree that he is not a humble man by any stretch of the imagination (the guy makes Neil deGrasse Tyson look like a monk). They have, in the past, responded to criticism with statements to the effect of "anyone who would criticize us for any reason is a bad person who is lying to cause us harm". That kind of framing can't help but get culty.
4) The nature of being a "freethinker" is that you're at the mercy of your own neural circuitry. If there is a feedback loop in your brain, you'll get stuck in it, because there's no external "drag" or forcing functions to pull you back to reality. That can lead you to be a genius who sees what others cannot. It can also lead you into schizophrenia really easily. So you've got a culty environment that is particularly susceptible to internally-consistent madness, and finally:
5) It's a bunch of very weird people who have nowhere else they feel at home. I totally get this. I'd never felt like I was in a room with people so like me, and ripping myself away from that world was not easy. (There's some folks down the thread wondering why trans people are overrepresented in this particular group: well, take your standard weird nerd, and then make two-thirds of the world hate your guts more than anything else, you might be pretty vulnerable to whoever will give you the time of day, too.)
TLDR: isolation, very strong in-group defenses, logical "doctrine" that is formally valid and leaks in hard-to-notice ways, apocalyptic utility-scale, and being a very appealing environment for the kind of person who goes super nuts -> pretty much perfect conditions for a cult. Or multiple cults, really. Ziz's group is only one of several.
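To put a rough number on point 1 (just a toy sketch; the 99% and 90% per-step figures are made up for illustration): chain a handful of individually "almost certain" steps and the overall confidence quietly collapses.

p_step = 0.99              # "almost certainly" at each individual step
println(p_step^10)         # ten chained steps: ≈ 0.904
println(0.9^10)            # at 90% per step the chain is already ≈ 0.349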
"... insanity is often marked by the dominance of reason and the exclusion of creativity and humour. Pure reason is inhuman. The madman’s mind moves in a perfect, but narrow, circle, and his explanation of the world is comprehensive, at least to him."
To be fair it did work out surprisingly well in the early days; even the really weird comment chains attracted only a small minority of the bizarrely deranged. Probably because back then the median LW commenter was noticeably smarter than the median HN commenter.
Pascal’s mugging was even coined there I believe, but then as it grew… whatever communal anti-derangement protections existed gradually declined.
And it now is more often than not a negative example.
I am not against rationalism at all despite my quoting of Hume. Reason is essential and has done much good to the world. It's just that things tend to get kooky at the tail end of any distribution.
The rationalist community have some of the smartest people and the best blogs, and they think through things much more thoroughly and are much less prone to biases and fallacies than most online communities.
I feel like it's not a question if you can, but if you should.
Are you actually smart if you spend any significant amount of time thinking through things in order to be rational?
I have a good friend that is by all reasonable metrics incredibly smart. Graduated high school and college at the same time at age 16. Doctorate at 22. Professor at a top university for several years. But lives in absolute squalor and spends his time and brain power on rational thinking and philosophy to understand life.
But, it is life, and if you don't experience it how can you understand it, and if you do understand it, what is gained?
His life is the inside of his house and a daily trip to Dollar general to buy mountain dew, cigarettes and frozen burritos.
I have never seen a more sublime demonstration of the totalitarian mind, a mind which might be likened unto a system of gears whose teeth have been filed off at random. Such a snaggle-toothed thought machine, driven by a standard or even a substandard libido, whirls with the jerky, noisy, gaudy pointlessness of a cuckoo clock in Hell.
Yes, exactly, “Crime and punishment” or ”Demons” or others. Some of the dialogues are exactly about the ideologies and how different characters think and apply them, how reason manifests in violence.
This made me think of C&P as well. Specifically how Raskolnikov developed his own half baked ideology where “great men” were free to act with impunity. It’s not hard to draw parallels with “longtermism” and effective altruism.
Even Aristotle knew that reason was just an aspect of being a human and not the whole thing.
To be honest the only philosopher I know of who convincingly argued that everything is reason is Hegel, but he did so more by making the idea of reason so broad that even empiricism (and emotion, humour, love, the body, etc.) falls under it...
Hegel still has a really bad reputation regarding atrocities and his Philosophy of history.
"History as the slaughter-bench" - and yet the aims of reason are accomplished."
But there are also Hegel scholars (Walter Jaeschke for example) who simply consider these accusations to be uneducated and that he does not see the atrocities of history as reasonable, but on the contrary makes criticism possible in the first place.
All of Hegel, and most of his descendants, are fashionable nonsense. All types of "Dialectics" are fake and don't exist. It's telling that the most common version of the term "Dialectics" that everyone thinks Hegel coined was actually coined by one of his (many butthurt) students, Fichte.
Philosophy will continue to be bankrupt for as long as Hegel's stranglehold on Philosophy remains. Kill his thought, Kill "dialectics", Kill the "world spirit" or "Geist". Otherwise philosophy continues down the "post modern neo marxism" loony world that has led so many to turn reactionary.
Edit (in response to the comment cus I can't reply faster since Dang's HN policies are bad):
I read his shitty books cover to cover. They weren't worth opening, let alone reading. This is the same for most of the rest of the "postmodern" canon.
Competitive debate meant that we weaponized these long dead idolaters for our own needs. I've (unfortunately) read Zizek, Foucault, Derrida, Deleuze, Sartre, Heidegger, etc. I regret most of the time I spent reading these authors. They are all intellectually bankrupt and many of them are straight up pseudo-scientific charlatan snake oil salespeople (Lacan).
So you’ve discarded a lot of modern, postmodern, and contemporary thinkers without offering some alternative, or do you mean that there isn’t much to philosophy in general? I’m curious because I have also come to a similar conclusion, though I have to admit that I have not read everyone you’ve mentioned, only summaries and analyses.
> This is the same for most of the rest of the "postmodern" canon.
I genuinely can’t think of anyone less postmodern than Hegel. He’s a pure rational traditionalist who believed completely in objective truth and morality and grand narratives lol
This sounds a lot like the psychopath Anton Chigurh in the movie No Country for Old Men. His view of the world is he is the arbiter of people's destiny, which often involves them being murdered.
Another thing I'll add after having spent a few years in a religious cult:
It's all about the axioms. If you tweak the axioms you can use impeccable logic to build a completely incorrect framework that will appeal to otherwise intelligent (and often highly intelligent) people.
Also, people are always less rational than they are able to admit. The force of things like social connection can very easily warp the reasoning capabilities of the most devout rationalist (although they'll likely never admit that).
I'm kinda skeptical these folks were following some hyper-logical process from flawed axioms that led them to the rigorous solution: "I should go stab our landlord with a samurai sword" or "I should start a shootout with the Feds".
The rationalist stuff just seems like some meaningless patter they stuck on top of more garden variety cult stuff.
The axioms of rationality, morality, etc. I've always found interesting.
We have certain axioms (let me choose an arbitrary, and possibly not quite axiomy-enough, example): "human life has value". We hold this to be self-evident and construct our society around it.
We also often don't realize that other people and cultures have different axioms of morality. We talk/theorize at this high level, but don't realize we have different foundations.
Wow, what a perfect description of why their probability-logic leads to silly beliefs.
I've been wondering how to argue within their frame for a while, and here's what I've come up with: Is the likelihood that aliens exist, are unfriendly, and AGI will help us beat them higher or lower than the likelihood that the AGI itself that we develop is unfriendly to us and wants to FOOM us? Show your work.
It’s pointless. They aren’t rational. Any argument you come up with that contradicts their personal desires will be successfully “reasoned” away by them because they want it to be. Your mistake was ever thinking they had a rational thought to begin with, they think they are infallible.
"Widespread robots that make their own decisions autonomously will probably be very bad for humans if they make decisions that aren't in our interest" isn't really that much of a stretch is it?
If we were going slower maybe it would seem more theoretical. But there are multiple Manhattan-Project-level or (often, much) larger efforts ongoing to explicitly create software and robotic hardware that makes decisions and takes actions without any human in the loop.
We don't need some kind of 10000 IQ god intelligence if a glitch token causes the majority of the labor force to suddenly and collectively engage in sabotage.
None of those projects are even heading in the direction of "AGI". The state of AI today is something akin to what science fiction would have called an "oracle", a device that can answer questions intelligently or seemingly-intelligently but otherwise isn't intelligent at all: it can't learn, has no agency, does nothing new. Even if that can be scaled up indefinitely, there's no reason to believe that it will ever become an AGI.
If any of this can make decisions in a way that a human can, then I would start to question what human decision-making really amounts to.
For what it's worth, as the person at the top of this thread, I actually do take AI risk pretty seriously. Not in a singulatarian sense, but in the sense that I would be quite surprised if AI weren't capable of this stuff in ten years.
Even the oracle version is already really dangerous in the wrong hands. A totalitarian government doesn't need to have someone listening to a few specific dissidents if they can have an AI transcriber write down every word of every phone conversation in the country, for example. And while it's certainly not error-proof, asking an LLM to do something like "go through this list of conversations and flag anything that sounds like anti-government sentiment" is going to get plenty of hits, too.
> "Widespread robots that make their own decisions autonomously will probably be very bad for humans if they make decisions that aren't in our interest" isn't really that much of a stretch is it?
We already have widespread humans making their own decisions autonomously that aren’t in the best interest of humans, and we’re all still here.
Much of philosophy throughout history seems to operate this way.
I think philosophy is a noble pursuit, but it's worth noting how often people drew very broad conclusions, and then acted on them, from not very much data. Consider the dozens of theories of the constitution of the world from the time of the Greek thinkers (even the atomic theory doesn't look very much at all like atoms as we now understand them), or the myriad examples of political philosophies that ran up against people simply not acting the way the philosophy needed them to act to cohere.
The investigation of possibility is laudable, but a healthy and regular dose of evidence is important.
> Much of philosophy throughout history seems to operate this way.
“Philosophy is poor at revealing truths but excellent at revealing falsehoods (or at least unsupported arguments)” was the main lesson I took from informally studying it.
Anything about the self bumps into an immediate problem here. For instance I cannot prove to you that I'm conscious and not simply an automaton who's not actually thinking. My evidence for such is strictly personal - I can personally testify to my own experience of consciousness, but you have no reason to believe me since that's not evidence.
And in fact even for myself - perception is not necessarily valid evidence since perception can be distorted. If I am in a compelling VR game I might be more than willing to swear to the fact that I'm flying (if I wasn't otherwise aware of the situation) - while you simply look at me acting a fool standing still while vigorously flapping my arms.
... so at some point, one realizes one has pondered one's way into untestables and goes back to living. Or doesn't, I guess, and then gets kept up at night anxious about the notion that in some as-yet-unrealized future, an AI is forever torturing an identical copy of oneself that one cannot possibly ever meet.
The programming analogy to this kind of philosophy is writing design docs (or building a class hierarchy of abstracts) without ever writing implementation. Lots of work, but why should anyone outside the room care?
It contradicts the ideals of an evidence-based system of values. Most of what we believe, we believe because we think it is right, and there are always, more or less, viable arguments for almost any remotely reasonable view. And this applies to all people. For instance, it was none other than Max Planck who observed, "Science progresses one funeral at a time."
I also think this is for the best. If one looks at the evidence of the skies it's indisputable that humanity lies at the center of the cosmos, with everything revolving around us - which, in turn, naturally leads into religious narratives around Earth. It's only thanks to these weirdos that adopted quite absurd sounding (for the time) systems of views and values, completely without meaningful evidence at first, that we were able to gradually expand, and correct, our understanding of the universe.
And this doesn't just apply to the ancient past. Einstein's observation that the speed of light stays fixed while the rate of passage of time itself is variable, to enable the former, sounds so utterly and absolutely absurd. In many ways the real challenge in breaking through to relativity wasn't the math or anything like that (which, in fact, Einstein himself lacked when first developing the concept) but accepting that a concept that sounds so wrong might actually be right.
Excellent point. And it shines a light directly on what I'm saying. There's a great line from the Hitchhiker's Guide to the Galaxy:
""""Oh, that was easy," says Man, and for an encore goes on to prove that black is white and gets himself killed on the next zebra crossing.”"""
... and, of course, the philosopher can argue that we are all of us tumbling through a Pachinko-machine of parallel universes, and it is only our constrained perception that suggests Man is dead; in a parallel universe, Man is just fine and living their days in the grey universe that we, through some as-yet-unphilosophized-but-don't-worry-we're-thinking-hard-about-it process, cannot exist in.
... but does that matter over-much if the observable is that ignoring zebra crossings gets you crossed out of this universe?
very few philosophers dared to live by their theories. the famous failures of aristotle (in the case of alexander) and plato in syracuse (where he saw firsthand that the philosopher-king is at best a book character) are good examples. seneca didn’t live stoically: he was avaricious and didn’t hesitate to incite a war over unpaid debts, if ancient sources are to be believed. he failed horribly with nero, who later instructed him to commit suicide for treasonous crimes. again, if ancient sources are to be believed, he fumbled his suicide out of raw fear.
the cynics, though, made a good life but that’s not because they had a better philosophy. it’s because cynicism is base/primitive logic available to the brute as well as the civilized man.
I actually have on my desk, right now, Mary Wollstonecraft's "A Vindication of the Rights of Woman."
To summarize one of her points from memory, she basically lays into Rousseau about having a lot of opinions on domestic life for someone who clearly doesn't even know how to set foot in a kitchen.
AGI would be extremely helpful in navigating clashes with aliens, but taking the time to make sure it's safe is very unlikely to make a difference to whether it's ready in time. Rationalists want AGI to be built, and they're generally very excited about it, e.g. many of them work at Anthropic. They just don't want a Move Fast and Break Things pace of development.
> The mugger argues back that for any low but strictly greater than 0 probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet.
This is a basic logic error. It ignores the very obvious fact that increasing the reward amount decreases the probability that it will be returned.
E.g. if the probability of the reward R being returned is (0.5/R) we get "a low but strictly greater than 0 probability", and for that probability there is a (different) finite reward that would make it rational to take the bet, but it's not R.
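Concretely (a toy sketch, using the 0.5/R example from above): the expected value of the mugger's offer is pinned at 0.5 no matter how large R gets, so inflating the promised reward buys nothing.

for R in (100.0, 1.0e6, 1.0e12)
    p = 0.5 / R              # probability of actually being paid back
    println((R, p * R))      # expected value stays 0.5 for every R
end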
This is even simpler and more stupid than the "proofs" that 0=1. It does not change my opinion that philosophers (and lesswrong) are idiots.
That's what the person being responded to meant - attempts to make human systems "rational" often involve simplifying dependent probabilities and presenting them as independent.
The rationalist community both frequently makes this reasoning error and is aware they frequently make this error, and coined this term to refer to the category of reasoning mistake.
> coined this term to refer to the category of reasoning mistake.
That's not at all what the Wikipedia article for it says. It presents it as an interesting paradox with several potential (and incorrect!) "remedies" rather than a category of basic logical errors.
The “quadrillion days of happiness” offered to a rational person gives away that such allegories are anthropomorphized just for the sake of presentation. For the sake of what the philosophers mean, you should probably imagine this as an algorithm running on a machine (no AGI).
It’s a mental tease, not a manual on how to react when faced with a mugger who forgot his weapon at home but has an interesting proposition to make.
Similarly the trolley problem isn’t really about real humans in that situation, or else the correct answer would always be “do nothing”.
It’s what the comment here [0] says. If you try to analyze everything purely rationally it will lead to dark corners of misunderstanding and madness.
> Similarly the trolley problem isn’t really about real humans in that situation, or else the correct answer would always be “do nothing”.
The correct answer in case of it being about real people, of course, is to switch immediately after the front bogie makes it through. This way, the trolley will derail and make a sharp turn before it runs over anyone, and stop.
The passengers will get shaken, but I don’t remember fatalities being reported when such things happened for real with real trams.
The scenario is set up by an evil philosopher though, so they can tie up the people arbitrarily close to the split in the rails, such that your solution doesn’t work, right?
In this case, it won’t matter, I’m afraid, which way the trolley goes as it will at least mangle both groups of people, and the only winning move is to try to move as many people as possible away from the track.
An Eastern European solution is to get a few buddies to cut the electrical wire that powers the trolleys and sell it for scrap metal, which works on all electrical trolleys. (After the trolley stops, it can be scavenged for parts you can sell as scrap metal, too.)
Made me chuckle. Funny 'cause it's true. About the trolley problem, if taken literally (people on tracks, etc.) pulling the lever exposes you to liability: you operated a mechanism you weren't authorized to use and for which you had no prior training, and you decided to kill one innocent person that was previously safe from harm.
Giving CPR is a very tame/safe version of the trolley problem and in some countries you're still liable for what happens after if you do it. Same when donating food to someone who might starve. Giving help has become a very spiny issue. But consciously harming someone when giving help in real life is a real minefield.
P.S. These philosophical problems are meant to force a decision from the options given. So assume the problem is just a multiple choice one, 2 answers. You don't get to write a third.
> P.S. These philosophical problems are meant to force a decision from the options given. So assume the problem is just a multiple choice one, 2 answers. You don't get to write a third.
I know about it. And yet I refuse to play the game. The problem is that even philosophers should be able to acknowledge that in the real universe, no box should be too big to prevent from thinking outside of it.
Otherwise we get people who conflate the map with the territory, like what this whole comment thread is about.
> The “quadrillion days of happiness” offered to a rational person gives away that such allegories are anthropomorphized just for the sake of presentation.
So what? It's still presented as if it's an interesting problem that needs to be "remedied", when in fact it's just a basic maths mistake.
If I said "ooo look at this paradox: 1 + 1 = 2, but if I add another one then we get 1 + 1 + 1 = 2, which is clearly false! I call this IshKebab's mugging.", you would rightly say "that is dumb; go away" rather than write a Wikipedia article about the "paradox" and "remedies".
> Similarly the trolley problem isn’t really about real humans in that situation, or else the correct answer would always be “do nothing”.
It absolutely wouldn't. I don't know how anyone with any morals could claim that.
Interestingly, the trolley problem is decided every day, and humanity does not change tracks.
There are people who die waiting for organ donors, and a single donor could match multiple people. We do not find an appropriate donor and harvest them. This is the trolley problem, applied.
I would pull the lever in the trolley problem and don't support murdering people for organs.
The reason is that murdering people for organs has massive second-order effects: public fear, the desire to avoid medical care if harvesting is done in those contexts, disproportionate targeting of the organ harvesting onto the least fortunate, etc.
The fact that forcibly harvesting someone’s organs against their will did not make your list is rather worrying. Most people would have moral hangups around that aspect.
Yea, it doesn’t seem quite right to say that the trolley problem isn’t about real people. I mean the physical mechanical system isn’t there, but it is a direct abstraction of decisions we make every day.
My actual words quoted below give one extra detail that makes all the difference, one that I see people silently dropping in a rush to reply. The words were aimed at someone taking these problems in a too literal sense, as extra evidence that they are not to be taken as such but as food for thought that has real life applicability.
> the trolley problem isn’t really about real humans in that situation
> We do not find an appropriate donor and harvest them. This is the trolley problem, applied.
I don't think that matches the trolley problem particularly well for all sorts of reasons. But anyway your point is irrelevant - his claim was that the trolley problem isn't about real humans, not that people would pull the lever.
Edit: never mind, I reread your comment and I think you were also agreeing with that.
> his claim was that the trolley problem isn't about real humans
Is it though? Let's look at the comment [0] written 8h before your reply:
> the trolley problem isn’t really about real humans in that situation
As in "don't take things absolutely literally like you were doing, because you'll absolutely be wrong". You found a way to compound the mistake by dropping the critical information then taking absolutely literally what was left.
It seems that you didn't understand the main point of the exposition. I'll summarize the ops comment a bit further.
Points 1 and 2 only explain how they are able to erroneously justify their absurd beliefs; they don't explain why they hold those beliefs.
Points 3 through 5 are the heart of the matter; egotistical and charismatic (to some types of people) leaders, open minded, freethinking and somewhat weird or marginalized people searching for meaning plus a way for them all to congregate around some shared interests.
TLDR: perfect conditions for one or more cults to form.
No, it’s the “rationality.” Well maybe the people too, but the ideas are at fault.
As I posted elsewhere on this subject: these people are rationalizing, not rational. They’re writing cliche sci-fi and bizarre secularized imitations of baroque theology and then reasoning from these narratives as if they are reality.
Reason is a tool not a magic superpower enabling one to see beyond the bounds of available information, nor does it magically vaporize all biases.
Logic, like software and for the same reason, is “garbage in, garbage out.” If even one of the inputs (premises, priors) is mistaken the entire conclusion can be wildly wrong. Errors cascade, just like software.
That's why every step needs to be checked with experiment or observation before a next step is taken.
I have followed these people since stuff like Overcoming Bias and LessWrong appeared and I have never been very impressed. Some interesting ideas, but honestly most of them were recycling of ideas I’d already encountered in sci-fi or futurist forums from way back in the 1990s.
The culty vibes were always there and it instantly put me off, as did many of the personalities.
“A bunch of high IQ idiots” has been my take for like a decade or more.
> As I posted elsewhere on this subject: these people are rationalizing, not rational.
That is sometimes true, but as I said in another comment, I think this is on the weaker end of criticisms because it doesn't really apply to the best of that community's members and the best of its claims, and in either case isn't really a consequence of their explicit values.
> Logic, like software and for the same reason, is “garbage in, garbage out.” If even one of the inputs (premises, priors) is mistaken the entire conclusion can be wildly wrong. Errors cascade, just like software.
True, but an odd analogy: we use software to make very important predictions all the time. For every Therac-25 out there, there's a model helping detect cancer in MRI imagery.
And, of course, other methods are also prone to error.
> That's why every step needs to be checked with experiment or observation before a next step is taken.
Depends on the setting. Some hypotheses are not things you can test in the lab. Some others are consequences you really don't want to confirm. Setting aside AI risk for a second, consider the scientists watching the Trinity Test: they had calculated that it wouldn't ignite the atmosphere and incinerate the entire globe in a firestorm, but...well, they didn't really know until they set the thing off, did they? They had to take a bet based on what they could predict with what they knew.
I really don't agree with the implicit take that "um actually you can never be certain so trying to reason about things is stupid". Excessive chains of reasoning accumulate error, and that error can be severe in cases of numerical instability (e.g. values very close to 0, multiplications, that kind of thing). But shorter chains conducted rigorously are a very important tool to understand the world.
> "um actually you can never be certain so trying to reason about things is stupid"
I didn't mean to say that, just that logic and reason are not infallible and have to be checked. Sure we use complex software to detect cancer in MRI images, but we constantly check that this software works by... you know... seeing if there's actual cancer where it says there is, and if there's not we go back around the engineering circle and refine the design.
Let's say I use the most painstaking, arduous, careful methods to design an orbital rocket. I take extreme care to make every design decision on the basis of physics and use elaborate simulations to verify that my designs are correct. I check, re-check, and re-check. Then I build it. It's never flown before. You getting on board?
Obviously riding on an untested rocket would be insane no matter how high-IQ and "rational" its engineers tried to be. So is revamping our entire political, economic, or social system on the basis of someone's longtermist model of the future that is untestable and unverifiable. So is banning beneficial technologies on the basis of hypothetical dangers built on hypothetical reasoning from untestable priors. And so on...
... and so is, apparently, killing people, because reasons?
>They have, in the past, responded to criticism with statements to the effect of "anyone who would criticize us for any reason is a bad person who is lying to cause us harm".
Which leader said anything like that? Certainly not Eliezer or the leader of the Center for Applied Rationality (Anna Salamon) or the project lead of the web site lesswrong.com (Oliver Habryka)!
Hello, can confirm, criticism is like the bread and butter of LW, lol. I have very extensively criticized tons of people in the extended rationality ecosystem, and I have also never seen anyone in any leadership position react with anything like this quote. Seems totally made up.
> I feel like it should have been obvious to anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside and has no ethics* and will hurt you if you trust them and will break rules to do so; but in case it wasn't obvious, consider the point made explicitly.
> (Let not this post be construed as casting aspersions on any of the many, many people who've had honest disagreements with me or us, including loud or heated or long ones, that they conducted by debates about ideas rather than insinuations about people.)
Creepy. But after people argued with Eliezer for a considerable time, he made 11 updates. The result was less shockingly bad:
> There's a certain cluster of behaviors and attitudes, which includes things like "getting excited about opportunities to make fun of furries" or uttering phrases like "group X is a bunch of neckbeards". Notably, this is not the same cluster as "strongly and vocally disagreeing with group X about idea Y". Call the first cluster "frobnitz".
> I feel like it should have been obvious to anyone at this point that anybody who openly frobnitzes me, or even more so frobnitzes this community, or even more so still frobnitzes a genuine cinnamon-roll-grade Level Ten Arch-Bodhisattva like Scott Alexander or Scott Aaronson, probably lacks an internal commitment to ordinary interpersonal ethical injunctions and will hurt you if you trust them and will break rules to do so. But in case it wasn't obvious, consider the point made explicitly. (Subtext: Topher Brennan. Do not provide any link in comments to Topher's publication of private emails, explicitly marked as private, from Scott Alexander.)
If this is the only evidence, the OP's allegation is exaggerated but not 'totally made up'.
I'm pretty sure I remember a post on Eliezer's Facebook from the early 2010s. I have definitely witnessed some... well - you know, 'culty' vibes and social pressure around Less Wrong.
> Rationalists, by tending to overly formalist approaches,
But they don't apply formal or "formalist" approaches, they invoke the names of formal methods but then extract from them just a "vibe". Few to none in the community know squat about actually computing a posterior probability, but they'll all happily chant "shut up and multiply" as a justification for whatever nonsense they instinctively wanted to do.
> Precision errors in utility calculations that are numerically-unstable
Indeed, as well as just ignoring that uncertainties about the state of the world or the model of interaction utterly dominate any "calculation" that you could hope to do. The world at large does not spend all its time in lesswrongian ritual multiplication or whatever... but this is not because they're educated stupid. It's because in the face of substantial uncertainty about the world (and your own calculation processes) reasoning things out can only take you so far. A useful tool in some domains, but not a generalized philosophy for life... The cognitive biases they obsess about and go out of their way to eschew are mostly highly evolved harm mitigation heuristics for reasoning against uncertainty.
> that is particularly susceptible to internally-consistent madness
It's typical for cults to cultivate vulnerable mind states for cult leaders to exploit for their own profit, power, sexual fulfillment, etc.
A well regulated cult keeps its members' mental illness within a bound that maximizes the benefit for the cult leaders in a sustainable way (e.g. not going off and murdering people, even when doing so is the logical conclusion of the cult philosophy). But sometimes people are won over by a cult's distorted thinking but aren't useful for bringing the cult leaders their desired profit, power, or sex.
> But they don't apply formal or "formalist" approaches, they invoke the names of formal methods but then extract from them just a "vibe".
I broadly agree with this criticism, but I also think it's kind of low-hanging. At least speaking for myself (a former member of those circles), I do indeed sit down and write quantitative models when I want to estimate things rigorously, and I can't be the only one who does.
> Indeed, as well as just ignoring that uncertainties about the state of the world or the model of interaction utterly dominate any "calculation" that you could hope to do.
This, on the other hand, I don't think is a valid criticism nor correct taken in isolation.
You can absolutely make meaningful predictions about the world despite uncertainties. A good model can tell you that a hurricane might hit Tampa but won't hit New Orleans, even though weather is the textbook example of a medium-term chaotic system. A good model can tell you when a bridge needs to be inspected, even though there are numerous reasons for failure that you cannot account for. A good model can tell you whether a growth is likely to become cancerous, even though oncogenesis is stochastic.
Maybe a bit more precisely, even if logic cannot tell you what sets of beliefs are correct, it can tell you what sets of beliefs are inconsistent with one another. For example, if you think event X has probability 50%, and you think event Y has probability 20% conditional on X, it would be inconsistent for you to believe event Y has a probability of less than 10%.
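A worked version of that check, as a toy sketch (the 50% and 20% figures are just the ones above): by the law of total probability, P(Y) = P(Y|X)·P(X) + P(Y|not X)·P(not X), and the second term can't be negative.

p_x    = 0.5               # your probability for X
p_y_gx = 0.2               # your probability for Y given X
println(p_y_gx * p_x)      # 0.1: any claimed P(Y) below this is inconsistent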
> The world at large does not spend all its time in lesswrongian ritual multiplication or whatever... but this is not because they're educated stupid
When I thought about founding my company last January, one of the first things I did was sit down and make a toy model to estimate whether the unit economics would be viable. It said they would be, so I started the company. It is now profitable with wide operating margins, just as that model predicted it would be, because I did the math and my competitors in a crowded space did not.
Yeah, it's possible to be overconfident, but let's not forget where we are: startups win because people do things in dumb inefficient ways all the time. Sometimes everyone is wrong and you are right, it's just that that usually happens in areas where you have singularly deep expertise, not where you were just a Really Smart Dude and thought super hard about philosophy.
What you describe (doing basic market analysis) is pretty much unrelated to 'rationality.'
'Rationality' hasn't really made any meaningful contributions to human knowledge or thinking. The things you describe, are all products of scientists and statisticians, etc...
Bayesian statistics is not rationality. It is just Bayesian statistics... And it is mathematicians who should get the credit not less wrong!!
The rationalist movement, if it is anything, is a movement defined by the desire of its members to perceive the world accurately and without bias. In that sense, using a variety of different tools from different academic and intellectual disciplines (philosophy, economics, mathematics, etc) should be expected. I don't think any of the major rationalist figures (Yudkowsky, Julia Galef, Zvi Mowshowitz) would claim any credit for developing these ideas; they would simply say they've used and helped popularize a suite of tools for making good decisions.
Perhaps I overemphasized it, but a personal experience on that front was key to realizing that the lesswrong community was in aggregate a bunch of bullshit sophistic larpers.
In short, some real world system had me asking a simply posed probability question. I eventually solved it. I learned two things as a result: one (which I kinda knew, but didn't 'know' before) is that the formal answer to even a very simple question can be extremely complicated (e.g. asking for the inverse of a one-line formula turning into half a page of extremely dense math), and two, that many prominent members of the lesswrong community were completely clueless about the practice of the tools they advocate, not even knowing the most basic search keywords or realizing that there was little hope of most of their fans ever applying these tools to all but the absolute simplest questions.
> You can absolutely make meaningful predictions about the world despite uncertainties. A good model can tell you that a hurricane might
Thanks for the example though-- reasoning about hurricanes is the result of decades of research by thousands of people, the inputs involve data from thousands of weather stations including floating buoys, multiple satellites, and aircraft that fly through the storms to get data. The calculations include numerous empirically derived constants that provide averages for unmeasurable quantities the models need, plus ad hoc corrections to fit model outputs to previously observed behavior.
And the results, while extremely useful, are vague and not particularly precise-- there are many questions they can't answer.
While it is a calculation, it is very much an example of empiricism being primary over reason.
And if someone is thinking that our success with hurricane modeling tells them anything about their ability to 'reason things out' from their own life, without decades of experience, data collection, satellite monitoring, and teams of PhDs, then they're just mistaken. It's just not comparable.
Reasoning things out, with or without the aid of data, can absolutely be of use. But that utility is bounded by the quality of our data, our understanding of the world, errors in our reasoning process, etc. And people do engage in that level of reasoning all the time. But it's not more primary than it is because of the significant and serious limitations.
I suspect that the effort required to calculate things out also comes with a big risk of overconfidence. Like, stick your thumb in the air, make some rough cash flow calculations, etc. That's a good call and probably captures the vast majority of predictive power for some new business. But if instead you make some complicated multi-agent computational model of the business, it might only have a little bit more predictive power but a lot more risk of following it off a cliff when experience is suggesting the predictions were wrong.
> people do things in dumb inefficient ways all the time
Or, even more often, they're optimizing for a goal different than yours, one that might not even be legible to you!
> just as that model predicted it would be, because I did the math and my competitors in a crowded space did not.
or so you think! Often organizations fail to do "obvious" things because there are considerations that just aren't visible or relevant to outsiders, rather than any failure of reasoning.
For example, I've been part of an org that could have pivoted to a different product and made more money... but doing so would have meant laying off a bunch of people that everyone really liked working with. The extra money wasn't worth it. Whoever eventually scooped up that business might have thought they were smart for seeing it where we didn't, but if so they'd be wrong about why we didn't do it. We saw the opportunity and just had different objectives.
I wouldn't for a moment argue that collections of people don't do stupid things, they do-- but there is a lot less stupid than you might assume on first analysis.
> it's just that that usually happens in areas where you have singularly deep expertise, not where you were just a Really Smart Dude and thought super hard about philosophy
We agree completely there-- but it's really about the data and expertise. Sure, you have to do the thinking to connect the dots, and then have the courage and conviction (or hunger) to execute on it. You may need all three of data, expertise, and fancy calculations. But the third is sometimes optional and the former two are almost never optional and usually can only be replaced by luck, not 'reasoning'.
This is an excellent explanation of the flaws inherent in the rationalist philosophy. I’m not deeply involved in the community but it seems like there’s very little appreciation of the limits of first-principles reasoning. To put it simply, there are ideas and phenomena that are strictly inaccessible to pure logical deduction. This also infects their thoughts about AI doom. There’s very little mention of how the AI will collect data or marshal physical resources. Instead it just reads physics textbooks then infers an infallible plan to amass infinite power. It’s worth noting that the AGI god’s abilities seem to align pretty well with Yudkowsky’s conception of why he is a superior person.
> A good model can tell you that a hurricane might hit Tampa but won't hit New Orleans
On the other hand we have no model to predict that hurricane a year in advance and tell us which city it’ll hit.
Yet these people believe they can rationalise about far more unpredictable events far further in the future.
That is, I agree that they completely ignore the point at which uncertainties utterly dominate any calculation you might try to do and yet continue to calculate to a point of absurdity.
I noticed years ago too that AI doomers and rationalist types were very prone to (infinity * 0 = infinity) types of traps, which is a fairly autistic way of thinking. Humanity long time ago decided that infinity * 0 = 0 for very good practical reasons.
> Humanity long time ago decided that infinity * 0 = 0
I'm guessing you don't mean this in any formal mathematical sense; without context, infinity multiplied by zero isn't formally defined. There could be various formulations and contexts where you could define / calculate something like infinity * zero to evaluate to whatever you want (e.g. define f(x) := C·x and g(x) := 1/x; what does f(x) * g(x) evaluate to in the limit as x goes to infinity? C. And we can interpret f(x) as going to infinity while g(x) goes to zero, so we can use that to justify writing "infinity * 0 = C" for an arbitrary C...)
So, what do you mean by "infinity * 0 = infinity" informally? That humans regard the expected value of (arbitrarily large impact) * (arbitrarily small probability) as zero?
It's true in the informal sense. Normal people, when considering an "infinitely" bad thing happening (being killed, losing their home, etc) with a very low probability will round that probability to zero ("It won't happen to ME"), multiply the two and resultantly spend zero time worrying about it, planning for it, etc.
For instance, a serial killer could kill me (infinitely bad outcome) but the chance of that happening is so tiny I treat it as zero, and so when I leave my house every day I don't look into the bushes for a psycho murderer waiting there for me, I don't wear body armor, I am unarmed, I don't even think about the chance of being killed by a serial killer. For all practical intents and purposes I treat that possibility as zero.
Important to remember that different people gave different thresholds at which they round to zero. Some people run through dark parking garages and jump into their car because they don't round the risk of a killer under their car slashing their achilles tendons down to zero. Some people carry a gun everywhere they go, because they don't round the risk of encountering a mass shooter to zero. Some people invest their time and money pursuing spaceflight development because they don't round a dino-killing asteroid to zero. A lot of people don't round the chance of wrecking a motorcycle to zero, and therefore don't buy one even though they look like fun.
The lesswrong/rationalist people have a tendency to have very low thresholds at which they'll start to round to zero, at least when the potential harm would be meted out to a large portion of humanity. Their unusually low threshold leads them to very unusual conclusions. They take seriously possibilities which most people consider to be essentially zero, giving rise to the perception that rationalists don't think that infinity * 0 = 0.
> It's true in the informal sense. Normal people, when considering an "infinitely" bad thing happening (being killed, losing their home, etc) with a very low probability will round that probability to zero ("It won't happen to ME"), multiply the two and resultantly spend zero time worrying about it, planning for it, etc.
Is this the kind of thing that is part of the Less Wrong cult? I see this "multiply" word being used which I understand is part of the religious technology of LW. It all seems very sophomoric. I don't know what talking about "Infinity * 0" in an informal sense means. What I can tell you is that "Normal people" are not multiplying "infinitely bad" with a "very low probability rounded to 0". For one, this is conflating multiple senses of infinite. I'm not sure anyone thinks bad outcomes are "infinitely bad", maybe in a schoolyard silly-talk kind of way; they just think it is bad. I think that's basically what Less Wrong is, a lot of fancy words and Internet memes and loose talk about AI all strewn together in a "goth for adults" or some other kind of nerd social club.
> "I don't know what talking about "Infinity * 0" means in an informal sense means"
I'm not a rationalist; I'm only using their language to make the mapping to their ideology simpler. A comet striking earth would be "infinitely bad". The chance of that happening is, as far as I'm concerned, zero (it's not zero, but I round it down). If you multiply the infinitely bad outcome by the zero percent chance of it happening, your result is that you shouldn't waste your time and emotional resources worrying about it.
Normal people don't phrase this kind of reasoning with math terminology as rationalists do, but that terminology isn't where the rationalists go wrong. Where the rationalists go wrong isn't the multiplication, it's the failure to ignore very unlikely outcomes as normal people would. They think themselves too rational to ignore the possibility of unlikely things, but ironically it is the normal people, who don't spend their time dwelling on extremely unlikely bullshit, who have the more rational approach to life.
The rationalists spend hours discussing scenarios like "What if a super AI manipulates people into engineering a super virus that wipes out humanity? Its technically possible; there's no law of physics which prevents this!", to which a normal person would respond by wondering if these people are on drugs, why would they spend so much time worrying about something which isn't going to happen?
Yeah pretty much. If I were to write it out further: "a near-infinitely bad thing could happen, but it has a near-infinitesimal chance of happening, so what amount of finite resources should you spend to prevent it?". The numbers are a probability and a magnitude of effect. It really is infinity * epsilon, but that would confuse more people, so I decided to say infinity * 0.
I was very explicit when I said "humanity decided". It doesn't matter which result the formal math system actually gives either way; it was chosen out of practicality that, in this kind of philosophical issue, the more pragmatic thing was to axiomatically choose "infinity * 0 = 0" when faced with things like this. The rationalists, in a more meta/broader sense, have decided that infinity * epsilon = infinity even if they say otherwise on the surface. Their actions show they believe the other direction.
In math infinity * epsilon is indeterminate until you decide what the details of infinity & epsilon are, which I find quite fitting.
> That humans regard the expected value of (arbitrarily large impact) * (arbitrarily small probability) as zero?
There are many arguments that go something like this: We don't know the probability of <extinction-level event>, but because it is considered a maximally bad outcome, any means to prevent it are justified. You will see these types of arguments made to justify radical measures against climate change or AI research, but also in favor of space colonization.
These types of arguments are "not even wrong", they can't be mathematically rigorous, because all terms in that equation are undefined, even if you move away from infinities. The nod to mathematics is purely for aesthetics.
not exactly a rationalist thing, but a lot of bay-area people will tell you that exponential growth exists, and it's everywhere
i can't think of any case where exponential growth actually happens, though. exponential decay and logistic curves are common enough, but not exponential growth
The rats I hang out with know the difference between exponential and logistic just fine.
Hmm.
Not sure if it matters, but I'd note logistic curves can be hard to distinguish from an exponential for long enough that the difference isn't always very consequential — a nuke exploding doesn't keep doubling in power every few microseconds forever, but for enough doubling periods that cities still get flattened.
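A quick numeric sketch of that (toy numbers of my own; K is the carrying capacity, r the growth rate): a logistic curve tracks the matching exponential closely until it gets within sight of its ceiling.

K, r = 1.0e6, 1.0
expo(t)     = exp(r * t)                         # plain exponential, starts at 1
logistic(t) = K / (1 + (K - 1) * exp(-r * t))    # same start, saturates at K
for t in 0:2:12
    println((t, round(expo(t), digits=1), round(logistic(t), digits=1)))
end
# The relative gap is roughly expo(t)/K, so the two are near-indistinguishable
# until the exponential is a sizable fraction of the ceiling.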
They actively look for ways for infinity to happen. Look at Eli's irate response to Roko's basilisk. To him even being able to imagine that there is a trap means that it will necessarily be realised.
I've seen "rationalist" AI doomers who say things like "given enough time technology will be invented to teleport you into the future where you'll be horifically tortured forever".
It's just extrapolation, taken to the extreme, and believed in totally religiously.
> Humanity long time ago decided that infinity * 0 = 0 for very good practical reasons.
Among them being that ∞ × 0 = ∞ makes no mathematical sense. Multiplying literally any other number by zero results in zero. I see no reason to believe that infinity (positive or negative) would be some exception; infinity instances of nothing is still nothing.
The problem is that infinity is neither a real nor a complex number, nor an element of any algebraic field, and the proposition that "x * 0 = 0" only holds if x is an element of some algebraic field. It is a theorem that depends on the field axioms.
The real numbers can be extended to include two special elements ∞ and -∞, but this extension does not constitute a field, and the range of expressions in which these symbols make sense is very strictly and narrowly defined (see Rudin's PMA, Definition 1.23):
(a) If x is real then
x + ∞ = +∞, x - ∞ = -∞, x / +∞ = x / -∞ = 0.
(b) If x > 0 then x * (+∞) = +∞, x * (-∞) = -∞.
(c) If x < 0 then x * (+∞) = -∞, x * (-∞) = +∞.
The extended real number system is most commonly used when dealing with limits of sequences, where you may also see such symbols appear:
3.15 Definition Let {sₙ} be a sequence of real numbers with the following property: For every real M there is an integer N such that n ≥ N implies sₙ ≥ M. We then write
sₙ ⟶ +∞.
In no other contexts do the symbols ∞ and -∞ make any sense. They only make sense according to the definitions given.
It's usually the case that when you see people discussing infinity, they are actually talking about sequences of numbers that are unbounded above (or below). The expression "sₙ ⟶ +∞" is meant to denote such a sequence, and the definitions that extend the real number line (as in Definition 1.23 above) are used to do some higher-level algebra on limits of sums and products of sequences (e.g. the limit of sₙ + tₙ as n becomes "very large" for two sequences {sₙ}, {tₙ}) to shortcut around the lower-level formalisms of epsilons and neighborhoods of limit points in some metric space, which is how the limits of sequences are rigorously defined.
In no case do the symbols ∞ and -∞ refer to actual numbers. They are used in expressions that refer to properties of certain sequences once you look far enough down the sequence, past its first, second, hundredth, umpteenth, "Nth" terms, and so on.
Thus when you see people informally and loosely use expressions such as "infinity times zero" they're not actually multiplying two numbers together, but rather talking about the behavior of the product of two sequences as you evaluate terms further down both sequences; one of which is unbounded, while the other can be brought arbitrarily close to (but not necessarily equal to) zero. You will notice that no conclusions can be drawn regarding the behavior of such a product in general, whether referencing the definitions comprising the extended real number system or the lower-level definitions in terms of epsilons and neighborhoods of limit points.
So much confusion today comes down to people confidently using words, symbols, and signs they don't understand the definitions nor meanings of. Sometimes I wonder if this is the real esoteric meaning of the ancient Tower of Babel mythos.
Infinity doesn't need to be in some "algebraic field" for it to be patently true that an infinite amount of nothing is still nothing, and that adding zero to itself over and over again for an infinitely long time will never give you a result other than zero. It's only impossible to define if you overthink it, and/or maintain a needlessly narrow definition of what a "number" is.
Or, if you really insist on speaking in mathematician-ese, an infinite series of zero is zero, and a zero-bounded summation is zero regardless of the summand:
x · y ≡ Σᵢ₌₁ʸ x = x + x + … + x (y times) ≡ Σᵢ₌₁ˣ y = y + y + … + y (x times)
julia> i = 0
0
julia> while true
println(i)
global i += 0
end
0
0
0
0
0
0
0
0
0
...and on and on until the heat death of the universe or you hit Ctrl-C.
Either way, seems pretty straightforward to define if you have a clear definition of what multiplication is in the first place (and what either zero or infinite iterations of that definition will produce).
Ok, let's assume you are correct and that ∞ · 0 = 0. Consider then the two sequences sₙ = n, tₙ = 1/n.
By Definition 3.15 as provided in my last post, sₙ ⟶ +∞, and you will have to take it for granted that tₙ ⟶ 0 [0]. Intuitively we can see that the terms of {sₙ} are 1, 2, 3, ... tending to +∞; for {tₙ} we have 1, 1/2, 1/3, ... tending to zero, for progressively larger values of n.
Now I ask what happens if we multiply the "infinite'th" terms of both sequences together. The first few terms of this product would be 1 · 1, 2 · 1/2, 3 · 1/3, and so on; I ask what the value x is in the limit sₙ · tₙ ⟶ x as we evaluate further and further "nth" terms of both sequences.
You may have observed from the first three terms evaluated that sₙ · tₙ = n(1/n) = 1. Thus, as we continue to increase the value of n, it's always the case that sₙ · tₙ ⟶ 1, because the product is constant, irrespective of n; we've "cancelled it out."
The limit of the product is the product of the limits [1]; that is, sₙ · tₙ ⟶ +∞ · 0, as we first established that sₙ ⟶ +∞, tₙ ⟶ 0.
If we thus take your supposition that +∞ · 0 = 0 for granted, we obtain sₙ · tₙ ⟶ 0, which contradicts our previous result that sₙ · tₙ ⟶ 1.
Thus we can either dispense with the cited established theorems of analysis used to deduce that sₙ · tₙ ⟶ 1, or conclude that the supposition +∞ · 0 = 0 must be false.
It might be the case that Σ(∞, i = 1) 0 = 0, but you can't extend this to conclude +∞ · 0 = 0 in general. Lots of intuitions from informal mathematics and even calculus start to break down once you examine the lower-level "machine code" of proof and analysis, especially once you start talking about concepts like infinity.
> Now I ask what happens if we multiply the "infinite'th" terms of both sequences together.
In that case, you would've reached their respective limits, and you're back to adding one of those limits into itself an other-limit number of times. If sₙ · tₙ ⟶ 1, then that only holds true if tₙ hasn't actually reached 0.
> Thus we can either dispense with the cited established theorems of analysis used to deduce that sₙ · tₙ ⟶ 1
You don't need to do that. You just need to accept that zero is just as much of a mathematical special case as infinity - unsurprisingly, since it's the inverse of infinity and vice versa.
> It might be the case that Σ(∞, i = 1) 0 = 0, but you can't extend this to conclude +∞ · 0 = 0 in general.
Sure you can, unless you've got some other definition of multiplication that's impossible to express as self-summation.
Even if you go with the alternative definition of multiplication as a scaling operation (wherein you're computing m × n by taking the slope from (x=0,y=0) to (x=1,y=m) and then looking up y where x=n), if m is zero then the line being drawn never stops being horizontal, and if n is zero then you never leave (0,0) in the first place. Doesn't matter if the other factor is infinitely far along either the x or y axis; you're still ending up with zero no matter how hard you try and fight it.
> Lots of intuitions from informal mathematics and even calculus start to break down once you examine the lower-level "machine code" of proof and analysis, especially once you start talking about concepts like infinity.
Sure, but in this case, it's the intuition that multiplying something by its inverse (a.k.a. dividing something by itself) is always 1 that breaks down, not the above-verifiable and inescapable fact that multiplying something by zero is always zero. 0 ÷ 0 = n looks like it should be correct for any value of n (incl. n = 1), since multiplying both sides by zero to eliminate that divide-by-zero will always produce a correct equation. But since m ÷ n ≡ m × (1/n), if m is zero then everything on the RHS must be zero, because of that inescapable nature of nothingness - thus 0 ÷ 0 = 0 × (1/0) = 0, with all other possible alternatives having been rendered impossible.
> you're back to adding one of those limits into itself an other-limit number of times.
> some other definition of multiplication that's impossible to express as self-summation.
Ok. What happens if I multiply a number by pi? What does it mean to add something to itself, pi times?
> If sₙ · tₙ ⟶ 1, then that only holds true if tₙ hasn't actually reached 0.
I mean... it is in fact the case that tₙ never actually reaches zero; otherwise, if 1/n = 0 for some n, then by multiplying both sides by n we obtain 1 = 0.
What's meant by tₙ ⟶ 0 is that any neighborhood centered about 0, of any radius (call the radius "epsilon"), contains all but finitely many terms of the sequence {tₙ}: for every epsilon > 0 there is an N such that |tₙ| < epsilon whenever n ≥ N.
To hammer the point that sₙ · tₙ ⟶ 1 home, and since you are fond of using a computer to perform arithmetic (note: not prove mathematical statements), here's what computers have to say about the limit of n · (1/n): https://www.wolframalpha.com/input?i=limit+as+n-%3Einfinity+...
> You just need to accept that zero is just as much of a mathematical special case as infinity - unsurprisingly, since it's the inverse of infinity and vice versa.
> the inverse of infinity
You again throw around words like "inverse" whose meaning you don't understand. Do you mean a multiplicative inverse, where a number and its multiplicative inverse yield the multiplicative identity, in which case +∞ · 0 = 1? Or an additive inverse that yields the additive identity, in which case +∞ + 0 = 0? Or some other pseudomathematical definition of "inverse" pulled out of a hat, like your definitions of +∞ · 0?
> if m is zero then the line being drawn never stops being horizontal
Drawing pictures is different from putting together a formal, airtight proof in first-order logic that can be (in principle) machine-verified. Maybe I'll make an exception for compass-and-straightedge proofs, but that's not what you're presenting here.
Rudin was published in 1953; there are probably very good reasons why this text has withstood refutation for over 70 years. Maybe you can rise to the task; publish a paper with your novel number system in which +∞ · 0 = 0 and 0 ÷ 0 = 0 and wait for your Fields Medal in the mail. Maybe you can collaborate with Terrence Howard and get a spot on Joe Rogan.
> Ok. What happens if I multiply a number by pi? What does it mean to add something to itself, pi times?
You add it to itself 3 times, then shift the decimal point and repeat with 1, then shift the decimal point and repeat with 4, and so on with each digit of π. 1 × π = 1 + 1 + 1 + 0.1 + 0.01 + 0.01 + 0.01 + 0.01 + 0.001 + 0.0001 + 0.0001 + 0.0001 + 0.0001 + 0.0001 and so on forever.
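For what it's worth, that digit-by-digit reading can be written out as a short program. This is a sketch of my own, not anything either commenter posted, and it uses a truncated decimal expansion of π purely for illustration (the real process never terminates):

    # Multiply x by π by summing digit-wise contributions, as described above.
    # The expansion is cut off at 15 digits; "and so on forever" is carried by
    # letting the digit string run on.
    function digitwise_pi_times(x)
        pi_digits = "3.14159265358979"
        total = 0.0
        place = 1.0
        for c in pi_digits
            c == '.' && continue
            d = c - '0'              # the digit at this decimal place
            for _ in 1:d
                total += x * place   # add x (shifted to this place) d times
            end
            place /= 10
        end
        return total
    end

    digitwise_pi_times(1.0)   # ≈ 3.14159265358979

Each digit contributes finitely many shifted copies of x; the "forever" lives in the digit sequence, not in any single addition.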
> To hammer the point that sₙ · tₙ ⟶ 1 home
That point doesn't need hammering. sₙ · tₙ ⟶ 1 can absolutely be true while you haven't yet reached zero. That doesn't mean it stays true in the event that you do manage to reach zero; it can't, because n × 0 = 0 for all values of n.
> Do you mean a multiplicative inverse, where a number and its multiplicative inverse yield the multiplicative identity, in which case +∞ · 0 = 1?
You obviously already know that's what I meant, since that's exactly what I described further down - including how ∞ × 0 ≠ 1 because the multiplicative identity breaks down when one of the factors is zero, specifically because having zero of something will always produce zero no matter what that something is.
> Or some other pseudomathematical definition of "inverse" pulled out of a hat, like your definitions of +∞ · 0?
If you're seriously calling multiplication-as-summation pseudomathematics, then you're in no position to assess whether or not I "don't understand" the meanings of words.
I've been nothing but civil toward you, and you've been nothing but condescending toward me. That normally wouldn't be a problem (condescension is par for the course on the Internet), but if you're going to be condescending, the least you can do is not be blatantly wrong in the process.
> Drawing pictures is different from putting together a formal, airtight proof in first-order logic that can be (in principle) machine-verified. Maybe I'll make an exception for compass-and-straightedge proofs, but that's not what you're presenting here.
That's exactly what I'm presenting here (since apparently you believe adding numbers together is a spook). You don't even need a concept of numbers to see plain as day that any multiplication wherein one of the factors is zero will always be zero.
> Rudin was published in 1953; there are probably very good reasons why this text has withstood refutation for over 70 years. Maybe you can rise to the task; publish a paper with your novel number system in which +∞ · 0 = 0 and 0 ÷ 0 = 0 and wait for your Fields Medal in the mail. Maybe you can collaborate with Terrence Howard and get a spot on Joe Rogan.
You know what? Maybe I will. And I'm willing to bet you'll find some other pedantic reason to be a condescending prick when that happens.
Last word's yours if you want it. I have better things to do than argue with people engaging in bad faith.
There's no need for me to continue engaging you with formal mathematical arguments when you reply with the mathematical equivalent of climate change denialism or vaccine conspiracy theory, and with uneducated statements that are "not even wrong" [0]. Instead I will just refer you to expert opinions on the topic, though at this point I doubt that your level of mathematical literacy is sufficient to understand any of this subject matter.
I'm interested in #4; is there anywhere you know of to read more about that? I don't think I've seen it described except obliquely, e.g. in sayings about the relationship between genius and madness.
I don't, that one's me speaking from my own speculation. It's a working model I've had for a while about the nature of a lot of kinds of mental illness (particularly my own tendencies towards depression), which I guess I should explain more thoroughly! This gets a bit abstract, so stick with me: it's a toy model, and I don't mean it to be definitive truth, but it seems to do well at explaining my own tendencies.
-------
So, toy model: imagine the brain has a single 1-dimensional happiness value that changes over time. You can be +3 happy or -2 unhappy, that kind of thing. Everyone knows when you're very happy you tend to come down, and when you're very sad you tend to eventually shake it off, meaning that there is something of a tendency towards a moderate value or a set-point of sorts. For the sake of simplicity, let's say a normal person has a set point of 0, then maybe a depressive person has a set point of -1, a manic person has a set point of +1, that sort of thing.
Mathematically, this is similar to the equations that describe a damped spring. If left to its own devices, a spring will tend to its equilibrium value, either exponentially (if overdamped) or with some oscillation around it (if underdamped). But if you're a person living your life, there are things constantly jostling the spring up and down, which is why manic people aren't crazy all the time and depressed people have some good days where they feel good and can smile. Mathematically, this is a spring with a forcing function - as though it's sitting on a rough train ride that is constantly applying "random" forces to it. Rather than x'' + cx' + k(x - x₀) = 0 (with x₀ the set point), you've got x'' + cx' + k(x - x₀) = f(t) for some external forcing function f(t), where f(t) critically does not depend on x or on the individual internal dynamics involved.
These external forcing functions tend to be pretty similar among people of a comparable environment. But the internal equilibria seem to be quite different. So when the external forcing is strong, it tends to pull people in similar directions, and people whose innate tendencies are extreme tend to get pulled along with the majority anyway. But when external forcing is weak (or when people are decoupled from its effects on them), internal equilibria tend to take over, and extreme people can get caught in feedback loops.
If you're a little more ML-inclined, you can think about external influences like a temperature term in an ML model. If your personal "model" of the world tends to settle into a minimum labeled "completely crazy" or "severely depressed" or the like, a high "temperature" can help jostle you out of that minimum even if your tendencies always move in that direction.
Basically, I think weird nerds tend to have low "temperature" values, and tend to settle into their own internal equilibria, whether those are good, bad, or good in some cases and bad in others (consider all the genius mathematicians who were also nuts). "Normies", for lack of a better way of putting it, tend to have high temperature values and live their lives across a wider region of state space, which reduces their ability to wield precision and competitive advantage but protects them from the most extreme failure-modes as well.
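If it helps, here's roughly what that toy model looks like as code. This is just my own sketch of the description above; the parameter values, the Gaussian "jostling", and the crude Euler integration are all arbitrary choices for illustration, not part of the original model:

    using Random

    # Damped spring pulled toward an internal set point x0, plus random external
    # forcing whose strength plays the role of "temperature":
    #     x'' + c·x' + k·(x - x0) = f(t)
    function simulate(x0, temperature; k=1.0, c=0.5, dt=0.01, steps=100_000)
        x, v = 0.0, 0.0
        for _ in 1:steps
            f = temperature * randn()          # external jostling, f(t)
            a = -k * (x - x0) - c * v + f      # acceleration from the ODE
            v += a * dt
            x += v * dt
        end
        return x
    end

    simulate(-1.0, 0.1)   # low temperature: ends up very close to the set point (-1)
    simulate(-1.0, 5.0)   # high temperature: same pull, much wider spread of outcomes

Run it a few times at each temperature: the low-temperature trajectories cluster tightly around the internal equilibrium, while the high-temperature ones wander over a much wider region of state space - the nerds-vs-normies contrast in the paragraph above.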
>These external forcing functions tend to be pretty similar among people of a comparable environment. But the internal equilibria seem to be quite different. So when the external forcing is strong, it tends to pull people in similar directions, and people whose innate tendencies are extreme tend to get pulled along with the majority anyway. But when external forcing is weak (or when people are decoupled from its effects on them), internal equilibria tend to take over, and extreme people can get caught in feedback loops.
Yeah, this makes sense, an isolated group can sort of lose the "grounding" of interacting with the rest of society and start floating off in whatever direction, as long as they never get regrounded. When you say feedback loops, do you mean obsessive tendencies tending to cause them to focus on and amplify a small set of thoughts/beliefs, or something else?
I like the ML/temperature analogy, it's always interesting watching kids and thinking in that vein, with some kids at a super high temp exploring the search space of possibilities super quickly and making tons of mistakes, and others who are much more careful. Interesting point on nerds maybe having lower temp/converging more strongly/consistently on a single answer. And I guess artist types would be sort of the opposite on that axis?
A lot of rationalists that go deep are on the autistic spectrum. Their feedback loops are often classic autistic thought traps of people who end up "committing to the bit". Add anxiety to it and you get autistic style rumination loops that go nuts.
Edit: Funny enough, when I wrote "autistic thought traps", I thought I just made it up to describe something, but it is common terminology. An AI summary of what they are:
Autistic people may experience thought traps, which are unhelpful patterns of thinking that can lead to anxiety and stress. These traps can include catastrophizing, all-or-nothing thinking, and perseverative cognition.
Catastrophizing:
- Jumping to the worst-case scenario
- Imagining unlikely or improbable scenarios
- Focusing on negative aspects of a situation
- Having difficulty letting go of negative thoughts

All-or-nothing thinking:
- Categorizing people or things as entirely good or bad
- Having a tendency to think in black and white
A lot of people say this, but I think it's the wrong word in an important way.
I'll give you P(autistic|rationalist) > P(autistic), but beware the base rate fallacy. My guess is you're focussing on a proxy variable.
To show some important counter-examples: Temple Grandin, famously autistic and a lot of people's idea what autism means - not a rationalist in the sense you mean. Scott Alexander - fairly central example of the rationalist community, but not autistic (he's a psychiatrist so I trust him on that).
EDIT: also P(trans|rationalist) > P(trans), but P(rationalist|trans) I'd say is fairly small. Base rate fallacy and something something Bayes. Identifying these two groups would definitely be a mistake.
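To spell out the base-rate arithmetic with some invented numbers (purely illustrative, not prevalence estimates):

    # Bayes' rule with made-up numbers: even a strong enrichment of trait T among
    # rationalists can leave P(rationalist | T) small if rationalists are rare.
    p_rationalist = 0.001            # assumed share of the population
    p_T_given_rationalist = 0.10     # assumed rate of T inside the community
    p_T = 0.01                       # assumed base rate of T overall
    p_rationalist_given_T = p_T_given_rationalist * p_rationalist / p_T
    println(p_rationalist_given_T)   # ≈ 0.01: a 10x enrichment, but still only about 1%

Swap in whatever numbers you like; the point is only that the two conditional probabilities can be wildly different.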
I think when you look at these types of groups that become cult or cult-like, they often appeal to a specific experience or need that people have. My guess is the message that many trans and autistic people take away is fulfilling for them. Many people in these communities share similar traumas and challenges that affect them deeply. It makes them vulnerable to manipulation and becoming true believers capable of more extreme behavior.
The more common pattern is a false prophet cult where the influential leader is a paternal figure bringing enlightenment to the flock. It just so happens that free labor and sex with pretty girls are key aspects of that journey.
It doesn't mean that "all X are Y" or "Y's are usually X".
There's another way around it. People that see themselves as "freethinkers" are also ultimately contrarians. Taking contrarianism as part of your identity makes people value unconventional ideas, but turn that around: It also means devaluing mainstream ideas. Since humanity is basically an optimization algorithm, being very contrarian means that, along with throwing away some bad assumptions, one also throws away a whole lot of very good defaults. So one might be right in a topic or two, but overall, a lot of bad takes are going to seep in and poison the intellectual well.
You don't have to adopt the ideas of every fringe or contrarian viewpoint you come across to be a freethinker; you simply have to be willing to consider and evaluate those views with the same level of rigor you give to mainstream views. Most people who do that will probably adopt a handful of fringe beliefs but, for the most part, retain a very large number of conventional beliefs too. Julia Galef is kind of an archetypal rationalist/free thinker and she has spoken about the merits of traditional ideas from within a rationalist framework.
I mean, isn't the problem that they actually aren't that smart or rational? They're just a group of people who've built their identity around believing themselves to be smart...
They're also not freethinkers. They're a community that demand huge adherence to their own norms.
Great summary, and you can add utilitarianism to the bucket of ideologies that are just too rigid to fully explain the world and too rational for human brains not to create a misguided cult around.
Ok but social clustering is how humans work. Culture translated to modern idiomatic language is “practice of a cult”. “Ure” translates to “practice of”, Ur being the first city, so historians say; clusters of shared culture are our lived experience. Forever now there have been a statistical few who get stuck in a while loop: “while alive, recite this honorific code; kill perceived threats to memorized honorific chants”.
We’ve observed ourselves do this for centuries. Are your descriptions all that insightful?
How do you solve isolation? Can you? Will thermodynamics allow it? Or are we just neglecting a different cohort?
Again due to memory or social systems are always brittle. Everyone chafes over social evolution of some kind, no matter how brave a face they project in platitudes, biology self selects. So long as the economy prefers low skilled rhetoricians holding assets, an inflexible workforce constrains our ability to flex. Why is there not an “office worker” culture issue? Plainly self selecting for IT to avoid holding the mirror up to itself.
Growing up in farmland before earning two STEM degrees, working on hardware and software, I totally get the outrage of people breaking their ass to grow food while some general-studies grad manages Google accounts and plays PS5 all night. Extreme addiction to a lived experience is the American way from top to bottom.
Grammatically correct analysis of someone else. But this all gets very 1984 feeling; trust posts online, ignore lived experience. It’s not hard to see your post as an algebraic problem; the issues of meatspace impact everyone regardless of the syntax sugar analysis we pad the explanation with. How do you solve for the endless churn of physics?
It's equally fascinating to see how effectively these issues are rapidly retconned out of the rationalist discourse. Many of these leaders and organizations who get outed were respected and frequently discussed prior to the revelations, but afterward they're discussed as an inconsequential sideshow.
> TLDR: isolation, very strong in-group defenses, logical "doctrine" that is formally valid and leaks in hard-to-notice ways, apocalyptic utility-scale, and being a very appealing environment for the kind of person who goes super nuts -> pretty much perfect conditions for a cult.
I still think cults are a rare outcome. More often, I've seen people become "rationalist" because it gives them tools to amplify their pre-existing beliefs (#4 in your list). They link up with other like-minded people in similar rationalist communities which further strengthens their belief that they are not only correct, but they are systematically more correct than anyone who disagrees with them.
> They have, in the past, responded to criticism with statements to the effect of "anyone who would criticize us for any reason is a bad person who is lying to cause us harm". That kind of framing can't help but get culty.
I have never seen this, and I've been active around this for almost two decades now.
> isolation
Also very much doesn't match my experience. Only about a quarter of my friends are even rationalists.
I disagree. It's common for any criticisms of rationalism or the rationalist community to be dismissed as having ulterior motives. Even the definition of rationalism is set up in a way that it is de facto good, and therefore anyone suggesting anything negative is either wrong or doesn't know what they're talking about.
Maybe so! They didn't kick me out. I chose to leave c. early 2021, because I didn't like what I saw (and events since then have, I feel, proven me very right to have been worried).
This is a very insightful comment. As someone who was 3rd-degree connected to that world during my time in the bay, this matches the general vibe of conversations and people I ran into at house parties and hangouts very very well.
It's amazing how powerful isolation followed by acceptance is at modifying human behavior.
I see two - a superiority complex, and a lack of such an "irrational" thing as empathy. Basically they use crude, logical-looking constructions to excuse their own narcissism and related indulgences.
>The problem with rationalists/EA as a group has never been the rationality, but the people practicing it and the cultural norms they endorse as a community
It's precisely that kind of person, though, who would ever be so deluded and so lacking in self-awareness as to start a group about rationality - and declare themselves its arbiters.