I think this is probably the source of the whole debacle right here... Sam is pretty self-righteous and self-important, and he seems to lack some subtle piece of social awareness, which I imagine turns a lot of people off. That delusional optimism is probably the key to his financial success, too, in terms of a propensity for taking risk.
Leadership at companies everywhere acts just like this without it resulting in quite the same level of drama seen in this case. I'm not sure I buy the correlation.
Agreed. I think one of the biggest questions on many AI-safety people's minds now is whether Sam's optimism extends to techno-optimism. In particular, people on Twitter are speculating about whether Sam is, at heart, an e/acc, a newish term short for Effective Accelerationism. It started as a semi-humorous dig at EA (Effective Altruism) but has picked up steam as a major philosophy in its own right.
The e/acc movement has adherents among OpenAI team members and backers, for example:
At a very high level, e/accs are techno-utopians who believe that the benefits of accelerating technological progress outweigh the risks, and that there is in fact a moral obligation to accelerate tech as quickly as possible in order to maximize the amount of sentience in the universe.
A lot of people, myself included, are concerned that many e/accs (including the movement's founder, the Twitter account BasedBeffJezos) have indicated that they would be willing to accelerate humanity's extinction if it results in the creation of a sentient AI. Discussed here:
''Really important to note that a lot of e/acc people consider it to be basically unimportant or even desirable if AI causes human extinction, that faction of them does not value human life. If you hear "next stage of intelligence", "bio bootloader", "machine god" said in an approving rather than horrified manner, that's probably what they believe. Some of them have even gone straight from "Yes, AGI is gonna happen and it's good that humans will be succeeded by a superior lifeform, because humans are bad" to "No, AGI can't happen, there's no need to engage in any sort of safety restrictions on AI whatsoever, everyone should have an AGI", apparently in an attempt to moderate their public views without changing the substance of what they're arguing for.''
Sentient-AI-driven extinction is absolute fiction at the current state of the art. We don't know what sentience is, and we are unable to approach facets of our cognition, such as how qualia emerge, with any level of precision.
"What if we write a computer virus that deletes all life" is a good question, because you can approach it from an engineering-feasibility perspective.
"What if someone creates a sentient AI" is not a reasonable fear, at the current state of the art. It's like fearing Jacquard looms in the 19th century because someone could use them for "something bad". Yes, computers eventually facilitated nuclear bombs. But also lots of good stuff.
I'm not saying we couldn't create a "sentient program" one day. But currently we can't quantify what sentience is. I don't think there is any engineering basis to conclude that advanced chatbots called LLMs, however amazing they are, will reach godhood anytime soon.
Sometimes it’s helpful to take a break from Twitter.
I know the hype algorithms have tech folks convinced they're locked in a self-important battle over the entirety of human destiny.
My guess is that we're going to look back on this in 10 years and it's all going to be super cringe.
I hate to throw cold water on the party, but we're still talking about a better autocomplete here. And the slippery slope is called a logical fallacy for a reason.
Re: user upwardbound and your now-deleted comment on extinction:
Not all e/accs accept extinction. Extinction may well happen at the hands of humans, whether through a sudden ice age, a boiling atmosphere, a nuclear holocaust, etc. What we believe is that halting AI will do more harm than good. Every day you roll the dice, and with AGI the upsides are worth the roll. Many of us, including Marc Andreessen, are e/acc and are normal people. Let's not paint us as nutcases, please.