I need to push back on all this resistance to regulation being voiced here among tech people.

No, Yann. I am FULLY in support of drastic measures to mitigate and control AI research for the time being. I have no vested stake in any of the companies. I lived for a year in a building that hosted several AI events per week. I'm not ignorant.

This is the only territory where humanity is mounting a conversation about a REAL response that is appropriately cautious. This x-risk convo (again, appropriately CAUTIOUS) and our rapid response to Covid are the only things that make me hopeful that humanity is even capable of not obliterating itself.

And I'd say the same thing EVEN IF this AI x-risk thing could later be 100% proven to be UNFOUNDED.

Humanity has demonstrated precious little skill at pumping the brakes as a collective, and even a simple exercise of "can we do this?" is valuable. This is sorely needed, and I'm glad for it.




The issue is not whether caution is warranted; it's that caution is being cynically used by Sam Altman to increase the value of his company.

I don't trust a rich Silicon Valley guy with this any more than I trust anybody else. Why should he get to sit and decide what the rest of us can't do, while he gets richer? That is what the article is about.


Getting angry about one guy getting rich is silly in comparison to the serious threats presented by sudden AGI. Did all the other guys in Silicon Valley (or Wall Street, or Monaco, or Dubai) get rich in a justifiably fair way?

I agree that big companies/capitalists using their power to suppress the rest of us sucks, but that’s a systemic political battle that should be considered separately here.


It's a systemic political battle, and it's the only reason anyone's talking about banning AI. They're only Luddites when it benefits them, personally, in the here and now. Parroting their silly fearmongering tactics ("What if THE MATRIX, guys?!") is just giving them more power.

Luddism is just cowardice, specifically a fear of change. Want to know a realistic worst-case scenario for AI development? The same rich assholes who are rich today invest in developing AI tech to the point that the market value of human labor plummets below a livable level, and class mobility subsequently goes to zero. THAT's a doomsday scenario, and it's made ever more likely by stupid claims that only our Silicon Valley Immortal Philosopher-Kings are worthy of control over AI.


The most likely outcome of this exercise in pumping the brakes is the monopolization of high-end AI by a small number of very rich or powerful people. They will then be able to leverage it as an intelligence amplifier for themselves while renting out access to inferior versions of its capabilities to the rest of humanity and becoming even more extraordinarily wealthy.

If AI really is going to take off as much as some suspect it will, then you need to wrap your head around the sheer magnitude of the power grab we could be witnessing here.

The most plausible of the "foom" scenarios is one in which a tiny group of humans leverage superintelligence to become living gods. I consider "foom" very speculative, but this version is plausible because we already know humans will act like that while AI so far has no self-awareness or independence to speak of.

I don't think most of the actual scientists concerned about AI want that to happen, but that's what will happen in a regulatory regime. It is always okay for the right people to do whatever they want. Regulations will apply to the poors and the politically unconnected. AI will continue to advance. It will just be monopolized. If some kind of "foom" scenario is possible, it will still happen and will be able to start operating from a place of incredible privilege it inherits from the privileged people who created it.

There is zero chance of an actual "stop."


> The most plausible of the "foom" scenarios is one in which a tiny group of humans leverage superintelligence to become living gods. I consider "foom" very speculative, but this version is plausible because we already know humans will act like that while AI so far has no self-awareness or independence to speak of.

One of my hopes is that "superintelligence" turns out to be an impossible geekish fantasy. That seems plausible, because the most intelligent people are frequently not actually very successful.

But if it is possible, I think a full-scale nuclear war might be a preferable outcome to living with such "living gods."


I do wonder how long this bias towards the status quo will survive human history. This is the first big shakeup (what if we're not the smartest beings?), and I think the next is prolly gonna be long lives (what will the shape of my life be?). From there it's full-on sci-fi speculation, but I'd definitely put money on humans incrementally trading comfort and progress for… I guess existential stability.


I fully recognize the danger AI presents, and believe that it will most likely end up being terrible for humanity.

But thanks to my own internal analysis ability and the anonymity of the internet, I am also willing to speak candidly. And I think I speak for many people in the tech community, whether they realize it or not. So here we go:

My objective judgement of the situation is heavily adulterated by my incredible desire for a fully fledged hyper intelligent AI. I so badly want to see this realized that my brain's base level take on the situation is "Don't worry about the consequences, just think about how incredibly fucking cool it would be."

Outwardly I wouldn't say this, but it is my gut feeling/desire. And I think it's the same for many people, especially those who have pursued AI development as their life's work: how can you spend your life working to get to the garden of Eden, and then not eat the fruit? Even just a taste.


Exactly! You hit on something I have been thinking about for a long time, but phrased better than I could have. We need to start saying this out loud more and more.

There is a problem that lies above the development of AI or advanced technology. It is the zeitgeist that caused humanity to end up in this situation to begin with, questioning the effects of AI on the future. It's a product of humans surrendering to a never-ending primal urge at all costs. Advancing technology is just the means by which the urge is satiated.

I believe the only way we can survive this is if we can suppress that urge for our own self-preservation. But I don't think it's feasible at this stage. We may have to begin questioning the merit of parts of the human condition and society very soon.

Given the choice, I think a lot of people today would love to play God if only the technology were in their hands right this minute. Where does that urge arise from? It deserves to be put in the spotlight.


> how can you spend your life working to get to the garden of Eden, and then not eat the fruit? Even just a taste.

Because it's insect-brain level behavior to surrender in faith to some abstract achievement without understanding its viability or actual desirability.


They understand the desirability - they’ve been thinking about it for decades. AGI would be massive unexpected help in alleviating poverty, exploring the stars, advancing science, expanding access to quality medical & educational resources, etc etc etc. All of this on top of the more fundamental “IT LIVES!!” primal feeling of accomplishment and power, which I think isn’t something we can dismiss as irrational/“insect-level” offhand. What makes that feeling less virtuous than human yearning for exploration, expression, etc?


Maybe that is the crux of the problem. Human yearning is dual-use.


We need more people to say this out loud, so thank you…


Do you regret choosing AI development as a career?


I don't even work in tech much less AI development.

It's just something I have followed closely over the years and tinkered with because it is so fascinating. At its core, I know my passion is driven by a desire to witness/interact with a hyper-intelligence.


These doomers saying there is a risk of death are like people saying there is a risk of death if you are exposed to the outside world. Of course there is risk, but you still have to go out because of risk vs. reward. Same with AI: the risk is, yes, death, but the rewards are way bigger. Everybody with unnaturally low risk tolerance is showing up in these conversations.


Isn't that a fully-general argument for ignoring all risks or at least all risks necessary to try for any kind of reward?


On one hand, we have some pie-in-the-sky, pop-culture-inspired "risk".

On the other, we have the ugliness of human nature, demonstrated repeatedly countless times over millennia.

These are people with immense resources, hell-bent on acquiring even more riches and power. Like always in history, every single time. They're manipulating the discourse to distract from the real present danger (them) and shift the focus to some imaginary danger (oh no, the Terminator is coming).

The Terminators of most works of fiction were cooked up in govt / big corp labs, precisely by the likes of Altman and co. It's _always_ some billionaire villain or faceless org that the scrappy rebels are fighting against.

You want to protect the future of humanity from AI? Focus on the human incumbents; they're the bad actors in this scenario. They always have been.


You're biased towards post-cold-war fiction, IMO. I'm a cultural Luddite so have no sources, but I imagine there are lots of stories where the Russians/Chinese/Germans/Triad/Mafia/Illuminati/Masons/etc. did it, aka some creepy unseen "other" that is watching us and waiting to strike. I think a lot of people are taking this same view today, perhaps with more PC language.


I resist regulation that fails to account for all parties.

Regulation for AI must be engineered within the context of the mechanisms of AI, in the language of machines, publicly available as "code as law". It shouldn't necessarily even be human-readable, in the traditional sense. Our common language is insufficient to express or interpret the precision of any machine, and if we're already acknowledging its "intelligence", why should we write the laws for only ourselves?
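To make that concrete, here's a minimal sketch of what a "code as law" rule could look like: a regulation published as data plus an executable check. Everything in it (the compute threshold, the names, the rule itself) is hypothetical, invented purely for illustration:

    # Hypothetical machine-readable regulation: the rule is data plus an
    # executable check, published so both humans and machines can audit it.
    from dataclasses import dataclass

    @dataclass
    class TrainingRun:
        flops: float  # total compute spent on the run

    # Invented threshold, illustrative only; not a real legal limit.
    MAX_UNLICENSED_FLOPS = 1e25

    def complies(run: TrainingRun, licensed: bool) -> bool:
        """True if the run satisfies this toy rule: stay under the
        compute cap, or hold a license for larger runs."""
        return licensed or run.flops <= MAX_UNLICENSED_FLOPS

    print(complies(TrainingRun(flops=3e24), licensed=False))  # True

The point isn't this particular rule; it's that an executable rule can be tested, audited, and applied identically to every party, which prose statutes can't guarantee.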

Accepting that, AI is only our latest "invisible bomb that must be controlled by the right hands". Arguably, the greatest mistake of atomic research was that scientists ran to the government with their new discoveries, and the government misused them for war.

If AI is anticipated to be used as an armament, should it be available to bodies with a monopoly on force, or should we start with the framework to collectively dismantle any institution that automates "harm"?

War will be no more challenging to wage than a video game if AI is applied to it. All of this is small, very fake potatoes.



