The point is whether we should be complacent and dismiss concerns just because we don't know when it will happen.

Once someone builds it, stopping it might be very difficult. Here's why: https://youtu.be/4l7Is6vOAOA (less than 9 minutes and very clearly explained).

Dismissing even a 10% chance of catastrophic risk is not something we practice in any other domain. Would you dismiss concerns over an airplane that had a 10%, or even just a 1%, chance of crashing?




I guess, as a theoretical computer scientist myself, my first reaction is to roll my eyes. I get that these guys are trying to drum up interest with investors/funding agencies, but "My greatest fear is that my research is too successful" is still pretty grandiose. It sounds like those physicists who thought they had figured everything out in the '30s (or AI researchers before the last winter). I think it's pretty telling that Zuckerberg isn't worried despite having an empire built on AI - he's not asking anyone for money.


> physicists who thought they had figured everything out in the '30s

Certain 30s physicists were making "My greatest fear is that my research is too successful" warnings that turned out to be pretty on-point.


Zuckerberg just wants to use ANI to push ads at you more effectively. Any fear of AI will make it harder for him to do just that. This is all about money and earning more of it.


I agree that Elon and Mark are likely to say things that they think will win them support.

I think the best way to understand AI tech is to try it out yourself. The closer you are to understanding its internals, the more informed you'll be. Fundamentally, it's currently all statistics. It's up to you, then, to decide whether humans are just statistical machines, or if we have something more that may not be understood by science yet, such as free will.


The point is, nobody knows whether AGI is really achievable with our current approach. As I have said in another comment thread, natural language understanding hasn't been cracked, at all. I mean at all.

So the reality is, it is like when ancient humans knew birds could fly with wings, but people could only fly in their dreams. We now know people think intelligently with their brains, but no one knows how to make a computer work the same way. It is far too early to talk about the risk until we have the Wright brothers of AI to show us that it is even possible.


Unlike airplanes, AGI possesses agency and the ability to cause widespread harm and damage in today's computer-penetrated world. So if we wait until it is invented, there is no guarantee that there won't be dangers or even catastrophes. Wouldn't you at least agree that there is a 1% chance that AGI is possible and that it could harm us once invented?

Also, the goalposts for what counts as a challenging cognitive skill worthy of the name AI are moved almost every time we make progress, so people keep denying that we are getting closer to AGI. AlphaGo is the latest example.

Some AI researchers and CS people believe that Natural Language Understanding (NLU) is AI-complete, i.e. once we solve it we basically solve AI in the sense of AGI. I do not personally believe that: there are certain human cognitive skills that are not required for NLU. But I do think that solving NLU would bring us closer to solving AGI.

Let's say someone makes progress on NLU. What would be the minimum level sufficient to convince you that AGI is possible? Why minimum? Because we don't want AGI to be right at our door before we start preparing.

* Would getting 80% on the Winograd Schema Challenge be sufficient? (See the rough sketch below the link.)

Other suggestions are welcome, including from other commenters.

https://en.wikipedia.org/wiki/Winograd_Schema_Challenge
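To make that 80% figure concrete, here is a rough sketch of what such an evaluation could look like. The schema item (the classic trophy/suitcase example), the resolve() stub and the scoring loop are illustrative only; a real evaluation would run against the official schema collection.

    # Illustrative sketch of scoring a model on Winograd-style schemas.
    # resolve() is a hypothetical stand-in for whatever NLU model is being evaluated.
    schemas = [
        {
            "text": "The trophy doesn't fit in the suitcase because it is too big.",
            "pronoun": "it",
            "candidates": ["the trophy", "the suitcase"],
            "answer": "the trophy",
        },
        # ... more schema items would go here ...
    ]

    def resolve(text, pronoun, candidates):
        return candidates[0]  # placeholder: always guess the first candidate

    correct = sum(
        resolve(s["text"], s["pronoun"], s["candidates"]) == s["answer"]
        for s in schemas
    )
    print(f"accuracy: {correct / len(schemas):.0%}")  # is >= 80% here convincing?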


For NLU, one important piece of evidence that would convince me of the potential is a demonstration of reasoning. Unlike the current black-box models, I would love to see a model give, along with its answer, an explanation of what led it to that conclusion, be it a rule listed in a manual or a case that happened before.

This itself presents several big challenges: how do we represent the concept of reasoning in mathematical form? Is knowledge required or not? How should such knowledge be represented?

A truly intelligent machine should be able to take what a human takes, a piece of text, build its knowledge base from it, answer questions just like a human, and then give an explanation when asked for one.
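To illustrate, in the simplest possible form, what "an answer plus the rule that led to it" could look like, here is a toy sketch. The knowledge base, the naive keyword matching and the question are all made up for illustration and are nowhere near real NLU.

    # Toy question answering that attaches an explanation to each answer.
    # The "manual" rules and the retrieval heuristic are purely illustrative.
    knowledge_base = [
        {"rule": "Employees may carry over at most 5 vacation days.",
         "answer": "No, at most 5 days can be carried over."},
        {"rule": "Refunds are only issued within 30 days of purchase.",
         "answer": "Yes, a refund is possible within 30 days."},
    ]

    def answer_with_explanation(question):
        # Naive retrieval: pick the rule sharing the most words with the question.
        words = set(question.lower().split())
        best = max(knowledge_base,
                   key=lambda r: len(words & set(r["rule"].lower().split())))
        return best["answer"], "Because the manual says: " + best["rule"]

    answer, explanation = answer_with_explanation("Can I carry over 10 vacation days?")
    print(answer)       # the conclusion
    print(explanation)  # the rule that led to it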


> Wouldn't you even agree that there is a 1% chance that AGI is possible and that it could harm us once invented?

There's a small chance aliens could find us, too. Yet, we don't know any more about that than we do about AI.

The most imminent threat to humans is humans. In the nearer future, it seems more likely that someone will use machine learning technology in weaponry to cause harm than create AGI. Or, that humans will undergo a global "cultural revolution" like in China or Cambodia (or ISIS now), killing anyone with an education, under the guise that "machines are evil", yet with the true goal of obtaining power, as humans often do.

Ultimately, people are free to work on what they want. If some people want to work on AGI safety, that's great. Perhaps their goal is to make humanity a little more kind. That's certainly worthy.

For me, I'm more interested in seeing research focused on giving better tools to doctors for diagnoses. Radiology is a great place to apply image recognition systems. We just need some good labeled datasets.

Sometimes I get frustrated by the "AGI is an existential risk" quotes like Musk's because I feel that's a distraction from some really beneficial applications of machine learning that will help save lives. It could cause over-hype, leading to another AI winter, pushing back life-saving advances 15 years, as happened after the 80s.

Personally, I don't have a problem with the kinds of things some AGI safety researchers, like Stuart Russell, say about AGI. He isn't saying it will arrive in 2030-2040, like Musk is.

Musk has an incentive to capitalize on people's fear of AGI. It attracts investment in his non-profit, which can in turn help him continue building his in-house AI program at Tesla. Primarily I disagree with his conclusions about AI tech, but I'm also wary of his motivations. I think some people view him as altruistic and I just don't see that.

> What would be the minimum level sufficient to convince you that AGI is possible?

A system that can determine its own goals. We don't have anything close to that yet, and I can't see anything short of that being a good indicator that we are close to AGI.


On subgoals, there has been a lot of research, mostly in the symbolic AI tradition.

Even if an AI does not determine its own ultimate goal(s), it could still cause us a lot of harm by creating subgoals that do not align with human values, which no one can articulate in full yet.
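As a minimal illustration of what "creating subgoals" means in the symbolic planning sense, here is a toy backward-chaining sketch; the goal, the operators and their preconditions are invented for the example.

    # Toy subgoal generation by backward chaining over hand-written operators.
    # Everything here (goal names, preconditions) is invented for illustration.
    operators = {
        "have_coffee": ["have_beans", "have_hot_water"],  # preconditions become subgoals
        "have_beans": ["at_store"],
        "have_hot_water": [],
        "at_store": [],
    }

    def subgoals(goal):
        """Ordered list of subgoals that must hold before `goal` can be achieved."""
        plan = []
        for pre in operators.get(goal, []):
            plan.extend(subgoals(pre))
            plan.append(pre)
        return plan

    print(subgoals("have_coffee"))  # ['at_store', 'have_beans', 'have_hot_water']
    # Nothing in this machinery checks whether a generated subgoal is acceptable
    # to humans; that alignment has to be imposed separately.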

Overall, the controversy is worth it because the force for progress is strong enough that another AI winter is highly unlikely as long as useful applications continue to be developed. Without the warnings, research on Provably Safe AI might not have sufficient resources, especially talented people, working on it.


> the force for progress is strong enough that another AI winter is highly unlikely if useful applications continue to be developed.

Really disagree here. All it takes is a few well-positioned people, like those at Nnaisense, to create companies promising AGI and sucking up investment dollars. We all know greed is rampant in finance, and AI tech is a big buzzword in startups/innovation these days that can land big VC dollars. There is corruption/snake oil in tech too, and you'll find the greediest floating around the most hyped tech.

I think companies like Nnaisense are the wrong place to invest, and that there is much more practical work to be done to advance humanity.


I see a small contradiction in your statements; please help me understand:

> Sometimes I get frustrated by the "AGI is an existential risk" quotes like Musk's because I feel that's a distraction from some really beneficial applications of machine learning that will help save lives.

While I absolutely and adamantly disagree, and can't ever agree, that warnings should be dismissed as "distractions", I take from your statement here that you want us to move forward to true AI no matter what. But then...

> ...and that there is much more practical work to be done to advance humanity.

Right now AI looks like snake-oil selling; you realize that, right? I stand fully behind the statement that we have tons of practical and increasingly urgent problems to solve here on the material Earth.

Unless of course you're implying that striving to invent a true AI (= AGI) is practical work. IMO it isn't. It's better than medieval alchemy, but not by much.


> While I absolutely and adamantly disagree, and can't ever agree, that warnings should be dismissed as "distractions",

Warnings can be faked. Fear mongering can be lessened when people are informed on a subject.

> I take from your statement here that you want us to move forward to true AI no matter what.

Where the devil did you get that? I said no such thing. I think investments toward building AGI are largely a waste of money. If people want to spend their money on that, that's fine. I won't.

> Right now AI looks like snake-oil selling; you realize that, right?

AGI does, yes. And, for those who do not understand the difference between data science and AGI, then probably all "AI tech" seems like snake oil. But it isn't. It's already helping doctors detect cancer, among other major issues.

> Unless of course you're implying that striving to invent a true AI (= AGI) is practical work

I'm not. I have no idea where you got that from my comments.


Welp, my mistake. Might have been me getting triggered, since the term "AI" is practically meaningless nowadays.


When you know AGI is possible with existing techniques, it is already too late to discuss the risks. As soon as it is possible with existing techniques, it will be implemented by somebody, and the risks will be realized.


You already have Joshua Tenenbaum and Karl Friston. You're just ignoring them in favor of celebrities who talk about AI rather than mere cognition, and have billions in funding and years of professional self-marketing to throw around.


To quote Bret Victor (from memory): worrying about general AI when climate change is happening is like standing on the train tracks in the station when the train is rushing in, worrying about being hit by lightning.

Work is being done on climate change, but at the current rate, we're way off preventing catastrophic effects. I'd say that our current level of effective effort basically amounts to ignoring the problem.


Au contraire, climate change will not pose an existential risk within the next 30 years. The chance of that happening is pretty much nil and no one serious argues otherwise. Even in 100 years, there could be much suffering and dislocation, but it still most likely won't be an existential risk to all of humanity.

There is a non-negligible possibility that AGI will be invented in 30 years. Many AI experts agree on that. A number of experts also believe that its invention could be highly beneficial or catastrophic, depending on its form and our preparation.

With a cost-benefit analysis based on the best knowledge we have, weighted by probabilities, it is clear that AGI risks are substantial and worth at least as much investment as climate change. The current funding for AI Safety Research is not even 1/10th, perhaps less than 1/100th, of climate change funding. Inaction is also an action.

If you do not trust intelligent domain experts, and also intelligent non-experts with almost no conflicts of interest, like Bill Gates and Stephen Hawking, then please let us know which source(s) of knowledge we should rely on instead.

Note: I believe we should fund both. A certain but slow train wreck and an uncertain but even more catastrophic and possibly speedier train wreck are both worth preventing.


> Au contraire, climate change will not pose an existential risk within the next 30 years. The chance of that happening is pretty much nil and no one serious argues otherwise.

That's not true. Arctic methane release has the potential to become devastating within a small number of decades. From [1]: Shakhova et al. ... conclude that "release of up to 50 Gt of predicted amount of hydrate storage [is] highly possible for abrupt release at any time". That would increase the methane content of the planet's atmosphere by a factor of twelve. That would be catastrophic by anyone's measure.
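For a rough sense of where that "factor of twelve" comes from, here is a back-of-envelope check; the ~4.5 Gt figure for methane currently in the atmosphere is an assumption on my part, not something stated above.

    # Back-of-envelope check of the "factor of twelve" claim.
    # Assumed: roughly 4.5 Gt of methane currently in the atmosphere (~1.8 ppm).
    current_ch4_gt = 4.5      # assumed present-day atmospheric methane, gigatonnes
    abrupt_release_gt = 50.0  # figure quoted from Shakhova et al. above
    factor = (current_ch4_gt + abrupt_release_gt) / current_ch4_gt
    print(round(factor, 1))   # ~12.1, consistent with the quoted factor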

Also: In 2008 the United States Department of Energy National Laboratory system identified potential clathrate destabilization in the Arctic as one of the most serious scenarios for abrupt climate change.

[2] is a video from the Lima UN Climate Change Conference discussing this. The people talking about this are certainly not "no-one serious". See 34:23 for Ira Leifer, an Atmospheric Scientist at the University of California, saying that 4 degrees of warming means the Earth can probably sustain "a few thousand people, clustered at the poles".

I see 4 degrees of warming being bandied about a lot these days, but very little discussion of how catastrophic that would be. It also seems a lot more likely than AGI becoming a problem, to me at least. Sure, we should fund research into both but only one of them has me worried about the world my daughter will inherit - one is basically purely hypothetical right now.

    [1] https://en.wikipedia.org/wiki/Arctic_methane_emissions
    [2] https://www.youtube.com/watch?v=FPdc75epOEw


Climate change is not adversarial; the weather will not try to disrupt or evade your attempts to fix it.

Fighting a passive adversary is a much easier thing than fighting an active one (especially one smarter than you).


Actually, when it comes to climate change, humans are doing a pretty good job at being adversarial to our own best interest already. We don't need anything much more intelligent than us to avoid fixing the problem.



