
An electrical mishap could kill one, maybe a few people. It's still something you need to be concerned with, but I understand your point.

The reason your analogy doesn't work is that a transhuman AI could destroy the entire human race.




But the AI-kills-the-world hypothesis has no basis in reality whatsoever. It's fiction. We might as well be writing computer viruses to help protect against alien invasions (that's the plot of Independence Day). It's a completely baseless fear stemming from our complete ignorance of what AI might be like - just utter science fiction.


> a transhuman AI could destroy the entire human race

A perfectly ordinary human (wearing general's stripes) could also destroy the human race. Today. With 1950s technology, no less.

A biotech specialist with fairly ordinary training and a few tens of thousands of dollars could probably achieve the same end with an engineered plague, also with current technology.

One or the other of these scenarios may or may not take place before we exhaust the non-renewable resources to which our civilization is addicted and regress into permanent barbarism.

Give me "death by AI" any day of the week, over that.


Other risks exist, therefore what?


> Other risks exist, therefore what?

Therefore "it might be an existential risk!" is not a root password to my conscience.


There are many possible ways civilization could end. Plenty of natural disasters (think supervolcanoes or asteroid impacts) could also destroy civilization. I don't see the harm in thinking about preventing one of them.


> I don't see the harm in thinking about preventing one of them

There is indeed harm. Talented people are being diverted into masturbatory philosophizing rather than building the future.

My personal opinion is that human industrial civilization's goose is already cooked, and that a transhuman intelligence may or may not help us out of our mess. Human intelligence almost certainly won't.

The prevalence of the status quo bias - the assumption that continuing as we are, AI-less, is somehow "safe" - turns my stomach.


If they're not taking any money from the government, what do you care what other people study? Research is like buying lottery tickets, except you have no idea how big the payoff could be.

If you think 'our goose is cooked' and a transhuman intelligence could help us out, doesn't it make sense to support the development of a transhuman intelligence?


> what do you care what other people study?

I watched people with genuine potential (Eliezer Y., for instance) turn from groundbreaking AI work to writing "AI might kill us all!" screeds and recycled mathematics.

A decade ago I was half-certain that he would eventually invent an artificial general intelligence. Now I am equally certain that he never will. Philosophizing and screaming "Caution!" is simply too much fun - and too lucrative. Ever wonder why he doesn't have to slave away at a day job like the rest of us?

> Research is like buying lottery tickets, except you have no idea how big the payoff could be

The "Friendly AI" crowd is engaged in navel gazing, rather than research.

> doesn't it make sense to support the development of a transhuman intelligence?

Yes, and I do support it. Whereas the Friendly AI enthusiasts are retarding such development, not only by failing to volunteer their own efforts but also by frightening and discouraging others.


I partly agree with your points, except the last. I doubt any researcher is ever discouraged by the fearmongering you describe.


We know a lot more about how those work, which may well be a necessary condition for organizing prevention.



