
> I can only imagine Eliezer Yudkowsky and Rob Miles looking on this conversation with a depressed scream and a facepalm respectively.

Whenever Yudkowsky comes up on my Twitter feed, I'm left with the impression that I'm not going to have any more luck discussing AI with those in his orbit than I would discussing the rapture with a fundamentalist Christian. Take the following Tweet[1]. If a person believes it came from a deep thinker who should be taken very seriously rather than from an unhinged nutcase, our worldviews are probably too far apart to ever reach a common understanding:

> Fools often misrepresent me as saying that superintelligence can do anything because magic. To clearly show this false, here's a concrete list of stuff I expect superintelligence can or can't do:

> - FTL (faster than light) travel: DEFINITE NO

> - Find some hack for going >50 OOM past the amount of computation that naive calculations of available negentropy would suggest is possible within our local volume: PROBABLE NO

> - Validly prove in first-order arithmetic that 1 + 1 = 5: DEFINITE NO

> - Prove a contradiction from Zermelo-Frankel set theory: PROBABLE NO

> - Using current human technology, synthesize a normal virus (meaning it has to reproduce itself inside human cells and is built of conventional bio materials) that infects over 50% of the world population within a month: YES

> (note, this is not meant as an argument, this is meant as a concrete counterexample to people who claim 'lol doomers think AI can do anything just because its smart' showing that I rather have some particular model of what I roughly wildly guess to be a superintelligence's capability level)

> - Using current human technology, synthesize a normal virus that infects 90% of Earth within an hour: NO

> - Write a secure operating system on the first try, zero errors, no debugging phase, assuming away Meltdown-style hardware vulnerabilities in the chips: DEFINITE YES

> - Write a secure operating system for actual modern hardware, on the first pass: YES

> - Train an AI system with capability at least equivalent to GPT-4, from the same dataset GPT-4 used, starting from at most 50K of Python code, using 1000x less compute than was used to train GPT-4: YES

> - Starting from current human tech, bootstrap to nanotechnology in a week: YES

> - Starting from current human tech, bootstrap to nanotechnology in an hour: GOSH WOW IDK, I DON'T ACTUALLY KNOW HOW, BUT DON'T WANT TO CLAIM I CAN SEE ALL PATHWAYS, THIS ONE IS REALLY HARD FOR ME TO CALL, BRAIN LEGIT DOESN'T FEEL GOOD BETTING EITHER WAY, CALL IT 50:50??

> - Starting from current human tech and from the inside of a computer, bootstrap to nanotechnology in a minute: PROBABLE NO, EVEN IF A MINUTE IS LIKE 20 SUBJECTIVE YEARS TO THE SI

> - Bootstrap to nanotechnology via a clean called shot: all the molecular interactions go as predicted the first time, no error-correction rounds needed: PROBABLY YES but please note this is not any kind of necessary assumption because It could just build Its own fucking lab, get back the observations, and do a debugging round; and none of the processes there intrinsically need to run at the speed of humans taking hourly bathroom breaks, it can happen at the speed of protein chemistry and electronics. Please consider asking for 6 seconds how a superintelligence might possibly overcome such incredible obstacles of 'I think you need a positive nonzero number of observations', for example, by doing a few observations, and then further asking yourself if those observations absolutely have to be slow like a sloth

> - Bootstrap to nanotechnology by any means including a non-called shot where the SI designs more possible proteins than It needs to handle some of the less certain cases, and gets back some preliminary observations about how they interacted in a liquid medium, before it actually puts together the wetware lab on round 2: YES

(The Tweet goes on; you can read the rest of it at the link below, but that should give you the gist.)

[1] https://twitter.com/ESYudkowsky/status/1658616828741160960




I've already read that thread.

I don't have Twitter, and I agree his tweets have an aura of lunacy, which is a shame, as he's quite a lot better as a long-form writer. (Though I will assume his long-form writings about quantum mechanics are as bad as everyone else's unless a physicist vouches for them.)

But despite that, I don't understand why you chose that specific example. How is giving a list of what he thinks an AI probably can and can't do, in the context of trying to reduce risks because he thinks losing is the default, similar to a fundamentalist Christian who wants to immanentize the eschaton because the idea that the good guys might lose when God is on their side is genuinely beyond comprehension?



