Hacker News

I personally don't work on frontier AI because it's not safe.

Just because other people with poor judgement are building it does not make it safe.




> I personally don't work on frontier AI because it's not safe.

In what way?

Skynet style robot revolt?


It's bad for there to be anything near us that exceeds our (collective) cognitive capabilities unless the human-capability-exceeding thing cares about us, and no one has a good plan for arranging for an AI to care about us even a tiny bit. There are many plans, but most of them are hare-brained and none of them are good or even acceptable.

Also: no one knows with any reliability how to tell whether the next big training run will produce an AI that exceeds our cognitive capabilities, so the big training runs should stop now.


Revolts imply that they're unhappy.

IMO a much bigger risk is them being straight up given a lot of power because we think they "want" (or will at least do) what we want, when there's some tiny difference we don't notice until much too late. Even paperclip maximisers are nothing more than that.

You know, like basically all software bugs. Except expressed in literally incomprehensible matrix weights, whose behaviour we can only determine by running them, rather than source code we can inspect in advance and make predictions about.


I see your video. https://www.youtube.com/watch?v=K8SUBNPAJnE

I am unimpressed because you are using straw men. A lot of statements and no argument.

Have a nice day


Yes. Skynet is very dangerous and not safe. In Terminator, humanity is saved because Skynet is dumb, not because Skynet is not dangerous or because Skynet is safe.



