
When you have no clue, it makes sense to expand your confidence interval in both directions: the timeline could be a lot longer than we expect, but it could be shorter, too. You shouldn't just say "we have no clue, so it's probably far off and not worth worrying about." Especially since a lot of people made predictions about LLM capabilities and were surprised by how much better they turned out to be.



I agree in principle; it's just that the level of expressed worry doesn't seem to match reality. Currently we have no plausible path to 'scary AGI'. It would require some newfangled tech we haven't discovered yet.

As an example, consider the invention of motion pictures. People were totally bewildered that you could have moving things and people inside a picture. Scaremongers could have started claiming, "Pretty soon the moving things may come to life and take over the world! Before you know it, they'll run our factories from inside the movies and we'll be their slaves!" That's more or less what this 'scary AGI' hype sounds like to me right now.

Btw "That Mitchell and Webb Look" is a great show ;-)



