
> Nukes have existed for 70 years and have probably become much more accessible. Biotech could create thousands of humanity-ending viruses today. These are fears that we live with and will forever live with, but we can't live our lives only in fear.

Building nukes and bioweapons isn't as good a business model as AGI, though. The government was incentivised to at least take some precautions with nukes. Nukes can't be developed and launched by individual bad actors. AGI isn't all that comparable to nukes, for numerous reasons. Bioweapons, maybe, but I wouldn't support companies researching bioweapons without regulation.

It's not a choice between living in fear and going full steam ahead. Both are idiotic positions to take. The reasonable approach here would be to publicly fund alignment research while slowing and regulating AI capability research to ensure the best possible outcomes and minimise risk.

You're basically arguing in favour of a free market approach to developing what has the potential to be a dangerous technology. If you wouldn't allow the free market to regulate something as mundane as automobile safety, then why would you trust the free market to regulate AI safety?

Companies that wish to develop state-of-the-art AI models should be required to demonstrate they are taking reasonable steps to ensure safety. They should be required to disclose state-of-the-art research projects to the government. They should be required to publish alignment research so we can learn...




> Building nukes and bioweapons isn't as good a business model as AGI, though

I agree. It's quite possible that humanity-ending AI is also not a good business model, don't you agree?

I think the whole Apocalypse discussion is a premature distraction for the moment. A more important discussion is what kinds of AI will end up making money. We have already seen how the internet turned from an infinite frontier into a more modern version of TV, dominated by a few networks with addictive buttons. Unfortunately, we will see the same with AI, because such is the nature of money today, and capitalism is one thing that AI will not change. The applications of AI that make the most money will dominate, to the detriment of applications that only benefit small groups of people (such as the disabled).

> to publicly fund alignment research while

We don't really know if alignment research is what we need. Governments should fund AI research in general; otherwise it would be like the early attempts of the EU to regulate AI. In fact, any kind of funding of AI ethics at the moment is dubious, because the field is changing so fast. Stopping it for six months will not solve those ethical issues either; it will just delay their obsolescence by six months. That is stupid on the face of it.


For public funding of AI research to work, it would need to overwhelm private research AND not be exploited by bureaucrats.

Neither of these seems remotely realistic.



