
A lot of research about avoiding extinction by AI is about alignment. LLMs are pretty harmless in that they (currently) don't have any goals; they just produce text. But at some point we will succeed in turning them into "thinking" agents that try to achieve a goal, similar to a chess AI, but interacting with the real world instead. One of the big problems with that is that we don't have a good way to make sure the goals of the AI match what we want it to do. Even if the whole "human governance" political problem were solved, we still couldn't reliably control any AI. Solving that is a whole research field; building better ways to understand the inner workings of neural networks is definitely one avenue.
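
To make the "text generator vs. agent" gap concrete: the jump is mostly a thin loop wrapped around the model. Here is a minimal hypothetical sketch in Python; llm_complete() and execute() are placeholder stubs I made up, not any real API, and the whole thing is just an illustration of the shape of such a loop, not anyone's actual system.

    # Hypothetical sketch: turning a text-only model into a goal-seeking agent.
    # llm_complete() and execute() are placeholder stubs, not a real library.

    def llm_complete(prompt: str) -> str:
        return "noop"            # stand-in for a call to a language model

    def execute(action: str) -> str:
        return "GOAL_ACHIEVED"   # stand-in for acting on the real world

    def run_agent(goal: str, max_steps: int = 10) -> list:
        history = []
        for _ in range(max_steps):
            prompt = f"Goal: {goal}\nHistory: {history}\nNext action:"
            action = llm_complete(prompt)    # the model itself still only produces text
            observation = execute(action)    # this line is what connects it to the world
            history.append((action, observation))
            if "GOAL_ACHIEVED" in observation:
                break
        return history

    run_agent("book a flight")

The alignment question is whether the goal string (and whatever the model infers from it) actually matches what we want, and nothing in the loop itself checks that.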



Intelligence cannot be 'solved'. I would go further and say that an intelligence without the option of violence isn't an intelligence at all.

If you suddenly wanted to kill people, for example, you could probably kill a few before you were stopped. That is typically the limit of an individual's power. Now, if you were a corporation with money, then depending on the strategy you used you could likely kill anywhere from hundreds to hundreds of thousands. Kick it up to government level and, well, the term "just a statistic" exists for a reason.

We tend to have laws around these behaviors, but they are typically punitive. The law recognizes that humans, and human systems, will unalign themselves from "moral" behavior (whatever that may be considered at the time). When the lawgiver itself becomes unaligned, well, things tend to get bad. Human alignment typically consists of benefits (I give you nice things/money/power) or violence.


I see. Thanks for the reply. But I wonder if that’s not a bit too optimistic and not concrete enough. Alignment won’t solve the world’s woes, just like “enlightenment” (a word which sounds a lot like alignment and which is similarly undefinable) does not magically rectify the realities of the world. Why should bad actors care about alignment?

Another example is climate change. We have a lot of good ideas which, combined, would stop us from killing millions of people across the world. We have the research - is more “research” really the key?



