> Who prompted the future LLM, and gave it access to a root shell and an A100 GPU, and allowed it to copy over some python script that runs in a loop and allowed it to download 2 terabytes of corpus and trained a new version of itself for weeks if not months to improve itself, just to carry out some strange machiavellian task of screwing around with humans?
> The human being did.
I generally agree with you and think the doomerist fears are overblown, but there's a capability argument here: if AI can augment humans' ability to do Bad Things to new levels (not proven), and if such a tool becomes widely available to individuals, then we likely get "Unabomber, but with an AI helping him maximise his harm capabilities".
> it's a better sell for the real tasks in the real world that are producing real harms right now.
Strongly agree.