If a robot misbehaves, it'll either be blocked or get reported to the network's abuse desk, and the bot will be taken down. Whether the site could have some kind of technical countermeasure is beside the point.
Precisely - the solution here is for the server to block the robot, assuming it can differentiate it from other traffic. That's the mechanism that should be used. If you don't want to be archived, block the IP.
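A minimal sketch of what "block the IP" means server-side, assuming the server can key off the client IP or the crawler's user-agent string; the addresses and agent names here are hypothetical placeholders, not real crawlers:

```python
BLOCKED_IPS = {"203.0.113.9"}     # hypothetical misbehaving crawler's address
BLOCKED_AGENTS = ("BadBot",)      # hypothetical user-agent substring to reject

def should_block(client_ip: str, user_agent: str) -> bool:
    """Enforce the restriction server-side instead of relying on robots.txt."""
    if client_ip in BLOCKED_IPS:
        return True
    # Fall back to matching the self-reported user-agent, which a
    # well-behaved bot sends but a hostile one can spoof.
    return any(marker in user_agent for marker in BLOCKED_AGENTS)

print(should_block("203.0.113.9", "Mozilla/5.0"))   # blocked by IP
print(should_block("198.51.100.7", "BadBot/2.1"))   # blocked by agent
print(should_block("198.51.100.7", "Mozilla/5.0"))  # allowed
```

In practice this check would live in the web server or firewall (e.g. an access rule) rather than application code, but the logic is the same: the decision is made and enforced on the server, not requested of the client.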
Nope - if a website wants such a restriction, it has to enforce it itself. Robots.txt is just a request; on its own it's worthless.
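For context on why it's only a request: robots.txt is nothing but a plain-text file of directives that crawlers may choose to honor. A hypothetical example:

```
User-agent: *
Disallow: /archive/
```

Nothing in the protocol enforces this; a compliant crawler reads it and skips `/archive/`, while a hostile one simply ignores the file and fetches whatever it likes.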