I'd say it's good if people with bad intentions blow themselves up by following questionable instructions, and also good to keep them paranoid about that possibility.
I only meant to challenge the suggestion that no meaningful harm comes from this kind of misinformation. A bomb maker who injures themselves could also injure other people. This probably doesn't mean we should "blame" the AI, but it's still important to understand what harms may arise when we think about how to manage new technologies, even if the answer is "do nothing".