The paperclip maximizer generally refers to an AGI whose value system is not aligned with human values: an AGI smart enough to achieve its goal (making paperclips) so efficiently that it becomes a threat to humans through sheer resource consumption.
So it's not a good example of dumb, tiger-like AIs occasionally becoming a threat to humans, who on average can still outcompete a tiger with ease.
What I am thinking about is how systems set up to protect us can end up hurting us: in order to be helpful, they gain so much power over us that their continued improvement becomes fundamental to our survival, beyond the scope of our ability to understand it.
The "system" does not need to have intent or be even closely aware to be dangerous to humans.
And so the article sets up a false premise if the quoted conclusion is to be the basis for judging whether it's going to be a threat to us or not.
https://wiki.lesswrong.com/wiki/Paperclip_maximizer