We can't make anything the main goal of some human, because for us (just like for any other animal) the goal system is genetically hardwired into a bunch of complex feedback loops (hunger, sexual drive, pain, social-status feedback, etc.), and we can't turn off any of them, much less all of them, without fundamentally altering something that formed very early in the evolution of multicellular life.
Any AGI system will also have some set of goals, just like we do. For us the goal is, vaguely speaking, to "feel good", along with everything that directly affects that, including feelings caused by anticipation of future events, etc. But a set of goals that didn't have to survive millions of years of natural selection is pretty much arbitrary, and it's orthogonal to how powerful the intelligence is.
I mean, if it doesn't have goals, then it won't do anything; it's not an agent. You could have very smart narrow systems that aren't "agentive" and just, e.g., provide answers to very complex questions. But whenever we talk about artificial general intelligence, we are talking about a system that has a feedback loop with the external world: it does stuff, observes the results, learns from them (i.e., it is self-modifying in some sense), and decides on further actions, which means having some goals.
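To make that feedback loop concrete, here's a minimal toy sketch of what "act, observe, learn, repeat" means. Everything in it (the `Counter` world, the `Agent` class, the target-number "goal") is a hypothetical illustration, not any particular system's design:

```python
# Toy illustration of the agent feedback loop described above: the agent's
# "goal" is a target number, and the "external world" is a counter it can nudge.

class Counter:
    """A trivial stand-in for the external world: one number the agent can push."""
    def __init__(self):
        self.value = 0.0

    def observe(self):
        return self.value

    def act(self, delta):
        self.value += delta


class Agent:
    def __init__(self, goal, step=1.0):
        self.goal = goal    # without a goal there is nothing to decide towards
        self.step = step    # how aggressively to act; adjusted by "learning"

    def decide(self, observation):
        # Act in whichever direction reduces the distance to the goal.
        return self.step if observation < self.goal else -self.step

    def learn(self, before, after):
        # Self-modification in a minimal sense: if the action overshot, act more gently.
        if abs(after - self.goal) > abs(before - self.goal):
            self.step *= 0.5


world = Counter()
agent = Agent(goal=10.0)
for _ in range(30):                      # the feedback loop itself
    before = world.observe()
    world.act(agent.decide(before))      # do stuff
    after = world.observe()              # observe the results
    agent.learn(before, after)           # learn from them

print(world.observe())  # ends up near the goal value of 10.0
```

The point of the sketch is just that the loop is meaningless without the `goal` attribute: remove it and `decide` has no basis for picking one action over another.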
We don't have the luxury of designing humans from the ground up. Nor do we fully understand the structures underlying human motivation. Neither will be true (we expect) of AI, eventually.
It would be nice if there were some fundamental property of motivation that kept paperclip maximization from being an overriding goal of an intelligent entity, but we don't yet have particularly strong reasons to believe that this is the case.
If we can't make "paperclip maximization" the main goal of some human, why do we expect to be able to make it the main goal of some AGI?