It isn't vague if "human values" are well defined. You can just look at your technical dilemma and go through the list of human values. If your technical solution compromises any of those values, it is your moral obligation as a subscriber to the ACM Code of Ethics to revise your solution or otherwise mitigate the harm.
As far as I can tell, it's not defined in that document. If it were, what would that look like? What values would be on the list? How would you know whether a given action compromises one of those values?
A good code of ethics includes aspects that require more context and invite more thought. Maybe this particular rule is designed to make you ponder what you believe humanism is, and whether you are making decisions in accordance with it.
Socrates didn't provide laundry lists of rules -- but his method certainly helped many people think better and pursue better courses of action.
> A good code of ethics includes aspects that require more context and incite more thinking.
I agree that a good code of ethics could be that way. But I think that's less true if the code of ethics is coupled with a system of punishment. In a system where violations are punished, I think it's desirable for the rules to be as objective as possible. Otherwise, you risk arbitrary application and/or abuse of the rules. Further, well-meaning people should be able to read the rules, understand what is and is not permitted, and tailor their behavior accordingly. They should not have to worry that they will later be punished for breaking a rule they thought they were following.