Why should x-1 for x==0 be undefined for a signed type? The signed type can happily represent -1. Rather, x-1 is undefined for unsigned types because those have no representation of -1.
Yow, what the hell was I thinking when I wrote that? Let me try again.
Preliminary note: I'm looking at a draft version of the C9X standard, because that's what I have to hand.
When x is of an unsigned type, it's OK. (In particular, arithmetic on unsigned types is defined to work modulo (max value + 1), so x-1 when x==0 just comes out as the maximum value of the type.)
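For what it's worth, here's a minimal illustration of the unsigned case (nothing in it beyond what the standard guarantees; the variable name is just mine):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int x = 0;
        /* Unsigned arithmetic is defined modulo (UINT_MAX + 1), so
           x - 1 when x == 0 is guaranteed to be UINT_MAX on every
           conforming implementation. */
        printf("%u\n", x - 1);      /* prints the same thing as... */
        printf("%u\n", UINT_MAX);   /* ...this */
        return 0;
    }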
When x is of a signed type, the bit-patterns corresponding to negative numbers are left somewhat up for grabs: the representation has to be sign+magnitude, 1's-complement, or 2's-complement, but the standard doesn't say which. (I'm pretty sure even that much wasn't true in earlier versions of the standard.)
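For concreteness, here is -5 in an 8-bit signed type under each of the three permitted schemes (an illustration only; which one you get is up to the implementation):

    plain 5:              0000 0101
    sign+magnitude -5:    1000 0101
    1's-complement -5:    1111 1010
    2's-complement -5:    1111 1011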
However, the result of applying a bitwise logical operator to a negative value is defined to be what you get by applying that operation to the bits of its representation. Since those bits aren't fully pinned down, which answer you get is unspecified rather than undefined (not quite the same thing); this isn't one of those cases where the standard permits implementations to cause the Moon to explode or pornographic email to be sent to your employer.
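A small sketch of how the answer can legitimately vary from one representation to another (the comments list what each scheme would give; on any machine you're likely to meet, the 2's-complement answer is the one you'll actually see):

    #include <stdio.h>

    int main(void)
    {
        int x = -1;
        /* What x & 1 is depends on how -1 is represented:
             2's-complement:  ...1111 1111 & 1  ->  1
             sign+magnitude:  ...0000 0001 (sign bit set) & 1  ->  1
             1's-complement:  ...1111 1110 & 1  ->  0
           Each of those is a legitimate answer; none of them is
           undefined behaviour. */
        printf("%d\n", x & 1);
        return 0;
    }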
In particular, whatever value x&(x-1) has when x==0, it does have some value, and the whole expression (including the && (x!=0) bit) comes out false regardless.
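For reference, here's the whole thing wrapped up as a function. I'm assuming the expression under discussion is the usual exactly-one-bit-set (power-of-two) test; the function name is mine:

    #include <stdio.h>

    /* True exactly when x has a single bit set. */
    static int is_power_of_two(int x)
    {
        /* When x == 0, x - 1 is -1 and x & (x - 1) is whatever it is;
           the (x != 0) conjunct makes the whole thing false anyway. */
        return (x & (x - 1)) == 0 && x != 0;
    }

    int main(void)
    {
        printf("%d %d %d %d\n",
               is_power_of_two(0),     /* 0 */
               is_power_of_two(1),     /* 1 */
               is_power_of_two(6),     /* 0 */
               is_power_of_two(64));   /* 1 */
        return 0;
    }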
So, I take back my earlier statement: according to my reading of (a draft of) the (latest) C standard, my version is in fact guaranteed to do the right thing in all implementations, even when x is of signed type.
I repeat that I haven't looked at earlier versions of the standard; I think they were less specific about how signed integer types can be represented, and I wouldn't be surprised if bitwise ops on negative numbers (at least) had entirely undefined behaviour then.