I'm not so sure about that. If you need to squeeze out a little more performance, code that is "technically undefined" can be more portable than dropping to ASM.
I think LLVM should emit a warning on code with undefined semantics and generate do-what-I-mean (DWIM) instructions instead of UD2s.
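To make that concrete, here's a minimal sketch (names are mine, not from any real codebase) of the kind of construct I mean. Because the dereference happens before the null check, the compiler is entitled to assume the pointer is non-null and delete the check; depending on target and flags (e.g. clang's `-fsanitize=undefined -fsanitize-trap=undefined`), the undefined path can end up as a trap such as x86's `ud2` rather than anything you "meant":

```c
#include <stddef.h>

/* Hypothetical example: UB followed by a check the optimizer may remove. */
int first_element(const int *p)
{
    int v = *p;          /* undefined behavior if p == NULL */
    if (p == NULL)       /* optimizer may legally delete this branch */
        return -1;
    return v;
}
```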
It really depends on your motives. If you'll need to port to new platforms in the future, it's better to get a hard crash now, so you can find and avoid the undefined behavior early, rather than face a large bug backlog years down the line.
But you're correct that sometimes it can be expedient to exploit such technically undefined behavior. (I've committed this sin myself, most commonly in serializers/deserializers.)
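For the serializer case, the sin usually looks like the first function below: reading a 32-bit field straight out of a byte buffer with a cast, which violates strict aliasing and may be misaligned. The memcpy version is fully defined and, in my experience, mainstream compilers at -O2 typically lower it to the same single load, so the shortcut often buys nothing. (Function names here are hypothetical, just a sketch.)

```c
#include <stdint.h>
#include <string.h>

/* Technically undefined: aliasing violation, possibly unaligned access. */
uint32_t read_u32_undefined(const unsigned char *buf)
{
    return *(const uint32_t *)buf;
}

/* Well-defined alternative; usually compiles to the same single load. */
uint32_t read_u32_portable(const unsigned char *buf)
{
    uint32_t v;
    memcpy(&v, buf, sizeof v);
    return v;
}
```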