Speaking about RISC-V: no, it is not. In the RISC-V "C" extension, every 16-bit instruction has a 32-bit counterpart. When the front end reads in a 32-bit instruction word, it can extract two 16-bit ops from it, expand each into its 32-bit counterpart, and feed them serially to the decoder. So there's a single decoder doing the work for both 16-bit and 32-bit ops (it basically doesn't distinguish them), and that's also what makes macro-op fusion possible and easy to implement, unlike ARM's Thumb, which has two separate decoders, with all the consequences that entails.
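To make the single-decoder point concrete, here's a minimal C sketch of the expansion step for one compressed opcode, C.ADDI; the function names are mine, but the bit layouts follow the RVC encoding. Real hardware does this with a small combinational table covering every compressed opcode:

    #include <stdint.h>
    #include <stdio.h>

    /* RISC-V encodes instruction length in the two lowest bits of a
       16-bit parcel: anything other than 0b11 means a compressed op. */
    static int is_compressed(uint16_t parcel) {
        return (parcel & 0x3) != 0x3;
    }

    /* Expand C.ADDI (quadrant 01, funct3 = 000) into its 32-bit
       counterpart ADDI rd, rd, imm. */
    static uint32_t expand_c_addi(uint16_t c) {
        uint32_t rd  = (c >> 7) & 0x1f;           /* rd doubles as rs1  */
        int32_t  imm = ((c >> 2) & 0x1f)          /* imm[4:0]           */
                     | ((c >> 12) & 0x1) << 5;    /* imm[5]             */
        if (imm & 0x20) imm -= 0x40;              /* sign-extend 6 bits */
        return ((uint32_t)(imm & 0xfff) << 20)    /* imm[11:0]          */
             | (rd << 15)                         /* rs1                */
             | (rd << 7)                          /* rd                 */
             | 0x13;                              /* OP-IMM, funct3=000 */
    }

    int main(void) {
        uint16_t c = 0x0505;                      /* c.addi a0, 1       */
        if (is_compressed(c))
            printf("0x%08x\n", expand_c_addi(c)); /* 0x00150513 = addi a0, a0, 1 */
        return 0;
    }

After expansion, the decoder sees nothing special about a formerly compressed op, which is why one decoder suffices.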
I'll surely read the ARMv4T specs when I have a bit more free time, thanks :). But ARM requires switching the machine mode to select the instruction set (you cannot mix Thumb with regular 32-bit instructions), which kind of hints that a decoder selection takes place. In RISC-V, although it's up to the microarchitecture designer, only one decoder is needed, and you can mix 16-bit and 32-bit instructions freely in the program flow. What's more, with macro-op fusion, two consecutive "C" instructions can be viewed as one "unnamed" internal instruction that does a lot more work (sketched after the link below). A bit more detail from the RISC-V authors on the subject, along with benchmarks:
https://riscv.org/wp-content/uploads/2016/07/Tue1130celio-fu...
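And here's a rough C sketch of the check a fusing front end might make after expansion, assuming a simplified uop struct of my own invention; the actual fusion pairs and internal op formats are microarchitecture-specific (the slides above list candidates such as lui+addi, both of which have compressed forms):

    #include <stdint.h>

    /* Simplified decoded op; field names are hypothetical. */
    typedef struct {
        enum { OP_LUI, OP_ADDI, OP_FUSED_LI, OP_OTHER } kind;
        unsigned rd, rs1;
        int32_t  imm;   /* for OP_LUI, stored already shifted left by 12 */
    } uop_t;

    /* LUI rd, hi followed by ADDI rd, rd, lo builds one 32-bit
       constant; a fusing front end can retire the pair as a single
       internal "load immediate" op instead of two. */
    static int try_fuse_li(const uop_t *a, const uop_t *b, uop_t *out) {
        if (a->kind == OP_LUI && b->kind == OP_ADDI &&
            b->rd == a->rd && b->rs1 == a->rd) {
            out->kind = OP_FUSED_LI;
            out->rd   = a->rd;
            out->rs1  = 0;
            out->imm  = a->imm + b->imm;  /* hi + sign-extended lo */
            return 1;
        }
        return 0;
    }

Because both halves of the pair were already expanded to ordinary 32-bit ops, the fusion logic doesn't care whether they started life as 16-bit or 32-bit instructions.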