Sure. You could also just multiply the byte count by 4, allocating at worst four times as much memory as optimal on pathological input (text made entirely of four-byte utf-8 sequences).
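For concreteness, here's a minimal Rust sketch of that sizing rule (the helper name is mine, purely illustrative): an n-byte utf-8 string holds at most n code points, and each utf-32 code unit is 4 bytes, so n * 4 bytes is always enough.

```rust
/// Hypothetical helper: an upper bound, in bytes, for a UTF-32 copy of `utf8`.
/// An n-byte UTF-8 string has at most n code points, each needing 4 bytes.
fn utf32_upper_bound_bytes(utf8: &str) -> usize {
    utf8.len() * 4
}

fn main() {
    let s = "héllo 🦀";
    let exact = s.chars().count() * 4; // what you'd actually need
    let bound = utf32_upper_bound_bytes(s);
    assert!(bound >= exact);
    println!("bound = {bound} bytes, exact = {exact} bytes");
}
```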
But wait, why are you converting from utf-8 to utf-32? To iterate over it? You don't need to count codepoints for that; you can just advance through the utf-8 linearly. To index into it? You might as well index by byte offset. Sure, that makes some possible indices invalid, but so does the finite length of your buffer.
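To make both points concrete, a quick Rust sketch (Rust stores strings as utf-8, so it's a convenient illustration, not the only way to do this):

```rust
fn main() {
    let s = "naïve 文字列";

    // Linear iteration: decode one code point at a time straight off the
    // UTF-8 bytes; no conversion, no up-front code point count.
    for (byte_offset, ch) in s.char_indices() {
        println!("{byte_offset:2}: {ch}");
    }

    // Byte-offset indexing: valid whenever the offset lands on a code point
    // boundary, just as a code point index is only valid when it's in range.
    assert!(s.is_char_boundary(2));
    println!("{}", &s[2..]); // "ïve 文字列"
}
```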
And what are you doing with the codepoints anyway? utf-32 turns out to be profoundly useless for pretty much every concrete use case, because its whole appeal, that its code units map 1:1 to code points, presupposes that there's something you'd want to do with the codepoints.
There isn't. Code points are practically orthogonal to all of: user-perceived characters, user-perceived graphical symbols, grapheme clusters, text width. They combine arbitrarily in ways that you cannot infer without knowing the semantics of every possible code point. There is no linguistically sensible text operation you can do at the codepoint level. They are completely useless to you.
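A short demonstration of that mismatch (the strings are arbitrary examples):

```rust
fn main() {
    let e_acute = "e\u{0301}";            // 'e' + combining acute accent
    let family = "👨\u{200D}👩\u{200D}👧"; // man + ZWJ + woman + ZWJ + girl

    // One user-perceived character / symbol each, but:
    assert_eq!(e_acute.chars().count(), 2);
    assert_eq!(family.chars().count(), 5);

    // Counting what the user actually sees requires the grapheme cluster
    // segmentation rules (e.g. the unicode-segmentation crate); the code
    // points alone can't tell you.
}
```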
The worst part is, there are a lot of operations you can do with codepoints that seem, on cursory inspection, to do something you want. But they're wrong, all of them, necessarily so, because you're working at a level of abstraction that simply doesn't carry the information needed to do much of anything correctly; worse, the cases that break are exactly the ones where none of your developers even knows what the correct result should be.
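Take one classic example of such an operation (Rust again, but the trap is language-independent): "reverse a string" by reversing its code points.

```rust
fn main() {
    let s = "abe\u{0301}"; // renders as "abé": 'e' followed by a combining acute
    let reversed: String = s.chars().rev().collect();

    // The combining accent now comes first, with no base character to attach
    // to, so this does not render as "éba".
    assert_eq!(reversed, "\u{0301}eba");
}
```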
Have you ever met a use case for the number of codepoints in a unicode string?
(One that didn't involve the moral equivalent of "because codepoints are what Twitter counts"?)