The `map` step converted the data to a smaller element type, and the whole chain was smart enough to reuse the original `Vec`'s allocation without reallocating. Since the original item size was [much bigger], a lot of "free" capacity was left over.
It is indeed a bit surprising, but it seems entirely reasonable for the platform to do, imo. It's efficient in both memory (one allocation instead of two) and CPU. If you want a specifically-sized container, you have to make that happen yourself; otherwise you're letting things be optimized in language-general ways, which will frequently be wrong in edge cases and may change at any time.
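A minimal sketch of the behavior being described, assuming a `u64` → `u8` conversion as the "bigger to smaller" case (the concrete types in the original weren't stated). On compilers that perform the in-place collect optimization, the resulting `Vec<u8>`'s capacity, measured in the smaller element size, can be much larger than its length; this is an optimization detail, not a guarantee.

```rust
// Hypothetical helper illustrating the shrinking map + collect chain.
fn shrink(big: Vec<u64>) -> Vec<u8> {
    // into_iter().map().collect() may reuse the source allocation in place.
    big.into_iter().map(|x| x as u8).collect()
}

fn main() {
    let big: Vec<u64> = vec![0; 100]; // 100 elements × 8 bytes each
    let small = shrink(big);
    // If the allocation was reused, capacity (in u8 units) can be far
    // larger than len — e.g. 800 vs 100 here — but that is not guaranteed.
    println!("len = {}, capacity = {}", small.len(), small.capacity());
}
```

If an exact-sized container matters, calling `shrink_to_fit()` on the result asks the allocator to release the excess capacity, which is the "make sure that happens yourself" step mentioned above.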