Late reply, but the most common ‘advice’ I have seen (usually in the form of post-mortems) is that knowing the total allocation size for a program is an engineering analysis: it rests on the memory available on a given system and on estimated worst-case bounds for memory usage. The system is then designed to work within the constraints that analysis produces.
Pre-allocating certainly requires a system/application designer to consider the most likely worst-case resource usage and then enforce that estimate throughout the design.
Further, for systems with modern memory sizes (i.e. 16 GB and up), the design treats memory as a stack, and that stack is allocated from system resources using something like a growable array or a slab-style allocator. That way, if the estimated worst-case usage is reached, the ‘static memory allocation’ can be relocated and enlarged, or extended with a new slab, as in the sketch below.
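To make that concrete, here is a minimal sketch of what a slab-style arena like that might look like in C. The names (`arena_t`, `arena_alloc`, `SLAB_SIZE`) and the 16 MiB per-slab figure are illustrative assumptions standing in for whatever worst-case estimate the design analysis produced, not from any particular system:

```c
/* Minimal slab-style arena sketch. SLAB_SIZE stands in for the
 * estimated worst-case usage from the design analysis; the names
 * here are illustrative, not from any particular library. */
#include <stdlib.h>
#include <stddef.h>
#include <stdalign.h>

#define SLAB_SIZE (16u * 1024u * 1024u)  /* assumed worst-case estimate */

typedef struct slab {
    struct slab  *next;   /* slabs chain together when the estimate is exceeded */
    size_t        used;   /* bump-pointer offset into this slab's memory */
    unsigned char mem[];  /* the pre-allocated block itself */
} slab_t;

typedef struct {
    slab_t *head;         /* most recently added slab */
} arena_t;

static slab_t *slab_new(void) {
    slab_t *s = malloc(sizeof(slab_t) + SLAB_SIZE);
    if (s) { s->next = NULL; s->used = 0; }
    return s;
}

/* Bump-allocate from the current slab; if the worst-case estimate
 * for that slab is reached, chain on a fresh slab instead of failing. */
void *arena_alloc(arena_t *a, size_t n) {
    /* round n up so returned pointers stay suitably aligned */
    n = (n + alignof(max_align_t) - 1) & ~(alignof(max_align_t) - 1);
    if (n > SLAB_SIZE)
        return NULL;                       /* request exceeds a whole slab */
    if (!a->head || a->head->used + n > SLAB_SIZE) {
        slab_t *s = slab_new();
        if (!s)
            return NULL;
        s->next = a->head;                 /* keep earlier slabs reachable */
        a->head = s;
    }
    void *p = a->head->mem + a->head->used;
    a->head->used += n;
    return p;
}
```

The key design point this illustrates is that the pre-allocation estimate is enforced per slab rather than as a hard program-wide ceiling: allocations within the estimate are just pointer bumps, and exceeding it degrades to one extra `malloc` rather than a failure.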
Your comments imply the sticking point may be the estimate of program memory usage, and that is a very fact-specific, situational analysis.