UPX only gets you smaller files on disk, and it comes at a cost: it tends to increase memory requirements, because the compressed binary on disk can no longer be mapped into memory as-is, unless it's first decompressed somewhere in the filesystem.
Worse, if you run multiple instances of the same binary, none of those in-memory copies can be shared.
Somewhat simplified: without UPX, 100 processes of a 100 MB binary need only 100 MB of RAM for the code, but with UPX they need 10 GB.
Edit: In reality, likely only a fraction of that 100 MB actually needs to be mapped into memory at any time, so without UPX the true memory consumption is even less than 100 MB.
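You can see this from /proc on Linux: for a normal binary the region your code runs from is file-backed (the maps line ends with the executable's path), while for a UPX'd binary the code ends up in an anonymous mapping. A rough sketch to check it yourself (Linux-only; the function-pointer cast is technically non-portable and error handling is minimal):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Address of our own code; casting a function pointer like this is
         * implementation-defined but works on the usual platforms. */
        uintptr_t target = (uintptr_t)&main;

        FILE *f = fopen("/proc/self/maps", "r");
        if (!f) { perror("/proc/self/maps"); return 1; }

        char line[512];
        while (fgets(line, sizeof line, f)) {
            unsigned long start, end;
            if (sscanf(line, "%lx-%lx", &start, &end) == 2 &&
                start <= target && target < end) {
                /* File-backed mapping: the line ends with the executable's path.
                 * Anonymous mapping (what an unpacker leaves you with): no path. */
                fputs(line, stdout);
                break;
            }
        }
        fclose(f);
        return 0;
    }

Build it once as-is and once packed with upx, run both, and compare the printed lines.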
All true, but I think a compressed iso9660fs can actually support dynamic paging - the pages are decompressed into memory, obviously, but can be demand paged without staging them to media.
Can you expand on this a bit? I use upx at work to ship binaries. Are you saying these binaries have different memory usage upx’d than they do otherwise?
Normally the operating system simply maps binaries (executables and loadable libraries: .dylib, .so, .dll, etc.) into memory. The cost is approximately the same whether you do this once or 1000 times. The code is executed from the mapped area as-is.
However, when a binary is compressed, this cannot work, because in the file the binary is represented as compressed data. The only way to work around this is to allocate some memory, decompress the binary there, mark the region as executable, and run it from there. This results in a non-shareable copy of the code for each running instance.
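In mmap terms, it's roughly this (a sketch, Linux-flavoured, error handling stripped; the decompress step is just a placeholder for what the packer's stub does):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t len = 4096;

        /* Normal binary: the loader maps code straight from the file.
         * The pages live in the page cache, so every process running the
         * same binary shares one physical copy. */
        int fd = open("/bin/ls", O_RDONLY);   /* any binary works as a stand-in */
        void *shared_text = mmap(NULL, len, PROT_READ | PROT_EXEC,
                                 MAP_PRIVATE, fd, 0);

        /* Packed binary: there is nothing on disk to map the code from, so
         * the stub allocates anonymous memory, decompresses into it, then
         * flips it executable. Those are private pages, duplicated per
         * process. (Hardened systems may object to the W->X flip.) */
        void *private_code = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        /* ... decompress(packed_image, private_code) ... placeholder ... */
        mprotect(private_code, len, PROT_READ | PROT_EXEC);

        printf("file-backed: %p  anonymous: %p\n", shared_text, private_code);

        munmap(shared_text, len);
        munmap(private_code, len);
        close(fd);
        return 0;
    }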
Also impacts startup time. Really it's only appropriate for situations like games where you're very confident there will be just one instance of it, and it'll be long-running.
And even then, it's of dubious value when game install footprints are overwhelmingly dominated by assets rather than executable code.
I'm curious, was the practice of using upx there before you got there? We generally A/B test changes like this pretty thoroughly by running load tests against our traffic and looking at things like CPU and Memory pressure in our deploys.