As for behavior, Mike acted in bad faith: before he saw the approach, he had agreed that the challenger could use multiple files. But once the challenger posted them, he downloaded only a single file to verify its functionality and never touched the others. Choosing to ignore the other files after agreeing to them is exactly where the bad faith lies.
By the way, in a theoretical sense Mike lost the moment he agreed to allow multiple files and count only their sizes: [] is not the same as [[], [], []]. The latter contains nothing but empty sets, yet there are three of them, and that count is itself information. You can theoretically encode a file into nothing but a bunch of 0-byte files, without even using the order of the files or their names. He should never have agreed to count only their sizes.
For the theoretical encoding into 0-sized files, you can simply interpret the input file as a binary number, and then create that many empty files.
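A minimal sketch in Python of that unary scheme (the function names are mine; this assumes only the count of empty files is measured, and that the original length is known in advance so leading zero bytes survive the round trip):

    import os

    def compress_to_empty_files(data: bytes, out_dir: str) -> None:
        # Interpret the whole input as one big binary number N...
        n = int.from_bytes(data, "big")
        os.makedirs(out_dir, exist_ok=True)
        for i in range(n):
            # ...and create N files of 0 bytes each. Only the COUNT of
            # files carries information; names and order are irrelevant.
            open(os.path.join(out_dir, str(i)), "wb").close()

    def decompress_from_empty_files(out_dir: str, length: int) -> bytes:
        # Counting the files recovers N. `length` is the original size in
        # bytes, assumed to be fixed in advance (e.g. two-byte inputs).
        n = len(os.listdir(out_dir))
        return n.to_bytes(length, "big")

For example, compress_to_empty_files(b"\x00\x2a", "out") creates 42 empty files, and decompress_from_empty_files("out", 2) counts them and returns b"\x00\x2a".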
This is not a practical solution, of course: you can only compress about two bytes down to 0-byte files this way, since a two-byte input already needs up to 2^16-1 = 65,535 empty files. For three bytes it's up to 16,777,215 files.
If you wanted to store 9 bytes in unary as the number of empty files, you would need up to 2^72-1 ≈ 4.7 sextillion (a million quadrillion) files. Obviously that is not actually possible. And even 9 bytes of savings is hardly enough to pay for the decompression program that counts the files and turns them back into binary. (Unless you can somehow get Mike to agree to putting it in the invocation - the decompression program itself doesn't need to store any information, so in theory it could be 0, 1 or 2 bytes.) But theoretically you don't need anything beyond what Mike foolishly agreed to: allowing multiple output files and counting only their sizes.
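The blow-up is easy to tabulate - a throwaway loop, nothing assumed beyond the arithmetic above:

    # Worst-case number of empty files needed to encode n input bytes in unary.
    for n in (1, 2, 3, 9):
        print(f"{n} byte(s): up to {2 ** (8 * n) - 1:,} empty files")
    # 1 byte(s): up to 255 empty files
    # 2 byte(s): up to 65,535 empty files
    # 3 byte(s): up to 16,777,215 empty files
    # 9 byte(s): up to 4,722,366,482,869,645,213,695 empty files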
2.
There is also another theoretical way to make money off of Mike, but it is not practical. (It doesn't actually work.) If we were not limited to bytes but could count bits, you could shave 5 bits off of every input file by pairing it with a decompressor that always prepends the bits 00000. (That decompressor maps each input to exactly one output - there is only one pigeonhole - so it stores no information and in theory has no minimum size.)
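A sketch of the whole scheme, representing files as bit strings since real file sizes don't come in bits (hypothetical, of course):

    def compress(bits: str) -> str:
        # Always drop the first five bits, so the "compressed" file is
        # 5 bits smaller than the input.
        return bits[5:]

    def decompress(bits: str) -> str:
        # The entire algorithm: blindly prepend 00000. It encodes no
        # information, which is why in theory it has no minimum size.
        return "00000" + bits

The round trip decompress(compress(b)) == b holds exactly when b started with 00000.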
If Mike is using a random source for the files, this decompresses correctly in 1/32 of cases, since a random file starts with 00000 with probability 2^-5. But Mike is giving 49:1 odds (risk $100, get $5000), which is better than the 31:1 break-even point. So you could simply repeat the game with Mike thousands of times, always using the same decompression algorithm, until you have all of his money. This works better if Mike is a computer, of course.
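Under those assumptions the expected value per game is comfortably positive (dollar figures from the bet as stated):

    p_win = 1 / 32                    # a random file starts with 00000
    ev = p_win * (5000 - 100) - (1 - p_win) * 100
    print(f"Expected profit per game: ${ev:.2f}")   # $56.25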
And it doesn't work at all on any actual system, because 5 bits is less than a byte: file sizes are measured in whole bytes, so the shaved bits would never register as savings, even if you could encode the decompression algorithm into a 0-byte (or up-to-3-bit) invocation.