I understand that it likely "won't matter". My point was to ask if it was worth talking about outliers to the Never Rewrite law.
E.g., when talking about refactoring over rewriting, it's assumed that a large portion of the features already works. There should be some fraction of broken features at which rewriting becomes worth it over refactoring. Or perhaps a size below which the system is small enough to easily rewrite.
(1) you fully understand (and you'd better be right about that)
(2) you already have total control over
(3) is small enough for (1) and (2) to be possible
(this is where I think a lot of people over-estimate their capabilities)
(4) where you have the ability to absorb a catastrophic mistake
(which is usually above the pay-grade of the programmers)
and finally
(5) where you have a 'plan B' in case the rewrite, against all odds, fails anyway
None of these are absolutes; if there is no business riding on the result then you can of course do anything you want. The history of IT is littered with spectacular failures by teams that figured they could do much better by tossing out the old system and setting a date for the deployment of the shiny new one. Whatever you do, make sure that your work won't add to that pile.
The older, larger, more poorly documented, and worse tested the system is, the bigger the chance that it is not fully understood.
Hundreds of thousands to millions of lines of code are a lot more problematic; many moving parts and weird interplay are to be expected.