There are lots of other factors that can make a particular refactor more or less desirable: is the code actually that long, is it already complex or straightforward, was it well written in the first place, etc. Without seeing the code in question, people can jump to any conclusion, or justify any bias for or against a particular refactor, including attempts at DRY.
My experience has been that the worst code was also the most poorly tested, if tested at all. In many cases you can't really test the code without refactoring it, but you can't refactor it without risking regressions due to the lack of tests.
Breaking this cycle requires going back to the requirements, whether they're explicit or have to be painstakingly inferred from every valid use case the original code intended to support. That holds even if its defects meant it couldn't actually serve those use cases, so it can't even act as a reference implementation for them.
Once you've understood the intended behavior of the old code well enough, you have a test suite to run against any future code. This is usually the hard part [1], and it will feel that way because once you have it, finding the simplest code that passes all the tests is just programming. Importantly, even if a future maintainer disagrees with you about the best solution, they can at least rewrite it without worrying about regressions in any tested case.
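To make that concrete, such a characterization suite might look like the sketch below. The function and cases are made up for illustration; the point is that the expected values come from the inferred requirements, so the old buggy implementation doesn't even need to pass all of them:

```python
import unittest


def parse_price(text):
    # Stand-in for the rewritten implementation under test
    # (hypothetical; any real project would have its own).
    cleaned = text.strip().lstrip("$").replace(",", "")
    return round(float(cleaned), 2)


class TestParsePrice(unittest.TestCase):
    # Each test pins one inferred use case of the legacy code.
    def test_plain_number(self):
        self.assertEqual(parse_price("3.50"), 3.50)

    def test_dollar_sign_and_whitespace(self):
        self.assertEqual(parse_price(" $3.50 "), 3.50)

    def test_thousands_separator(self):
        self.assertEqual(parse_price("$1,234.56"), 1234.56)

    def test_garbage_raises(self):
        # The intent, not the old code's accidental behavior, decides
        # that nonsense input should fail loudly.
        with self.assertRaises(ValueError):
            parse_price("free")


if __name__ == "__main__":
    unittest.main()
```

Any future rewrite of parse_price just has to keep this suite green.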
Aside: performance regressions are harder to detect, but preparing standard test workloads is a necessary part of catching those too.
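A minimal version of such a workload check, using only the stdlib; the workload, the code under test, and the time budget are all placeholders you'd calibrate for your own project:

```python
import timeit


def build_workload():
    # Placeholder standard workload; in practice this would be a fixed,
    # representative input captured from real usage.
    return list(range(100_000))


def process(items):
    # Stand-in for the code whose performance you want to pin down.
    return sum(x * x for x in items)


def test_perf_budget():
    workload = build_workload()
    # Take the best of several runs; it's less noisy than one timing.
    runs = timeit.repeat(lambda: process(workload), number=1, repeat=5)
    budget_seconds = 1.0  # Calibrated threshold, not a magic number.
    assert min(runs) < budget_seconds, f"regressed: {min(runs):.3f}s"


test_perf_budget()
```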
After you ship the rewritten solution, you're going to get user reports that you broke some edge case nobody had ever explicitly considered but that someone had somehow come to rely on. Now you only need to add a test and the logic for that one edge case; you know that no other case was broken by the change, and that this case can never silently break again.
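The fix for such a report then stays small: one pinned regression test plus the minimal logic change. A hypothetical example (the function, report, and behavior are all invented for illustration):

```python
def normalize_id(raw):
    # Minimal logic change after the report: tolerate surrounding
    # whitespace, which some callers had come to depend on.
    return raw.strip().lower()


def test_trailing_newline_tolerated():
    # Pinned to the original user report, so this exact case
    # can never regress silently again.
    assert normalize_id("ABC-123\n") == "abc-123"


test_trailing_newline_tolerated()
```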
Now you have leverage over the complexity of the project instead of giving it leverage over you. Now you're free to refactor any which way you prefer, and can accurately judge the resulting code entirely on its own merits. You know you're comparing two correct solutions to the same problem; neither one is subtly hiding bugs or missing edge cases. Your code reviewers will also find it much easier to review the code, because they can trust that it works and focus the review on its other merits.
[1] You know better than I do whether your problem domain is an exception, e.g. if you work on LLVM.