One reason could be that this tends to require a lot of fiddling. If you take the Rome approach, you can (theoretically) have the community do the necessary fiddling collectively and have it work for everyone, provided they're alright with the standard it sets.
I know from experience over the last few days that even tools like standardjs and xo, despite being described as zero-config linting/fixing solutions, don't reliably work out of the box.
In addition to this, one reason today's JS tools are relatively slow is that they all need to parse the code. Babel parses it to compile it, then ESLint parses it again to lint it, then Webpack/Rollup/Parcel parses it again to bundle it. A single tool that reuses the same AST for all three tasks could be a lot faster.
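To make that concrete, here's a minimal sketch of the "parse once, reuse the AST" idea using @babel/parser (just one possible parser, not Rome's actual internals): a lint-like pass and a bundler-like pass both walk the same tree instead of each re-parsing the source.

```js
const { parse } = require('@babel/parser');

const source = `
import dep from './dep';
var answer = 42;
`;

// a single parse produces the AST that every pass shares
const ast = parse(source, { sourceType: 'module' });

// lint-like pass: flag `var` declarations
const lintWarnings = ast.program.body
  .filter((node) => node.type === 'VariableDeclaration' && node.kind === 'var')
  .map((node) => `line ${node.loc.start.line}: prefer let/const over var`);

// bundler-like pass: collect import specifiers from the same tree
const imports = ast.program.body
  .filter((node) => node.type === 'ImportDeclaration')
  .map((node) => node.source.value);

console.log(lintWarnings); // [ 'line 3: prefer let/const over var' ]
console.log(imports);      // [ './dep' ]
```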
Yes and no. ESLint doesn't need to run at build time, and I assume that if you integrate Babel and Webpack via Webpack "loaders", some amount of work is saved (just guessing, though). That said, yours is the only good argument I've seen for building a monolithic solution.
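For reference, this is the usual way Babel gets wired into Webpack; whether it saves re-parsing is exactly the open question, since the loader returns transformed source text that Webpack then parses again (see below):

```js
// webpack.config.js: run Babel as a loader on JS/JSX files
module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        use: 'babel-loader',
      },
    ],
  },
};
```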
It has to do it that way, since Babel could parse some syntax that Webpack doesn't recognise (like Flow or TypeScript annotations). Webpack loaders can even take non-JS input and produce JS output (e.g. "svgr" takes SVG images and produces React components), so I don't think Webpack can even assume the input is JS at that point.
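A loader is just a function from input text to JavaScript source, which is why the input can be anything. This is a far simpler sketch than what svgr actually does, but it has the same shape:

```js
// svg-loader.js: Webpack hands the loader the raw file contents;
// the input here is SVG markup, and only the *output* is JS
module.exports = function svgLoader(source) {
  return `export default ${JSON.stringify(source)};`;
};
```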
And yet when it comes time for Webpack to crawl import statements, it can do so across all the different kinds of files coming in through their own loaders. So I'm wondering whether the loaders present Webpack with some kind of structured "imports manifest", in which case it could skip its own parsing step.
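Purely hypothetical, but something like this is what I'd imagine an "imports manifest" looking like; as far as I know Webpack has no such interface today and instead re-parses the JS each loader emits:

```js
// hypothetical shape only; not a real Webpack API
const importsManifest = {
  file: 'src/App.js',
  imports: [
    { specifier: 'react', names: ['default'] },
    { specifier: './Icon.svg', names: ['default'] },
  ],
};
```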
This seems enormously (and needlessly?) ambitious. Why not just bundle up the standard tools in a ready-to-use package?