We can argue about what "well" means, I guess. On many problems the gap will be small, but you should never expect NB to outperform the better models. And on lots of problems, the difference will be substantial. NB is strictly worse than MaxEnt.
Why settle for "quite well"? Just use the better algorithm, always. It's important to understand when solutions have been superseded.
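For concreteness, here's a minimal sketch of the kind of head-to-head being argued about, assuming scikit-learn (where logistic regression is the usual MaxEnt implementation). The dataset and settings are illustrative choices, not from this thread:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Illustrative benchmark: 20 newsgroups text classification.
train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

models = [
    ("Naive Bayes", MultinomialNB()),
    ("MaxEnt (logistic regression)", LogisticRegression(max_iter=1000)),
]

for name, clf in models:
    # Same features for both; only the classifier differs.
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(train.data, train.target)
    print(f"{name}: test accuracy {pipe.score(test.data, test.target):.3f}")
```

Note that swapping classifiers really is a one-line change here, which is the other side's point about switching costs.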
I guess my point is that pragmatic progress in science is hardly ever as earth-shattering and all-changing as academic circles make it out to be (understandably so, in these "publish or perish" times).
"Old stuff" doesn't suddenly become rubbish just because "new stuff" comes out.
Even when the new stuff offers a better, more predictive/complete model of the data, there's typically a class of problems where the improvement is inconsequential (cf. gravity à la Newton vs. general relativity).
But yes, when you understand the implications of switching, and the cost is simply a different `pip install`, switch away.