Agreed. Fun fact: Wolfram Alpha understands mebibytes, which is useful if you want to quickly convert between networking specs (say, megabits) and "real" computer units.
Then again, maybe doing more of this simple math with one's own brain wouldn't hurt either. :)
Edit: And yes, I'm aware that you'll never actually get the speed printed on the network device's box once you convert it. But sometimes it's nice to have an upper limit you can compare to, at least.
Many, many wire protocols use 5 bits of bandwidth to send 4 bits of information (8b/10b-style encodings), for various reasons. So dividing the megabit figure by 10, rather than the naive 8, gives you a better estimate.
Of course, when gigabit became a thing, your practical throughput was more like 75 MB/s for a very long time, and being off by 25% in capacity planning is a pretty big error (one I've seen numerous engineers make, and a few make both mistakes, dividing by 8 and assuming line rate, which puts you off by 40%).
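To put numbers on that, here's a quick sketch of the three estimates (my own illustration, not from the comments above; 8b/10b-style framing is assumed as the 4-bits-in-5 overhead, and the 75 MB/s figure is the one cited above, not a measurement):

```python
# A minimal sketch of the three "gigabit" estimates being compared.
link_mbit = 1000                       # advertised line rate, Mbit/s

naive = link_mbit / 8                  # divide by 8: 125 MB/s, ignores encoding overhead
encoded = link_mbit / 10               # divide by 10: 100 MB/s theoretical upper limit
practical = 75                         # real-world throughput for a long time (cited above)

print(f"/8 estimate:  {naive:.0f} MB/s")
print(f"/10 estimate: {encoded:.0f} MB/s")
print(f"practical:    {practical} MB/s")

# Capacity-planning error depending on which estimate you trusted:
print(f"off by {(encoded - practical) / encoded:.0%}")  # 25%
print(f"off by {(naive - practical) / naive:.0%}")      # 40%
```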
After using many Unix tools that have this convention, I'm ok with 10M referring to 10 × 1024 × 1024 bytes (10 MiB), contrasted with 10 MB meaning 10,000,000 bytes.
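For the record, here's the gap between the two conventions (a hypothetical snippet of my own, just spelling out the arithmetic):

```python
# The two readings of "10 M": Unix-tool binary suffix vs. SI megabytes.
ten_mib = 10 * 1024 * 1024   # 10 MiB = 10,485,760 bytes
ten_mb = 10 * 1000 * 1000    # 10 MB  = 10,000,000 bytes

print(ten_mib - ten_mb)               # 485,760 bytes apart
print(f"{ten_mib / ten_mb - 1:.1%}")  # ~4.9% difference
```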
That amount of sloppiness in any other engineering discipline would just finish you off immediately.