That's never been a problem on any system I've worked on. If you assume a web application with logged-in users who occasionally update their data, the percentage of users who have performed a write within the last 10s is always going to be absolutely tiny.
The absolute rate is the problem. You can't shed that load to other machines, so the percentage of users or percentage of traffic doesn't matter. This is basically Little's Law applied to a pool where the server count is capped at 1: the primary alone absorbs every pinned session.
100 edits a second from distinct sessions means roughly 1,000 sessions pinned at any moment (arrival rate times the 10s window). If those sessions read anything in that interval, the read comes from the primary. The only question is the frequency and timing of reads after a write.
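A quick sanity check of that arithmetic with Little's Law (L = λW), where λ is the write rate and W is the read-after-write window during which a session stays pinned to the primary. The function name and numbers here are illustrative, not from any particular system:

```python
def pinned_sessions(write_rate_per_sec: float, pin_window_sec: float) -> float:
    """Little's Law: average number of sessions concurrently pinned
    to the primary = write arrival rate x pinning window (L = lambda * W)."""
    return write_rate_per_sec * pin_window_sec

# 100 writes/sec with a 10s read-after-write window:
print(pinned_sessions(100, 10))  # -> 1000.0

# For contrast, 100 writes/minute with the same 10s window:
print(pinned_sessions(100 / 60, 10))  # -> ~16.7
```

The contrast makes the point about absolute rate: the pinned-session count scales with the raw write rate, not with what fraction of total traffic those writes represent.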