I've noticed from pair programming that the person operating the mouse is far less able to read and interpret what's on screen, or to catch typos while typing, than an observer who simply watches what the other person is doing.
For example, when you enter a directory and look for a file to click on, the observer can literally locate and point to the file 5-10x faster than the person operating the mouse.
The observer seems to interpret the directory listing faster than the person who just double-clicked to enter it, because they don't have to context-switch out of muscle coordination and can move straight to interpreting the results.
It's probably because mouse manipulation relies on more recently evolved brain infrastructure, whereas observe-and-react sits much earlier in the evolutionary processing pipeline and is far more refined.
Since I have a vision impairment, I'm sure the effect is amplified considerably for me, but using the mouse is such a massive break in flow:
- First you have to lift one hand off the keyboard and put it down on the mouse. This may or may not mean taking your eyes off the screen.
- Then you need to find the mouse pointer on the screen.
- Then you need to aim for what is usually a relatively small target and move the pointer there.
- If you're right-clicking, the right-click menu usually presents more small targets you need to aim for.
- If you need to use the keyboard, you have to move your hand back from the mouse to the keyboard.
For finding the pointer, I developed an unconscious habit of slamming the mouse to the very top-left of the screen. It breaks down, though, on someone else's machine, where your brain isn't used to the pointer velocity, or where a multi-monitor setup means that slamming the mouse to the top-left actually puts the pointer on another monitor.
People look at me in awe when I'm using a two-pane file manager, but honestly, not having to take your hands off the keyboard or your eyes off the screen gives so much better flow. It's also why I like Blender's UI: one hand on the keyboard and one hand on the mouse most of the time.
I think this is because writing software is so much more than operating switches and controls. I really hate pair programming for this reason, but I love industrial-style controls and protocols involving multiple people.
Back in the '80s I worked on a financial system (a SWIFT interface) for an Italian bank. When it went operational, we observed two operators effectively doing "pair operating". We just thought it was weird Italian-style socialising: one had the keyboard while the other chattered away with a running commentary.
But they were surprisingly effective!
I accidentally learned, while teaching a course at a site with too many people for the available machines, that pair exercises were very effective: I got far more questions, and overall learning went way up. If a pair discussed something and couldn't find the answer, they had the confidence to ask. On their own, neither would probably have bothered; they'd just wait for me to go through things.