Thank you. I looked through it and it appears this is the closest, but I haven't dug into it much yet. It's in beta, but definitely in the right direction toward what I envisioned.
There are a few problems that make this idea nasty:
1. input focus / window selection - lots of code relies on the idea that the currently selected window is a singleton, not 1..n_cursors. Adding support for multiple cursors breaks this assumption, and code that looks something like active_window().do_something() would have to be tied to the selection of one particular cursor, breaking the cursors' independence. You'd ideally want a 1:1 match between keyboards and cursor input devices for this to be workable.
2. accelerated drawing - a very common (if not universal) display server trick is to separate mouse cursor rendering, the server's own content (like window decorations) and accelerated content (like hardware video decoding and color keying) into different output planes with varying drawing restrictions. They update asynchronously in order to reduce memory bandwidth requirements and hide latency/jitter. The number of such planes available is typically low (1, 2, 4) and dynamic. Adding another mouse cursor breaks the simple '1 output plane of cursor.w*cursor.h @ (last sample.xy)' model and forces either multiple planes to be consumed or the allocated plane to be the size of the primary plane's resolution and updated every frame, which can impact other connected monitors and so on.
3. absolute vs. relative mouse samples / warping - this varies more between windowing systems, but some applications deal with a mouse in terms of display-absolute, surface-absolute or device-relative samples. At the same time, different mouse devices (touchpad, "normal" mouse, tablets, ...) provide samples of different types (device-relative, device-absolute) and someone needs to do the translation. This matters in situations like FPS games.
4. nested cursors - it is not uncommon for applications to "hide" the global mouse cursor and then maintain their own inside the context of their windows. Games in particular like to do this, but also VMs and similar oddballs. Someone needs to do the translation, and it typically becomes dx,dy = last_mp - current_xy followed by "warp mouse to window midpoint" (see the sketch at the end of this comment).
5. sample optimizations - some systems like to provide a memory-mapped global mouse cursor position to keep event storms away from window event loops (think gaming mice or drawing tablets with 1-2+ kHz sample rates being forced through relatively small event queues).
There are hacks to work around all these problems, but not in a universal "won't suddenly break for your use case" way. The most common one I've seen is simply to turn the extra mouse cursors into undecorated transparent windows that get moved around, check which window lies underneath the 'fake cursor', and inject events into that window's event queue...
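To make point 4 concrete, here's a rough sketch of that translation using plain Xlib (only because it's easy to show; every platform has its own version of the same dance). Treat it as an illustration under X11 assumptions, not a reference implementation - real code would also hide the cursor (e.g. via XFixes or a blank cursor) and hook into the event loop instead of polling:

    /* Sketch of point 4: turn absolute pointer samples into relative ones by
     * measuring the offset from the window midpoint and warping back there.
     * Assumes X11/Xlib; cursor hiding and error handling are omitted. */
    #include <X11/Xlib.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        Window win; int revert;
        XGetInputFocus(dpy, &win, &revert);          /* pretend this is "our" window */

        XWindowAttributes wa;
        XGetWindowAttributes(dpy, win, &wa);
        const int cx = wa.width / 2, cy = wa.height / 2;

        for (;;) {
            Window root_r, child_r; int rx, ry, wx, wy; unsigned int mask;
            XQueryPointer(dpy, win, &root_r, &child_r, &rx, &ry, &wx, &wy, &mask);

            int dx = wx - cx, dy = wy - cy;          /* delta since the last warp */
            if (dx || dy) {
                printf("relative sample: %d,%d\n", dx, dy);
                XWarpPointer(dpy, None, win, 0, 0, 0, 0, cx, cy);  /* re-center */
                XFlush(dpy);
            }
            usleep(16000);                           /* ~60 Hz polling, sketch only */
        }
    }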
A modern twist on this: I'm really dismayed that two people can't touch-control an iPad at the same time. Just doing a puzzle with your grandpa on the iPad and you have to take turns moving the pieces. Very unnatural.
Fun fact: There's a demo video from Xerox floating around the internet, where the researchers showed off GUIs that were controlled by both a mouse and a trackball.
One example was using the mouse to paint and the trackball to move a palette-like widget around that was used to define/store colors and patterns (and functioned as an ad-hoc clipboard).
W.r.t. intuitive design, I find it kind of sad that that type of interface wasn't developed further. I saw some of the same ideas pop up in Valve's VR demos lately, though.
That's not really what I'm referring to, though. I'm more interested in the "palette-style" interaction - say your left hand opens a menu or selects an object while your right hand performs "manipulations" on that menu. I guess a really interesting modern spin on that would be the combination of the pen and the dial wheel that come with the new Surface Studio from Microsoft.
Game controllers, while also an interesting study, emerge from a totally different approach and seem to be mostly "figured out" nowadays, although every now and then there's a different take on controller usage (e.g. the controls of Brothers: A Tale of Two Sons).
Slightly off-topic but given the timing I couldn't help myself. April Fools Day 2012, Google Chrome announced Chrome Multitask Mode: https://www.youtube.com/watch?v=UiLSiqyDf4Y
Side note: I have the direct opposite wish: I want a single mouse that can move off the boundary of one PC's monitor(s) and into the monitor of my laptop or across two or more desktop computers without having to change hardware. Same thing with a single keyboard.
I can second this recommendation. I don't use it much nowadays, but it's done good work for me in the past, and I've gladly paid them the $20 for a Pro license.
I have to say that they haven't grasped the whole "open source on GitHub" thing yet though. It took me over a year to get a PR accepted that fixed a simple Qt5 #include problem...
Does it have working encryption yet? It's been a while since I've used it; I stopped because at the time you were basically sending all keystrokes in plaintext over the network.
If you can step outside the confines of existing desktop environments (i.e. you are writing your own user interface), you can get event streams from each pointer device independently by reading `/dev/input/*`.
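For example, a minimal evdev reader in C looks roughly like this - the device path is whatever /dev/input/eventN your pointer shows up as (the default below is just an assumption; check with `evtest` or the /dev/input/by-id/ symlinks), and you need read permission on it:

    /* Minimal sketch: read raw pointer events straight from one evdev device,
     * independently of any display server. Assumes Linux evdev. */
    #include <fcntl.h>
    #include <linux/input.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "/dev/input/event0"; /* assumed default */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct input_event ev;
        while (read(fd, &ev, sizeof ev) == sizeof ev) {
            if (ev.type == EV_REL)          /* relative motion: REL_X / REL_Y */
                printf("rel  code=%u value=%d\n", ev.code, ev.value);
            else if (ev.type == EV_ABS)     /* absolute position (tablets, touch) */
                printf("abs  code=%u value=%d\n", ev.code, ev.value);
            else if (ev.type == EV_KEY)     /* button press/release */
                printf("btn  code=%u value=%d\n", ev.code, ev.value);
        }
        close(fd);
        return 0;
    }

Open one fd per device and you get fully independent streams per pointer; the hard part, as discussed above, is everything a display server normally does on top of that.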
I don't recall ever seeing it either. I work on 2 or 3 monitors at once with a tablet pen in one hand (Photoshop), and it would be nice to use a mouse in my other hand to open tabs etc. In my particular case I noticed a number of ways it would help me.