
> How wonderful it is to flip open the Surface and quickly type a 4 paragraph email response when I need to ... And switching between the two modes of interaction – sometimes typing, sometimes touching – is completely natural.

OK, let's assume that the Surface's keyboard completely solves the problem of not being able to write properly. That still leaves us with the problem of not being able to point properly.

I can't even imagine how the touchscreen could ever rival the precision of the mouse as a pointing device. The average adult human finger is simply too thick to select 5 characters from the middle of a word set in 10-point type, or to drag a Photoshop layer 1 pixel to the right. Even a conventional trackpad on a cheap laptop has better precision than your finger does, though good luck finding graphic designers who prefer trackpads to actual mice. Styluses (styli?) aren't much better, unless your stylus is sharp enough to damage the screen. The fact that touchscreens don't allow you to fine-tune your aim before you click makes precision even harder to achieve.

How do we address this issue? How do we make touchscreen devices useful for those who need spatial precision? What would be the most natural way to add precise pointing abilities to a tablet computer without compromising the advantages of the touchscreen? Carrying around a cordless mouse doesn't seem to be a particularly elegant solution. What do you think? Is touchscreen+keyboard the future of personal computing, or is there always going to be a place for mice as specialty items for graphic designers and some other professionals?




> "How do we address this issue?"

The way people have already done so in touch software to date?

You program 'un-pinch to zoom' to zoom the desired elements, allowing increasing levels of accuracy as needed. And in the cases where you need 'pixel perfect' accuracy [1], you simply include "bump" UI controls or expose explicit pixel coordinates that can themselves be edited to effect the desired movement of the layer or selection or what-have-you (something even keyboard/mouse UIs usually offer).
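
A minimal sketch of that "bump" control idea, in Python (the Layer type and pixel fields here are hypothetical, just to show the mechanism, not any particular toolkit's API):

  class Layer:
      def __init__(self, x=0, y=0):
          self.x, self.y = x, y  # position in whole pixels

  def nudge(layer, dx=0, dy=0):
      # a "bump" button moves the layer by an exact pixel amount,
      # so no precise pointing is required at all
      layer.x += dx
      layer.y += dy

  layer = Layer(100, 40)
  nudge(layer, dx=1)  # the right-arrow bump button: one pixel right
  assert (layer.x, layer.y) == (101, 40)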

Precision is a largely solved issue in touch software. The real problem that will keep mice around in a largely-touch-driven world is the simple ergonomics of spending eight hours at a desk (i.e. gorilla arm). [2]

[1] 'Pixel perfect' is a concept that makes less and less sense as displays reach and exceed 300dpi. Pretty soon we'll all be dealing with vectors and things will be better for it. 'Pixel perfect' accuracy is of merely transitory usefulness until then.

[2] Barring the development of a drafting-table-style variant of the original Surface and either some sort of flawless arm/palm/accidental-touch rejection or a switch from 'any' touch to 'explicit-object' touch.

e.g. the desk ignores all contacts except from a pre-ordained 'pen', 'thimble' or 'glove'.


I'm afraid this doesn't work in all cases. When working in a reduced physical area, irrespective of pixel count, zooming in and snapping to boundaries is counterproductive. Audio wave editing, for example, is an operation on cyclic (obviously) information, and when you zoom in as a means of establishing location, important context is lost.

Imagine a timeline with a periodic wave, interrupted only by a one- or two-cycle click. Zooming in to normalise the ratio of object to finger makes it very easy to lose context. That is, relative positioning left or right is lost, so it becomes frustrating to keep zooming in and out just to get your bearings again. Even attempting this on a trackpad is quite difficult compared to a high-resolution mouse.

There are many cases where it's much better to have a large display area combined with a high-resolution mapping to that area. I could edit waves on a postage-stamp-sized display with my finger if I put my mind to it, but I don't think I would be as productive as on a tablet-sized display. In other cases I need to increase the ratio yet further. I'm afraid stubby fingers on compensatingly scaled objects are not always adequate.


It sounds to me like you're conflating "the trouble with touch" with "the trouble with too-small-screens" and deciding the problem is touch.

But I'm guessing you don't edit waves with a keyboard/mouse on a 3, 4 or 9.7" screen either. So maybe "touch" isn't the obstacle you're really battling in the situation described.

Also, haven't people long had solutions where a 'work area' is zoomed for precision selection/editing while one or more 'larger context' views are maintained (or operate at their own zoom levels) in another chunk of the screen?
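
Schematically, something like this (a sketch with made-up names; real editors will differ):

  def detail_window(center_sample, zoom, total_samples, view_samples=1000):
      # the detail view shows view_samples/zoom samples around the cursor;
      # the overview stays fully zoomed out and just highlights this span
      half = int(view_samples / zoom) // 2
      start = max(0, center_sample - half)
      end = min(total_samples, center_sample + half)
      return start, end

  # zooming the work area 10x around sample 48000 of a 1M-sample file:
  print(detail_window(48000, 10, 1_000_000))  # (47950, 48050)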

Do wave-editing tools not behave like that?


Well, keeping to this example, there are sometimes conflicting requirements. A transient with a long decay tail, such as a crash cymbal or a knock, explosion or gunfire, requires a wide view to properly observe the full effect. The trailing decay can last quite a while in this scenario. The optimum here is to include a segment before and after, or perhaps even more than that, depending; some modulation becomes clearer the more you zoom out, not in.

At the same time, operating on selected segments is more efficiently done with finely controlled hairline cursors, where an obscuring object like a finger doesn't generally get in the way. After this, of course, zooming and other means of fine control and selection come into play.

In scenarios like this it is very much a case of not being able to see the forest for the trees if the representation is made too large.

There are solutions to the problem of precise location, which I think include touch and gestures, though not necessarily solely through touch. In practice I use the right hand for precise hairline location and the left to zoom in with gestures, zoom out again for context and then iterate.

I'm not arguing for mice over touch; I'm looking at precision. I always find it quicker to type on my Bluetooth keyboard than on my iPhone's on-screen keyboard. For me the reason comes down to the ratio between finger and active element: keyboard keys are larger than my fingertips, on-screen keys are smaller.

I actually think that in some cases gestures along the z axis, as well as x and y, would be a way of adding capability.

These opinions are based on having to give up the USB mouse in the field, using JTAGs and external drives on a two-port-only MBP. Using the trackpad leads to much longer work times, simply because it's a less precise device.


Let me start off by saying I was originally taking issue with the idea that touch precision is a problem, that it can't work in certain cases, and that we'll always need mice, all in a complaint that demonstrated a pretty narrow understanding of what has already been done with touch interfaces.

It was never my intention to argue that touch is always the preferable interface for all workloads (something I tried to convey by pointing out how mice will remain relevant for quite some time, due entirely to day-long workloads).

As applies to your concerns, I was just trying to suggest that workable solutions exist, even if they'll always be less-than-ideal for larger quantities of work.

As to your specific concern, I still think a workable solution may be out there, even if it remains undoubtedly less efficient than a mouse and a larger screen.

e.g. Wouldn't the sorts of drag and off-axis drag controls used for seeking in many podcast/audio-player apps [1] address precision selection in cases where too much zoom presents problems, and also obviate the concern about fingers obscuring the wave itself?

[1] Click to 'grab' the selection marker/nubby on the wave/timeline, drag across the x axis to seek, then down the y axis to control the speed of the seek -- typically doing finer and finer-grained seeking for a given x-axis drag length as the finger gets further from the wave/timeline.
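
A sketch of that seek mapping in Python (the constants are made up; real apps tune them):

  def seek_delta(dx_px, y_distance_px, track_seconds, timeline_px):
      # base rate: dragging across the whole timeline scrubs the whole track
      base = track_seconds / timeline_px
      # every 100 px of vertical distance halves the scrub speed, so the
      # same x-axis drag becomes finer the further the finger gets
      fineness = 2 ** (y_distance_px / 100.0)
      return dx_px * base / fineness

  # a 50 px drag on a 1000 px timeline over a 600 s track:
  print(seek_delta(50, 0, 600, 1000))    # 30.0 s, finger on the timeline
  print(seek_delta(50, 200, 600, 1000))  # 7.5 s, finger 200 px below it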


I came close to the drafting-table experience with a pen-driven old-style A4 tablet (not a screen) that I had mapped to my screen coordinates. Hovering the pen over the tablet moved the mouse, and tapping clicked. At the time I felt it was a far superior way of working compared to a mouse. (I had to stop using it when I moved to the Mac and couldn't find a driver.) But even on that you would get gorilla arm from constantly moving your arm across the tablet, despite being able to lean on an elbow. I suspect a drafting-table UI would suffer from the same fatigue issues as a touchscreen interface.


I would love a drafting-table-size touch interface. I think there's still a lot of space left to explore in finding ways to reject "resting" touches. Beyond sensor- or algorithm-based approaches that might look at surface area or pressure, one could also go with some pretty simplistic solutions. Something as dumb as a foot bar connected to a switch, letting active touches through, might be fine for at-desk operation.
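
The foot-bar version really can be that dumb; a quick sketch (hypothetical names, assuming the hardware exposes the switch state):

  class GatedSurface:
      def __init__(self):
          self.switch_held = False  # wired to the foot bar

      def on_touch(self, x, y):
          if not self.switch_held:
              return None  # resting palms and forearms are ignored
          return ("active_touch", x, y)

  surface = GatedSurface()
  assert surface.on_touch(10, 10) is None  # foot off the bar: rejected
  surface.switch_held = True
  assert surface.on_touch(10, 10) == ("active_touch", 10, 10)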


Thinking on this more, Kinect-style cameras looking down the plane of the desk could probably do 'posture' analysis to sort out (un)intentional touches pretty easily.

That might be easier than even trying to develop pressure sensors and heuristics.


First of all, does anybody seriously believe that anybody will be using the Surface for the sort of work where shifting a layer one pixel to the right is critical? The mouse isn't going to disappear, and I have yet to meet anybody who seriously thinks it is.

As to styluses, that seems to be a fairly solved problem. Most designers I know use a Wacom tablet of some description, and those with a big enough budget use a Wacom Cintiq (which is basically a huge pen-driven screen).

As for things like selecting text and other in-between tasks, I agree that current solutions aren't great. However, I suspect people are working on it. My initial guess, after having thought about the problem for 3 seconds, is that some sort of clever context-aware zooming and input scaling might be a good place to start.


> does anybody seriously believe that anybody will be using the Surface to do the sort of work where shifting a layer one pixel to the right is critical?

No, but the article gave me the impression that touchscreen + keyboard = the future of computing. I'm just trying to point out that the future of computing might involve one or two additional devices.

Of course the mouse isn't going to disappear. Many people buy mice with laptops nowadays, even though every laptop comes with a trackpad. The question is, will this trend continue with Surface-like devices, or are mouse sales going to plummet?


It's been a while since I've bought a laptop with a usable trackpad. For example, I just got a Dell, a $1,500 Studio 15, not a crappy low-end one, and the trackpad is broken. I've even had it replaced, but it still refuses to move the cursor about 50% of the time. Mice are preferable to every trackpad I've used, even Apple ones.

Of course this doesn't address things trackpads can do that mice can't, like multitouch gestures. But for pointing quickly and accurately, I always have a mouse paired to my laptop.


Apple trackpads are large (5+ inches), responsive, and reliable. I love mine. I had multitouch and gestures hacked into a Linux driver on my old Dell five years ago, but it was never as nice as this.

And Apple also makes mice that do gestures and multitouch on their touch-responsive top surface.

Gosh, I sound like a fanboy here but I really despise the fruit-based company for their evil corporate practices. You just can't beat their hardware, though.


> You just can't beat their hardware, though.

I agree; the MacBook trackpad is literally the only reason why I don't have a ThinkPad.


PC trackpad quality has gone down the drain since everyone started to implement multitouch. Multitouch never works properly on cheap PC laptops, and the crappy drivers they bundle with such laptops make it a PITA to perform ordinary tasks like dragging and dropping. I could drag and drop more reliably on my 2001 laptop than I can on my multitouch-enabled 2010 laptop with the dreadful HP "clickpad".


That's a good point. You would have expected mouse-using holdovers to eventually give up, but if that's happening, it's happening slowly. I still see people using mice with laptops all the time. I'd expect the same to hold true with the transition to tablets.


I usually get ridiculed for saying this, but I see Leap [1] (I'm not an affiliate; the bastards [2] still haven't sent me a developer kit) as a complete replacement for this kind of interaction.

Your hands don't even need to move from the home row to point around, precision is (supposedly) awesome, and the potential for interaction spans a much greater space than any other input method. Your keyboard could be completely dumb (letting Leap capture where your fingers are, even adding a pressure-sensitivity check), it could replace the touch screen in the same way, etc.

I'm just waiting for the day when they replace the touchpad with it (it's even a similar hardware format) and OSes provide a boundless desktop experience.

[1] https://leapmotion.com/

[2] I still love the guys


Well, after my comment above about scaling resolution by moving back along the z axis, thank you for the link. It's really helpful.


I think you dismiss styluses too quickly here. You can be much more precise with a stylus than with a mouse. Even though the tip of the stylus isn't one pixel in diameter, it's probably the most accurate form of input you can get. That's why Wacom tablets are so popular.

Also:

> Even a conventional trackpad on a cheap laptop has better precision than your finger does.

A conventional touchpad...that you use with your finger...? Am I missing something here? How is a touchpad more accurate than touching a screen?


With most touchpads/trackpads, you can tilt your finger by a few degrees in any direction to fine-tune your aim before you click. This enables pixel-perfect clicking when you need it. (Use the physical button when you do this, because tapping will ruin your aim.) With touchscreens, you can't fine-tune your aim before you tap. You just tap, and hope that you hit the correct coordinates.


And with most touch-screens you can (un)pinch to zoom in and get as accurate as you want/need.

The entire desire for 'pixel-perfect' clicking on a static UI is a symptom of trying to shoe-horn touch into interfaces not explicitly designed for it. Has that ever worked?


> I think you dismiss styluses too quickly here.

No, he doesn't.

> You can be much more precise with a stylus than with a mouse.

Not with a touchscreen stylus, you can't. Touchscreens actually have a considerably lower hardware resolution than the screen itself.

> That's why Wacom tablets are so popular.

Completely different technology.

> Am I missing something here? How is a touchpad more accurate than touching a screen?

By having it input movement (at a configurable granularity) rather than position. That makes it a completely different input method.
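
That difference is easy to see in a sketch (Python; the gain value is arbitrary):

  # touchscreen: absolute mapping, the finger position IS the cursor position
  def touchscreen_point(x, y):
      return x, y

  # touchpad: relative mapping, finger *movement* scaled by a gain, so
  # lowering the gain trades coverage for precision
  cursor = [500, 300]
  def touchpad_move(dx, dy, gain=0.25):
      cursor[0] += dx * gain
      cursor[1] += dy * gain
      return tuple(cursor)

  print(touchscreen_point(120, 80))  # (120, 80): wherever the finger lands
  print(touchpad_move(4, 0))         # a 4 px finger movement nudges the cursor 1 px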


The Surface keyboard has a trackpad. That just moves the problem from hardware to software, though. How do you make a single UI that is equally fit for touch and mouse? I doubt that you can. But it probably doesn't matter; touch is going to win, and we're going to be stuck with mice as a second-class input.


> How do you make a single UI that is equally fit for touch and mouse?

Yep, that's the question I was trying to raise. Having a mouse in addition to a touchscreen not only looks redundant to those who don't need precision clicking, it also causes a potential conflict between different UI paradigms.

When touch is the primary input, every UI element needs to be big enough to touch with fat fingers, so you usually end up with UIs that look like they were designed for kindergarteners. You simplify and hide features until you upset every "power user". When the mouse is the primary input, on the other hand, the added precision allows you to cram more clickable elements into small spaces, possibly freeing up more screen real estate for other things.


Do most functions of most software require precision clicking? Why not just have mouse-specific UI for the particular tasks that benefit from it, and common UI for the rest? This would be similar to how some advanced programs that are basically mouse-driven still have keyboard shortcuts and sometimes even command line consoles, etc., for more advanced usage. I don't think you need an entirely separate UI to achieve this, you can just have certain special widgets, views, "power tools" etc. that act as "mouse shortcuts".


I find most software involves highlighting text. I use the ability to click the space between individual characters quite often. Finger touch plus adjustments with arrow keys gets the job done, but nothing beats the mouse at this for me.

When Windows 95 came out we all had fun trying all the crazy pointer themes it supported (hands, etc.), but I suspect we've all now settled on the fine-tip pointer arrow and I-beam.


I agree very much on this. There are some precision tasks that I do, such as audio waveform editing, circuit board layout, Photoshop image masking and so on, that can't be done efficiently without a pointing device or a surrogate. When using the MBP I pair it with an MX Performance mouse to get some control over these processes. The trackpad, while useful for some tasks, simply isn't accurate enough.

I hear many comments claiming that touchscreens can supersede mice, and even keyboards, but I don't think this is right at all. The form factor of a mouse can be made to align very closely, ergonomically, with the time-proven accuracy of the artisan's dexterous hand. Angular displacement at the wrist sweeps a relatively larger arc than on a trackpad, effectively increasing resolution in proportion to the distance from the pivot. Wacom styluses are also precision devices, and yet I would still suggest that they aren't as well matched to watchmaker-like skills as the ordinary extended hand.

Compared with high-resolution devices, big fingertips and small-area trackpads are clumsy and add difficulty. I wouldn't want to design fine watch mechanisms with either.

It's essentially an issue of resolution over area. I can lower a trackpad's tracking speed to gain some accuracy, but in doing so I reduce the effective area it covers.

In respect of how to have both accuracy and the simplicity of touch, I'm inclined to think that making a gesture and moving the hand backwards along the z axis to regain resolution space would be one approach. We already have projected virtual keyboards; I would be happy moving into a projected motion-detection area to regain accuracy. Until then, I have no option but to use a high-resolution location device, like a mouse.


It won't. Most of my work doesn't require precision pointing, but when it does, it really does.

I often think that the need for precision pointing in everyday basic tasks is a problem that will be designed away under the constraints of tablet design. The rest of the precision-pointing tasks can stay in their specific domains.

I would very much like a traditional drawing table that is a high res touch screen, with a mouse and keyboard, and a super accurate variety of stylus-like tools, with unlimited modeless multitouch. Without menus or gestures, I could use my eyes or just my sense of place and touch to find the tool I want from my own organizational method. Like the good old days. But even with that, I still would like a virtual cursor with a mouse-like pointer - it's just too useful.


People have been saying for years that mice are going to disappear. It's nothing recent, really; even through the 80s a number of innovative solutions were designed, and every time someone claimed it was the "future" of the interface.

Well, now we have computers which can be controlled reliably with fingers: tablets and phones. But don't expect to do text selection properly with fingers. Of course, solutions exist in software, but they are all inferior to using a good old mouse with 2 buttons.

And even if "touch" were as good as a mouse, it's just not very efficient, physically speaking. A lot of energy is wasted moving your whole arm in several directions, while a mouse keeps only your wrist in motion. It is far more relaxing to use a mouse for long operations than to touch your screen, even if touch seems more intuitive.

Tablets and phones work well because we use them casually, for short intervals. If you were to use them for a whole day to do work, you would switch back to a laptop/desktop with a mouse in no time.

There's no replacement coming for the mouse for those who work long hours with their computers, despite what the hipsters say.


I'm not sure this is a problem. Android supports precise text-selection via automatic zooming and start and end block markers, like this: https://www.dropbox.com/s/grmlsgqgqc5qqs6/Screenshot_2012-11...


While I agree that it is possible, I don't think this method (or Apple's) is anywhere near as efficient as a mouse. It will always be 20 times slower to select a part of a word, compared to a mouse, because you have to hide the word with your finger, see a popup of what's underneath, and then somehow select the letters.


It still isn't precise enough (well, not unless you're selecting monospaced text), and even ignoring that, it's an extra level of abstraction, because you don't just click and get where you want in one step.


Stabbing at the screen with my sausage finger works fine for me, in fact it works so well that I hadn't thought about it until now. Of course, I don't code or write on my touch devices and when I do code/write I use a different machine (a laptop or desktop) and have no impulse to touch the screen. Different modes. I don't understand your comment about monospace font type.


I personally have trouble on both iOS and Android with narrow characters in variable-width fonts (like ',il.;:).

A fixed-width font would help me greatly with those.


It is also extremely failure-prone, bordering on useless due to sensitivity bugs. But that is fixable in theory.


> OK, let's assume that the Surface's keyboard completely solves the problem of not being able to write properly. That still leaves us with the problem of not being able to point properly.

The touch cover also has an embedded touchpad which you can use if you want to. I don't, generally, but it's there if you want it.


Manual pixel-level adjustments have always been difficult, even with a mouse. Place elements roughly by eye, and then use nudge buttons to position them exactly. Have good support for snapping and alignment in a UI that people actually understand.


Detecting the eye's focus would be faster and more accurate.


Four buttons that nudge an element by one pixel are perfectly accurate. For most of us, aligning things on screen is not an exact science. Drawing is a process of minor adjustments, not a single operation. There is scope for software to try to guess the intent and ignore the exact placement, though this is difficult: snapping can be very annoying when it is not done right.


The Surface keyboard cover also includes a touchpad, so this particular problem may be solved already.


I had the idea to make a floating panel that essentially acts as a touchpad for tablets. However, current tablet APIs make this pretty difficult, so it isn't possible just yet.


How about this?

  1. touch layer with finger
  2. hold
  3. the UI zooms in at the point of touch. pixels are now 1cm wide.
  4. move to desired pixel (or row/column)

If you move outside the zoomed area, the UI zooms back out, allowing you to drag the element anywhere on the screen, then zooms in again when you stop dragging.

Looks doable, right?
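
Roughly, as a sketch (the zoom factor and view size here are guesses):

  class HoldZoom:
      ZOOM = 38  # ~1 cm per pixel on a ~96 dpi screen

      def __init__(self, view_px=400):
          self.view_px = view_px
          self.zoomed = False

      def on_hold(self, x, y):
          # after the hold timeout fires, zoom in around the touch point
          self.center = (x, y)
          self.zoomed = True

      def on_drag(self, x, y):
          if not self.zoomed:
              return None
          if not (0 <= x < self.view_px and 0 <= y < self.view_px):
              self.zoomed = False  # dragged past the edge: zoom back out
              return "zoom_out"
          # inside the zoomed view, a finger-width step is one real pixel
          px = self.center[0] + (x - self.view_px // 2) // self.ZOOM
          py = self.center[1] + (y - self.view_px // 2) // self.ZOOM
          return int(px), int(py)

  hz = HoldZoom()
  hz.on_hold(120, 80)
  print(hz.on_drag(238, 200))  # (121, 80): one pixel right of the touch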


That would work. However, it now takes five seconds to do something that previously took half a second. What I'm wondering is, if you can connect a keyboard to a tablet, why not a small mouse or touchpad too?


I have an Asus Transformer Infinity. I can connect a mouse to it. I get an actual pointer, right-click and everything.


I totally agree. I think the future of pointing is something Kinect-like with accurate on-screen feedback.



