> How wonderful it is to flip open the Surface and quickly type a 4 paragraph email response when I need to ... And switching between the two modes of interaction – sometimes typing, sometimes touching – is completely natural.
OK, let's assume that the Surface's keyboard completely solves the problem of not being able to write properly. That still leaves us with the problem of not being able to point properly.
I can't even imagine how the touchscreen could ever rival the precision of the mouse as a pointing device. The average adult human finger is simply too thick to select 5 characters from the middle of a word set in 10-point type, or to drag a Photoshop layer 1 pixel to the right. Even a conventional trackpad on a cheap laptop has better precision than your finger does, though good luck finding graphic designers who prefer trackpads to actual mice. Styluses (styli?) aren't much better, unless your stylus is sharp enough to damage the screen. The fact that touchscreens don't allow you to fine-tune your aim before you click makes it even more difficult to achieve precision.
How do we address this issue? How do we make touchscreen devices useful for those who need spatial precision? What would be the most natural way to add precise pointing abilities to a tablet computer without compromising the advantages of the touchscreen? Carrying around a cordless mouse doesn't seem to be a particularly elegant solution. What do you think? Is touchscreen+keyboard the future of personal computing, or is there always going to be a place for mice as specialty items for graphic designers and some other professionals?
The way people have already done so in touch software to date?
You implement 'un-pinch to zoom' to magnify the desired elements, allowing increasing levels of accuracy as needed. And in the cases where you need 'pixel perfect' accuracy [1], you simply include "bump" UI controls or expose explicit pixel coordinates that can themselves be edited to effect the desired movement of the layer or selection or what-have-you (something even keyboard/mouse UIs usually offer).
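Concretely, here's the kind of thing I mean, as a rough Python sketch (no real UI toolkit; the Layer class and its methods are made up purely for illustration): coarse placement by touch, exact placement by bumping or typing a value.

    # Hypothetical sketch: coarse touch placement + exact "bump"/typed coordinates.
    # No real UI framework here; Layer and its methods are invented for illustration.

    class Layer:
        def __init__(self, x=0, y=0):
            self.x, self.y = x, y

        def drag_to(self, x, y):
            """Coarse placement: wherever the finger lands."""
            self.x, self.y = round(x), round(y)

        def nudge(self, dx=0, dy=0):
            """'Bump' buttons: move exactly one pixel (or any step) per tap."""
            self.x += dx
            self.y += dy

        def set_exact(self, x=None, y=None):
            """Typed coordinates: the escape hatch for pixel-perfect placement."""
            if x is not None:
                self.x = int(x)
            if y is not None:
                self.y = int(y)

    layer = Layer()
    layer.drag_to(103.7, 58.2)   # fat-finger drag gets you close: (104, 58)
    layer.nudge(dx=1)            # tap the right-arrow bump button: (105, 58)
    layer.set_exact(y=60)        # type the exact y you want: (105, 60)
    print(layer.x, layer.y)      # 105 60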
Precision is a largely solved issue in touch software. The real problem that will keep mice around in a largely touch-driven world is the simple ergonomics of spending eight hours at a desk (i.e. gorilla arm). [2]
[1] 'Pixel perfect' is a concept that makes increasingly less sense as displays reach and exceed 300dpi. Pretty soon we'll all be dealing with vectors and things will be better for it. 'Pixel perfect' accuracy is of mere transitory usefulness until then.
[2] Barring the development of a drafting-table-style variant of the original surface and either some sort of flawless arm/palm/accidental-touch rejection or a switch from 'any' touch to 'explicit-object' touch.
e.g. the desk ignores all contacts except from a pre-ordained 'pen', 'thimble' or 'glove'.
I'm afraid this doesn't work in all cases. When working in a reduced physical area, irrespective of pixel count, zooming in and snapping to boundaries is counterproductive. Audio wave editing, for example, is an operation on cyclic (obviously) information, and when you zoom in as a means of pinning down a location, important context is lost.
Imagine a timeline with a periodic wave, interrupted only by a one- or two-cycle click. Zooming in to normalise the ratio of object to finger makes it very easy to lose context; that is, relative positioning left or right is lost. So it becomes frustrating to zoom in and out just to get your bearings again. Even attempting this on a trackpad is quite difficult compared to a high-resolution mouse.
There are many cases where it's much better to have a large display area combined with a high-resolution mapping to that area. I could edit waves on a postage-stamp-sized display with my finger if I put my mind to it; I don't think I would be as productive as on a tablet-sized display, though. In other cases I need to go even further above that ratio. I'm afraid stubby fingers on compensatingly scaled objects aren't always adequate.
It sounds to me like you're conflating "the trouble with touch" with "the trouble with too-small-screens" and deciding the problem is touch.
But I'm guessing you don't edit waves with a keyboard/mouse on a 3, 4 or 9.7" screen either. So maybe "touch" isn't the obstacle you're really battling in the situation described.
Also, haven't people long had solutions where a 'work area' is zoomed for precision selection/editing while one-or-more 'larger context' views are maintained (or operates on its own zoom level) in another chunk of the screen?
Well, keeping to this example, there are sometimes conflicting requirements. A transient with a long decay train, such as a crash cymbal, a knock, an explosion or gunfire, requires a wide view to properly observe the full effect. The trailing decay can last for quite a while in this scenario. The optimum here is to include a segment before and afterwards, or perhaps even more than that, since some modulation becomes clearer the more you zoom out, not in.
At the same time, operating on selected segments is more efficiently done with finely controlled hairline cursors, where an obscuring object like a finger generally doesn't get in the way. After this, of course, zooming and other means of fine control and selection come into play.
In scenarios like this it is very much a case of not being able to see the forest for the trees if the representation is made too large.
There are solutions to the problem of precise location, which I think include touch and gestures, though not necessarily solely through touch. In practice I use the right hand for precise hairline location and the left to zoom in with gestures, zoom out again for context and then iterate.
I'm not arguing for mice over touch; I'm looking at precision. I always find it quicker to type on my Bluetooth keyboard than on my iPhone's screen keyboard. The reason, for me, simply comes down to the ratio between finger and active element: keyboard keys are larger than my fingers, on-screen keys are smaller.
I actually think that in some cases gestures in the z plane, as well as x and y would be a way of adding capability.
These opinions are based on having to give up the USB mouse in the field, using JTAGs and external drives on a two-port-only MBP. Using the trackpad leads to much longer work times, simply because it's a less precise device.
Let me start off by saying I was originally taking issue with the idea that touch precision is a problem. That it can't work in certain cases and that we'll always need mice. And all that in a complaint that demonstrated a pretty narrow understanding of what has already been done with touch interfaces.
It was never my intention to argue that touch is always the preferable interface for all workloads (something I tried to convey by pointing out how mice will remain relevant for quite some time, due entirely to day-long workloads).
As applies to your concerns, I was just trying to suggest that workable solutions exist, even if they'll always be less-than-ideal for larger quantities of work.
As to your specific concern, I still think a workable solution may be out there, even if it remains undoubtedly less efficient than a mouse and a larger screen.
e.g.
Wouldn't the sorts of drag and off-axis drag controls that are used for seek in many podcast/audio-player apps [1] address precision-selection in cases where too-much-zoom presents problems, and also obviate the concern about fingers obscuring the wave itself?
[1] click to 'grab' the selection marker/nubby on the wave/timeline, drag along the x-axis to seek, then down the y-axis to control the speed of the seek -- typically doing finer and finer seeking for a given x-axis drag length as the finger gets further from the wave/timeline
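To make the scaling concrete, here's a rough sketch of that kind of off-axis scrubbing (the constants are arbitrary assumptions, not taken from any particular app): the further the finger drags away from the timeline, the less each pixel of horizontal movement actually seeks.

    # Sketch of off-axis "precision scrubbing": horizontal drag seeks, while the
    # vertical distance from the timeline scales the seek rate down for finer control.
    # seconds_per_pixel and half_rate_distance are made-up example values.

    def seek_delta(dx_pixels, dy_pixels,
                   seconds_per_pixel=0.5, half_rate_distance=80.0):
        """Seconds to move the playhead for a drag of (dx, dy) pixels,
        where dy is the finger's distance below the timeline."""
        # Rate falls off smoothly: half speed one "notch" below the
        # timeline, quarter speed at three notches, and so on.
        rate = 1.0 / (1.0 + abs(dy_pixels) / half_rate_distance)
        return dx_pixels * seconds_per_pixel * rate

    print(seek_delta(100, 0))     # 50.0 -> coarse seek, finger on the timeline
    print(seek_delta(100, 80))    # 25.0 -> finer, finger one notch below
    print(seek_delta(100, 240))   # 12.5 -> finer still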
I came close to the drafting-table experience with a pen-driven old-style A4 tablet (not a screen) that I had mapped to my screen coordinates. Hovering the pen over the tablet moved the mouse, and tapping clicked. At the time I felt it was a far superior way of working than a mouse. (I had to stop using it when I moved to the Mac and couldn't find a driver.) But even on that you would get gorilla arm from constantly moving your arm across the tablet, despite being able to lean on an elbow. I suspect a drafting-table UI would suffer from the same fatigue issues as a touchscreen interface.
I would love a drafting table size touch interface. I think there's still a lot of space left to explore to find a way to reject "resting" touches. Beyond sensor / algorithm based approaches which might look at surface area or pressures, one could also go with some pretty simplistic solutions. Something as dumb as a foot bar connected to a switch letting active touches through might be fine for at-desk operation.
Thinking on this more, Kinect-style cameras looking down the plane of the desk could probably do 'posture' analysis to sort out (un)intentional touches pretty easily.
That might be easier than even trying to develop pressure sensors and heuristics.
First of all, does anybody seriously believe that anybody will be using the Surface to do the sort of work where shifting a layer one pixel to the right is critical? The mouse isn't going to disappear, and I have yet to meet anybody who seriously thinks it is.
As for styluses, it seems to be a fairly solved problem. Most designers I know use a Wacom tablet of some description, and those with a big enough budget use a Wacom Cintiq (which is basically a huge touchscreen).
As for things like selecting text and other in between tasks, I agree that current solutions aren't great. However I suspect people are working on it. My initial guess after having thought about the problem for 3 seconds is that some sort of clever context aware zooming and input scaling might be a good place to start.
> does anybody seriously believe that anybody will be using the Surface to do the sort of work where shifting a layer one pixel to the right is critical?
No, but the article gave me the impression that touchscreen + keyboard = the future of computing. I'm just trying to point out that the future of computing might involve one or two additional devices.
Of course the mouse isn't going to disappear. Many people buy mice with laptops nowadays, even though every laptop comes with a trackpad. The question is, will this trend continue with Surface-like devices, or are mouse sales going to plummet?
It's been a while since I've bought a laptop with a usable trackpad. For example, I just got a Dell, a $1,500 Studio 15, not a crappy low-end one, and the trackpad is broken. I've even had it replaced, but it still refuses to move the cursor about 50% of the time. Mice are preferable to every trackpad I've used, even Apple ones.
Of course this doesn't address things trackpads can do that mice can't, like multitouch gestures. But for pointing quickly and accurately, I always have a mouse paired to my laptop.
The Apple trackpads are large at 5+ inches and responsive and reliable. I love mine. I had multi-touch and gestures hacked into a Linux driver on my old Dell five years ago but it was never nice like this.
And Apple also makes mice that do gestures and multi-touch on the touch responsive top surface.
Gosh, I sound like a fanboy here but I really despise the fruit-based company for their evil corporate practices. You just can't beat their hardware, though.
PC trackpad quality has gone down the drain since everyone started to implement multitouch. Multitouch never works properly on cheap PC laptops, and the crappy drivers they bundle with such laptops make it a PITA to perform ordinary tasks like dragging and dropping. I could drag and drop more reliably on my 2001 laptop than I can on my multitouch-enabled 2010 laptop with the dreadful HP "clickpad".
That's a good point; you would have expected mouse-using holdovers to eventually give up, but if that's happening, it's happening slowly. I still see people using mice with laptops all the time. I could expect the same to hold true with the transition to tablets.
I usually get ridiculed for saying this, but I see Leap [1] (I'm not an affiliate; the bastards [2] still didn't send me a developer kit) as a complete replacement for this kind of interaction.
Your hands don't even need to move from the home row to point around, precision is (supposedly) awesome, the potential for interaction spans a much greater space than any other input method, and your keyboard can be completely dumb (letting Leap capture where your fingers are, perhaps even adding a pressure-sensitivity check there); it could replace the touchscreen in the same way, etc.
I'm just waiting for the day when they replace the touchpad with it (it's even a similar hardware format) and OSes provide a boundless desktop experience.
I think you dismiss styluses too quickly here. You can be much more precise with a stylus than with a mouse. Even though the tip of the stylus isn't one pixel in diameter, it's probably the most accurate form of input you can get. That's why Wacom tablets are so popular.
Also:
> Even a conventional touchpad on a cheap laptop has better precision than your finger does.
A conventional touchpad...that you use with your finger...? Am I missing something here? How is a touchpad more accurate than touching a screen?
With most touchpads/trackpads, you can tilt your finger by a few degrees in any direction to fine-tune your aim before you click. This enables pixel-perfect clicking when you need it. (Use the physical button when you do this, because tapping will ruin your aim.) With touchscreens, you can't fine-tune your aim before you tap. You just tap, and hope that you hit the correct coordinates.
And with most touch-screens you can (un)pinch to zoom in and get as accurate as you want/need.
The entire desire for 'pixel-perfect' clicking on a static UI is a symptom of trying to shoe-horn touch into interfaces not explicitly designed for it. Has that ever worked?
The Surface keyboard has a trackpad. That just moves the problem from hardware to software, though. How do you make a single UI that is equally fit for touch and mouse? I doubt that you can. But it probably doesn't matter; touch is going to win and we're going to be stuck with mice as a second-class input.
> How do you make a single UI that is equally fit for touch and mouse?
Yep, that's the question I was trying to raise. Having a mouse in addition to a touchscreen not only looks redundant to those who don't need precision clicking, it also causes a potential conflict between different UI paradigms.
When touch is the primary input, every UI element needs to become big enough to touch with fat fingers, so you usually end up with UIs that look like they were designed for kindergarteners. You simplify and hide features until you upset every "power user". When the mouse is the primary input, on the other hand, the added precision allows you to cram more clickable elements into small spaces, possibly freeing up more screen real estate for other things.
Do most functions of most software require precision clicking? Why not just have mouse-specific UI for the particular tasks that benefit from it, and common UI for the rest? This would be similar to how some advanced programs that are basically mouse-driven still have keyboard shortcuts and sometimes even command line consoles, etc., for more advanced usage. I don't think you need an entirely separate UI to achieve this, you can just have certain special widgets, views, "power tools" etc. that act as "mouse shortcuts".
I find most software involves highlighting text. I use the ability to click the space between individual characters quite often. Finger touch plus adjustments with arrow keys gets the job done, but nothing beats the mouse at this for me.
When Windows 95 came out we all had fun trying all the crazy pointer themes it supported (hands, etc.), but I suspect we've all now settled on the fine-tip pointer arrow and I-beam.
I agree very much with this. There are some precision tasks that I do, such as audio waveform editing, circuit board layout, Photoshop image masking and so on, that can't be done efficiently without a pointing device or a surrogate. When using the MBP I use an MX Performance mouse to get some control over some of these processes. The trackpad, while useful for some tasks, simply isn't accurate enough.
I do hear many comments claiming that touchscreens can supersede mice, and even keyboards, but I don't think this is right at all. The form factor of a mouse can be made to align very closely, ergonomically, with the time-proven accuracy of the artisan's dexterous hand. Angular displacement at the wrist traverses a relatively larger sweeping arc than with a trackpad, effectively increasing resolution at something like the square of the distance. Wacom styluses are also precision devices, and yet I would still suggest that they aren't as conformant with watchmaker-like skills as the ordinary extended hand.
Compared with high resolution devices, big finger tips and small area track pads are clumsy and add difficulty. I wouldn't want to design fine watch mechanisms with either.
It's essentially an issue of resolution over area. I can reduce the resolution (tracking speed) of a trackpad to attain some accuracy, but in doing so I reduce the effective area.
In respect of how to have both accuracy and the simplicity of touch, I'm inclined to think that making a gesture and moving the hand backwards through the z plane to regain resolution space would be one approach. We already have projected virtual keyboards; I would be happy with moving into a projected motion-detection area to regain accuracy. Until then, I have no option but to use a high-resolution pointing device, like a mouse.
It won't. Most of my work doesn't require precision pointing, but when it does, it really does.
I often think that the need for precision pointing in everyday basic tasks is a problem that will be overcome by the constraints of tablet design. The rest of the precision pointing can stay in their specific domains.
I would very much like a traditional drawing table that is a high res touch screen, with a mouse and keyboard, and a super accurate variety of stylus-like tools, with unlimited modeless multitouch. Without menus or gestures, I could use my eyes or just my sense of place and touch to find the tool I want from my own organizational method. Like the good old days. But even with that, I still would like a virtual cursor with a mouse-like pointer - it's just too useful.
People have been saying for years that mice are going to disappear. It's nothing recent, really; even through the '80s a number of innovative solutions were designed, and every time someone claimed it was the "future" of the interface.
Well, now we have computers that can be controlled reliably with fingers: tablets and phones. But don't expect to do text selection in a proper way with fingers. Of course, solutions exist in software, but they are all inferior to using a good old mouse with 2 buttons.
And even if "touch" were as good as a mouse, it's just not, physically speaking, very efficient. There's a lot of energy wasted moving your whole arm in several directions, while a mouse only keeps your wrist in motion. It is way more relaxing to use a mouse for long operations than touching your screen, even if touch seems more intuitive.
Tablets, phones, they work well because we use them casually, for short intervals. If you were to use them for a whole day to do work on them, you would switch back to a laptop/desktop with a mouse in no time.
There's no replacement coming for the mouse for those who work long hours with their computers, despite what the hipsters say.
While I agree that it is possible, I don't think this method (or Apple's) is anywhere near as efficient as a mouse. It will always be 20 times slower to select part of a word compared to a mouse, because you have to hide the word with your finger, see a popup of what's underneath, and then somehow select the letters.
It still isn't precise enough (well, not unless you're selecting monospace font type), and even ignoring that, it's an extra level of abstraction, because you don't just click and get where you want in one step.
Stabbing at the screen with my sausage finger works fine for me, in fact it works so well that I hadn't thought about it until now. Of course, I don't code or write on my touch devices and when I do code/write I use a different machine (a laptop or desktop) and have no impulse to touch the screen. Different modes. I don't understand your comment about monospace font type.
> OK, let's assume that the Surface's keyboard completely solves the problem of not being able to write properly. That still leaves us with the problem of not being able to point properly.
The touch cover also has an embedded touchpad which you can use if you want to. I don't, generally, but it's there if you want it.
Manual pixel-level adjustments have always been difficult, even with a mouse. Place elements roughly by eye, and then have nudge buttons to position them exactly. Have good support for snapping and alignment in a UI that people actually understand.
Four buttons that nudge an element by one pixel are perfectly accurate. For most of us, aligning things on screen is not an exact science. Drawing is a process of minor adjustments, not a single operation. There is scope for software trying to guess the intent and ignoring the exact placement. This is difficult, as snapping can be very annoying when it is not done right.
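A minimal sketch of "snap only when it looks intentional" (tolerance and guide positions are made-up example values): positions within a few pixels of a guide get snapped, everything else stays exactly where the user put it.

    # Sketch: snap to a guide only when the drop point is already close to one;
    # otherwise respect the exact placement. Values below are illustrative only.

    def snap(value, guides, tolerance=4.0):
        """Snap value to the nearest guide if within tolerance, else keep it."""
        nearest = min(guides, key=lambda g: abs(g - value), default=None)
        if nearest is not None and abs(nearest - value) <= tolerance:
            return nearest
        return value

    guides = [0, 100, 200, 300]    # e.g. column edges
    print(snap(97.5, guides))      # 100  -> close enough, alignment was probably intended
    print(snap(137.0, guides))     # 137.0 -> nowhere near a guide, leave it alone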
I had the idea to make a floating panel that essentially acts as a touchpad for tablets. However, current tablet APIs make this pretty difficult so it isn't possible just yet.
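The core of such a panel would be tiny: treat it as a relative pointing surface and apply a gain to the finger deltas. A rough sketch with hypothetical names and no real tablet API (which is exactly the part that's currently hard to get at):

    # Sketch of a floating "virtual trackpad" panel: finger movement inside the
    # panel becomes relative cursor movement elsewhere on screen.
    # Everything here is hypothetical; wiring it to a real tablet API is the hard part.

    class VirtualTrackpad:
        def __init__(self, screen_w, screen_h, gain=0.5):
            self.screen_w, self.screen_h = screen_w, screen_h
            self.gain = gain                  # < 1.0 trades pointer speed for precision
            self.cursor = [screen_w // 2, screen_h // 2]
            self._last = None                 # last finger position inside the panel

        def touch_move(self, x, y):
            """Called with finger coordinates local to the floating panel."""
            if self._last is not None:
                dx, dy = x - self._last[0], y - self._last[1]
                self.cursor[0] = min(max(self.cursor[0] + dx * self.gain, 0), self.screen_w - 1)
                self.cursor[1] = min(max(self.cursor[1] + dy * self.gain, 0), self.screen_h - 1)
            self._last = (x, y)

        def touch_up(self):
            self._last = None                 # lifting the finger doesn't move the cursor

    pad = VirtualTrackpad(1280, 800)
    pad.touch_move(10, 10)
    pad.touch_move(50, 10)        # 40 px of finger travel -> 20 px of cursor travel
    print(pad.cursor)             # [660.0, 400.0]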
That would work. However, it now takes five seconds to do something that previously took half a second. What I'm wondering is, if you can connect a keyboard to a tablet, why not a small mouse or touchpad too?
He is right about the future of interaction being touch + keyboard. I have been using my Transformer Prime for about a year now as a laptop replacement. It doesn't do everything I ask of computers (but then again, neither would any laptop), but it does have this great interaction where I can switch between windows with the Honeycomb switcher app (replaces alt-tab), swipe between tabs in the terminal, and use the touch screen to scroll back text. It is still the best device for reading e-mail, and it's good at writing all but the longest. However, the coup de grâce is that I can use it for 12 to 18 hours without even thinking about plugging it into something. I can leave it unplugged for weeks and come back to a decent charge, and to my knowledge the only time I have ever turned it off was when I flashed CM10 on it. If the Surface can provide that kind of experience, then I think it's just a matter of time until it becomes a standard piece of kit for computer users.
I couldn't upvote this hard enough. I've been using my Transformer Prime for 10 months, and it has changed the way I do many things. I still do my consulting work on a laptop, but almost all of my side-project coding is done from my Prime. I can sit on my couch, with my feet up for 6 hours and code - and like you say, still have 10 hours of battery life for gaming or email or web surfing thereafter. I couldn't think of a more comfortable development experience.
The money line of Atwood's post about the future, though, is this one:
And I'm beginning to wonder about my desktop a little, because lately I'm starting to think I wanna touch that, too.
I don't know that individual touch computing on a desktop will ever be a good idea, but I've been thinking a lot lately that a 42in Android tablet would make a ton of sense. Think of everything you use a whiteboard for to the Nth degree. Gathering around a large touchscreen to communicate ideas, allowing others to manipulate parts of the screen with wireless keyboards, and providing office-wide or household announcements and reminders just makes a ton of sense to me.
Personal computing has already gone touch, it's time for group computing to go touch too.
That combined with touch and keyboard is the future. These devices will soon accept any part of our body as an input vector. They'll be able to identify multiple users and act accordingly. Look at what Google is showcasing can be done with a person's eyeball alone; if that works the way they've been promoting it, we're in for something.
I can't praise the Surface for having a touchscreen and a keyboard; it's been done before. Done by Microsoft, Acer, Apple, Sony, etc.
I just cannot imagine spending my entire day holding or reaching up to touch a device. I work at a desk. I code. I write documentation. I send emails. I don't see how touch fits into my workflow at all.
Touching my desktop screens would involve moving my hands to around eye level. I hold my phone/tablet a lot lower.
My real problem in the new world order is that tablets seem to have topped out at 10 inches. While that may be comfortable for many, my hand span is the same width as the screen and I'd prefer something a lot larger.
* Old Android version, and Toshiba has no history of staying fresh with Android
* Tegra 3 based, and will be released just as Tegra 4 devices come out
I do like that it has an SD card slot, micro USB/HDMI, etc. Sadly, all they have done is take a mediocre tablet, scale it up, and then delay releasing it.
10/GUI is a proposed new user interface paradigm. Created in 2009 by R. Clayton Miller, it combines multi-touch input with a new windowing manager.
It splits the touch surface away from the screen, so that user fatigue is reduced and the users' hands don't obstruct the display.[1] Instead of placing windows all over the screen, the windowing manager, Con10uum, uses a linear paradigm, with multi-touch used to navigate between and arrange the windows.[2] An area at the right side of the touch screen brings up a global context menu, and a similar strip at the left side brings up application-specific menus.
I'm interested in looking at the new Asus Vivo Tab for this very reason. Though I'm likely not going to jump on the first generation and I'll let the MSFT App store fill up a little. I played with my neighbor's T300 that was recently upgraded to Jelly Bean. It was an impressive experience with the keyboard. Still might hop on that bandwagon.
I might, just might, ditch my laptop for a Transformer Prime running Ubuntu Mobile (or whatever it'll be called) if they do a decent job with their mobile OS. I assume it would be better as a tablet OS for a 'computer replacement tablet'.
I loved this article, as it clearly demonstrates Jeff's love of all hardware. There is no real hatred for any brand here, and he compliments all the major devices while remaining objective.
One comment I would make is regarding this: "I knew that the Nexus 7 was really working for me when I gave mine to my father as a spontaneous gift while he was visiting, then missed it sorely when waiting for the replacement to arrive." I know that Jeff is above my pay bracket, but even still, this makes me wonder what sort of money stream this is, if he can simply hand away Nexus 7s. I earn a respectable living, but that sort of money is still substantial enough that I can't afford to simply give one away.
1 high end laptop - $1500
1 netbook - $500
2 phones - $500 X 2
2 tablets - $500 + $250
1 ereader - $100
1 cool TV device - $250
1 set of accessories - $500
Total: $4,600 a year, roughly $390 per month
It's a lot more than I spend but it definitely isn't super rich level spending. If someone spent $390 owning a nicer car, a fancier kitchen, slightly better apartment, a Harley Davidson, etc. you wouldn't even think about it. I knew students that spent $390 more than me (I spent 0) on clothes.
To be fair, he also owns at least one desktop with several 30-inch screens and probably a nice TV. I admit, when you spread it out as a monthly cost it's not that high, but $390 a month is still almost $5k a year, which means most people would be working full time for between one and five months a year just to pay for gadgets.
Sure you can. Over the course of a year, you must spend €300 on nothing (a fancy meal you forget about as soon as your eyes close that night, pints in the pub, that vacation you planned for a month but which wasn't as much fun as expected). That could have gone into that give-away Nexus 7.
Maybe for you this is not true and maybe you spend consciously, but people usually don't do real personal money management; if they pretend they do, they usually kid themselves by conveniently leaving things out. I don't do it because I find it an utter waste of time, but I tried it for a year and I was amazed and frightened by how much I spent on completely nothing, for no apparent reason other than instant gratification. Which can be important, but too much of it is like an addiction.
What I do for gifts is go on eBay (or a local equivalent) and buy broken devices: people whose Nexus 7 doesn't work anymore, whose iPad 2 croaked, netbooks/laptops that died, TVs 'just not turning on'. We live in the throw-away age, so often even WITH a warranty people can't be bothered to have things fixed, especially when they cost < $300 and it's not worth 'going to the other side of town' for. And usually (almost always) it's simple to fix. So you buy something broken for $10, figure out what's wrong (in my experience it's almost always related to the power supply/battery) and fix it. Then give it as a gift. The effect is the same, it's a nice hobby (well, I like it), and I can make more people happy than by buying new things.
I've recently bought some financial software for my Mac, can't recall its name but it's pretty good. I can import my credit card statements into it and categorise all my transactions so I can see (and report on) what my money is going on.
Gives a great incentive to see where savings can be made.
I didn't try to automate it; I just wrote everything down in a little book, even $0.50 at the grocery store and put it all in a spreadsheet at home. Unless you really pay everything with your CC (which here, in Europe, would be very strange), you would miss things, or at least not know where they went.
For instance: I get $100 from the ATM and pay part of a business dinner with it, $90, so I have $10 left. The $100 is registered on my card, so $100 goes into my business account, while the $10 went on some magazine at the train station while I was bored. The $100 for the business would be "wow, we got the client, well spent!" while the $10 went into the black hole, as I won't ever touch that mag after getting off the train.
Not sure why; you don't need to be rich to want a good chair when you spend hours on end in one. Of course you do need a bit of disposable income, but it's a question of priorities and necessities. I've been foregoing cars for a decade now because I try to live close to work in places where there is sufficient access to public transportation. It's a hassle for some things (going back to the family, fetching "big" groceries), but it avoids other issues and saves money which I can spend on more books or a better chair. I got myself a Humanscale Liberty a few years ago; I found it a good investment, and amortized over the years I've owned it, it hasn't been that expensive. More expensive than getting a $20 chair every year, but it still looks and feels mostly brand new.
The world isn't divided into "poor" and "rich". A chair for a programmer, particularly an independent one, is one of the tools that you need to work effectively.
A friend of mine is an independent auto mechanic who has been in the business since he was 15 (about 20 years). He now owns a small shop, and probably nets a good "salary" similar to many IT professionals ($90-150k). Not rich, not poor.
Guess what? He doesn't use a Craftsman mechanics toolkit from Sears. He has a long-term investment in high-quality tools from vendors like Snap-On, to the tune of $50-60k. Cheap tools would cost 1/3-1/2 that, but guess what -- a broken wrench means that he isn't working, his time is worth $90/hour, and the high-end vendor comes to him, so he doesn't waste time in stores.
As a programmer or IT type person, your brain is your primary tool, but it depends upon the working order of all of the supporting tools in your body to perform optimally. When your job involves sitting on your ass for most of the day, a proper chair is just like a $75 high quality wrench for a mechanic -- a good investment.
My buddy in the construction business spends way more on his tools than I possibly could as a software engineer, even if I bought a new Aeron chair, the latest MacBook Retina, and a 30" display each year.
Perhaps "well off" was the wrong term. I simply mean that a ~$1000 chair is considered a "luxury purchase" that many aren't able to make. Even as a programmer who sits for the majority of the day, there are many other priorities that tend to come first and you would need a fair amount of financial comfort to make said purchase.
But I feel it's not appropriate to speak of how someone else should be spending their money. I regret the initial comment.
AKA. "I can't afford to buy cheap". I also think that bed mattresses are worth investing in, since you spend 1/3 - 1/4 of your life in it, and being well-rested helps in being productive.
I really get that sentiment—there are an awful lot of ways to be penny wise and pound foolish in this world. You can go through a lot of $150/pr "dress shoes" before you even need to resole a good pair, for instance, and if you need to be dressed up (for work or even for frequent meetings) you can go broke saving money. And trying to save money on your tools (and for a full-time coder, even a chair is a tool) is going to cost you in productivity over the long haul.
On the other hand, I've found that there's nothing better for my back (or my sleep in general) than a properly-inflated air bed (and I had severe back problems for years). (It started as an interim solution when I beat my furniture to a working location by nearly a week.) That works out to about $30 per annum around these parts, and although there are "real" versions with long-term guarantees and fancy fabric coverings (a standard mattress pad gives the same feel) the price differential isn't an incentive to give up the "replace annually" variety.
On the other hand, don't discount cheap ones simply because they are cheap.
I got my old memory foam mattress from Woot. It made me sleep so soundly that after a couple years, I started waking up with back pain. Slept all night, but woke up in pain. Gave it to my Mom (best sleep she's ever had, too) and bought a hybrid from Ikea. Still way better than any other boxspring mattress I've owned, and ridiculously cheap.
So yes, they are worth investing in... But that doesn't mean you have to pay a lot.
Sadly, I cannot find an Ikea chair/couch that's comfortable. Still working on that purchase.
I have only one tablet, if my dad asked (thankfully he does not read HN), I would give it to him immediately. Being 'well off' is not synonymous with generosity.
Depends on the person, a few years ago I spent over $10k in a year on other people[1], I bought people iPhones, TVs, laptops, flights, all manner of things. I was only making $100k/year at the time. There are people making 5x that who would never spend that much money on other people... it's entirely plausible someone could be making an average tech salary and have no problem arbitrarily handing out "expensive" things if they don't really care about money.
[1] I got over this awful habit and no longer do it, thankfully.
According to your profile, you live in Ireland. PayScale says your salary should be at least in the €50,000 range.
I think you could definitely give a Nexus as a gift to your father if you wanted to.
It depends on whether you think it would be a gift he'd appreciate and use, I guess. I've given a laptop as a gift to my grandmother, and a Kindle to my uncle (we're very close), on a salary of US$10,000 a year.
In both cases the devices were used extensively, they were things they wouldn't have bought or couldn't have afforded themselves, and it made me very happy to see them use the devices (especially my grandmother; everybody was betting she wouldn't learn how to use it).
This post feels too short for the argument he is pushing. If the Surface is a laptop killer, what is radically different about it compared to the other countless tries of putting a keyboard with a touchscreen?
I am asking genuinely, because I haven't touched or seen one yet. So far the reviews aren't stellar, there seems to be the same shortcoming as before (always moving between the keyboard and the screen to type and click on things), and the software doesn't seem to push the limits of what you can do on a tablet.
My question would be, my parents hated using an iPad with a keyboard, would they be better off with a surface?
>what is radically different about it compared to the other countless tries of putting a keyboard with a touchscreen?
It is not one thing, it's multiple:
1. Software. In iOS/Android, keyboard support is limited: most applications are not designed with it in mind, and some don't work at all because they use faked input controls instead of native ones. Windows is superior to iOS/Android in keyboard support, and superior to its own previous versions in touch support.
Also, the almost-full Office suite on RT is not the joke that the editors on iOS/Android are.
2. Integration. Power is supplied over the connector, I2C has lower latency than Bluetooth and supports full n-key rollover, screen orientation is locked automatically based on accelerometers in both the keyboard and the tablet, and so on. It's impossible to leave it at home by accident.
3. Quality. The Surface's keyboards are thin and better in quality than most competitors'. They have an almost standard layout and key size.
>My question would be, my parents hated using an iPad with a keyboard, would they be better off with a surface?
Nobody knows; it depends on the reasons for their hate. The Surface's keyboard is still a compromise, even if it's better than the iPad's.
I'd say the big differentiator here is the software. iOS isn't optimized for keyboard input (and, as far as I know, doesn't support mouse input at all). Windows RT, however, being essentially the same as Windows 8 in this regard, is efficiently navigable via keyboard and mouse input alone. So beyond having the physical keyboard there to replace the on-screen one for typing out long messages, it also serves as an effective navigation tool, with all the traditional Windows shortcuts available. The Surface covers also include dedicated media keys, have function keys assigned to the newly introduced charms, and include full trackpads.
An iPad with a physical keyboard tacked on actually degrades ease of use, as the addition changes the form of the device without providing an alternative means of interacting with it suited to that form. As a result, beyond the single action of typing, all navigation becomes less efficient, since you have to reach out and touch the screen in a less natural way than you're accustomed to. I've found that even the typing experience doesn't feel that much better, as you still get interrupted by having to reach out to the screen to make what would otherwise be minor adjustments (moving the cursor, selecting text, formatting, ...).
On a Surface, though, the cover provides a complete and efficient alternative means of input. This has less to do with the particular implementation of the cover/kickstand (which is great, by the way) than with the fact that it runs essentially the same OS as the majority of (non-touch-enabled) desktops and laptops to be sold in the coming years. Regardless of particular views on Windows 8 and whether it is less efficient than previous versions, it is still a fully functional traditional OS, and interacting with it as such works very well.
Having used a Surface for a few days now, I really do think the form factor lives up to the claim of no compromises. You can navigate entirely via touch or entirely through the traditional mouse/keyboard combo and not even know that the other existed, and (the most likely option) somewhere in between works exceptionally well. While I typically use the cover when in the desktop or typing out a long message (like now), I find the input option it affords is also great to have when I just happen to be sitting at a desk or want to use the device standing up on my lap. In that sense the particular implementation of the Surface's cover/kickstand really shines, as it makes switching between form factors effortless.
What it comes down to is that eventually it's better to have 1 thing than 2 things that serve a similar purpose. Many (most?) computer people today have 2 things because the gap is too great. Eventually the gap closes a little, and although it will never close completely, it is better to have 1 thing than 2 things, so you make compromises. I think that is what Jeff is saying here, the gap is closing just a little, and eventually a tablet becomes good enough as a PC.
Yes, exactly. What often happens to me now is 1) I do something on my tablet...an email comes in, I start to respond, then I get in a little deeper and need to research something for a proper response....then 2) I reach for my laptop to finish off the email, likely involving the task of referencing other documents/web pages, to compose the intelligent reply. Better multi-tasking and quick access to a keyboard on a device improves this common use case scenario.
Also, when I travel, I have to take both a laptop and a tablet. As tablets become a little richer in functionality, I could see less of the need to take both with me.
I think the fundamental premise is actually wrong. It often is not better to have a compromise device than specific devices. Can you get along with a compromise device? Sure, sometimes. But the difference between no compromise devices can be surprisingly large in my opinion.
Most people I know haven't given up having a saw and a screwdriver and a pair of pliers just because they can buy a multitool that "does" all of that.
I also feel that a tablet has far more in common with an ebook reader or a smartphone than with a computer. I've yet to see a tablet do a task that a smartphone can't do, and I have to wonder if the "phablet" (god do I hate that term) won't be the convergence you're looking for rather than a convergence with a laptop.
I have yet to talk to a laptop user who wants to trade it in on something that isn't a general purpose computer.
That's a much better argument. Tablets are sofa consumer devices; notebooks are desktop work devices. Surface is trying to bridge that gap. But we should not kid ourselves: this is not a laptop replacement, it's a tablet improvement.
Well of course this is true if you don't have an OS designed with touch in mind, and you don't have hardware that supports these interactions. I wasn't a believer in this interaction model until I went to the Microsoft Store last week and played with all of the Win8 devices. Now I want all of my screens to have touch on them.
What about doing actual work? This revolution of touch, which has come to the desktop with Windows 8 and Ubuntu, scares me a bit. First because I feel that professionals who use computers daily are not really being considered, and second because with touch devices the emphasis is on consumption rather than creation.
Even as a life-long Mac guy, I bought a TabletPC in 2005 because I wanted a pressure-sensitive drawing tablet that was embedded into the display. The most striking thing for me was how nice it was to be able to sit on the couch and read a webpage in portrait mode, but having to unfold the keyboard to be able to go to a new URL (or hunt/peck on the onscreen keyboard) was a disaster. When Nokia announced its pocket-sized tablet in 2006, I was very tempted to buy one until I realized it wasn't pressure sensitive.
Drawing on a digital device with an infinite color space is awesome, but being able to tangibly interact with information is a phenomenal achievement. Hell, I even built a homebrew version of the Microsoft Surface (table) to explore the possibilities.
In spite of my enthusiasm over the last decade, the tablets I've seen all feel like they slow me down. Not only are they computationally underpowered, but they're just slower to interact with. Moreover, I worry about the ergonomics of it all. My fingers tend to feel a bit chafed if I spend too much time with a tablet. I wonder if others have this issue.
I love the idea of the tablet. I want to love the execution, but nothing I've seen has made me want to integrate an iPad/Surface RT into my life. My MacBook just works better for me.
Does anyone here also feel like an outsider because you - no matter how much other people love it - just won't get used to touch (or for some reason don't want to)?
I wouldn't mind my current notebook [1] having a touch display (additionally), but I still wouldn't use it that often, I guess (and I wouldn't pay extra for it).
I really have to try the Surface's keyboard, though, but knowing me - very picky about keyboards - I don't think I could work with it.
Case in point: I have very dry skin. The tip of my fingers will crack and bleed unless I use a copious amount of hand cream. As a result, every touchscreen I use becomes a greasy mess of smudged fingerprints. I also hate scrolling and zooming, because rubbing a finger across several inches of plastic irritates my skin much more than simply moving a mouse around and clicking from time to time.
The thing that frustrates me most about these devices with keyboard covers with or without kickstands is that I can't sit on the couch and use it on my lap. It isn't a LAPtop replacement, it is a portable computing replacement.
You can use it in your lap: you just have to use the on-screen keyboard rather than the keyboard cover.
That would probably not crimp my style because when I am too tired to get off the couch and move to a table or desk, I am also probably too tired to do a satisfactory job at writing anything longer than a tweet.
While I personally love touchscreens and want to see them everywhere, I think we can't ignore the limitations of this type of interface. For instance, it would seem that Bret Victor doesn't like Jeff's touch future and has some interesting things to say about it:
I recently got the ultraslim Logitech iPad keyboard cover and it is pretty awesome. I could really see a future for tablet+keyboard devices for a lot of people. Apple doesn't really need to promote this as their core feature because it's just an optional setup, but MSFT could propel the tablet keyboard market forward in very interesting ways. Very cool.
I think I'm starting to "get it", in that Windows 8 (perhaps more so than the Surface) may have a big effect on how we use computers.
After using a Surface for a day or two, I've caught myself a few times trying to tap on my other LCD screens. For some reason, this didn't happen even after using Android tablets, even with docking keyboards.
I spent boatloads of money (this is Brazil) on an iPad that I was told was a productivity device, but it's really a consumption device. The more I use touch devices, the more I realize the superiority of the mouse: I really hate how my finger gets in the way on the iPad. Meh.
My mother still cannot use the iPad I bought for her. She still uses a laptop, but her typing is atrocious. It's not her fault, she wasn't trained to use a QWERTY keyboard like the rest of us and she's an immigrant.
She can however, speak somewhat understandable English. As we move toward devices that fit into how human beings naturally interact, I think a necessary evolution is going to be voice recognition, but not as we think of it. I don't think it'll be as clunky as it is now, but much more like giving commands and discussion to another human being.
I could be wrong. More kids text message now than ever use their cell phones, so perhaps a keyboard is necessary as the article says.
I welcome this new type of device - a small touchscreen with an optional keyboard (and a mouse?) - warmly. But I would like the future to happen with open systems, not some US corp acting as a gatekeeper for all my content.
I've had an 11" laptop/tablet with a touchscreen for a couple of years now... (Gigabyte T1125N; I didn't even realise it was a tablet when I bought it)
The novelty of the touchscreen wore off after the first month.
For almost 4 years I've had, and still use as my primary computer, a Tablet PC (HP 2730p): no touch, but with a stylus. I will probably keep using every aspect of it until it dies on me.
Touch is maybe cool and natural and whatnot, but a stylus is useful. I can actually account for how much money the stylus option has brought to my wallet.
Until it died a couple of years ago I had an IBM Thinkpad x41 tablet PC for several years. I loved that thing and switched between tablet and laptop mode all the time when using it. I now have an 11" Macbook Air, and while in many ways it is the best laptop I've ever owned, I sorely miss not being able to swivel the screen around and turn it into a tablet. My next laptop is definitely going to be some variation on the theme tablet/laptop hybrid.
Does anyone have a good description of the keyboard he describes? I haven't seen any details, so I don't know how the Surface works, and I didn't really understand the post.
It is more than just adding a keyboard. All the interactions on the Surface are first-class keyboard friendly: switching apps, closing apps, switching tabs, etc. all work well with the keyboard. So you have a device which supports multiple usage modes: either working on a 5-page Word doc/blog post with the keyboard and touchpad, or browsing the web, watching a movie, and playing Angry Birds on a lightweight touch-enabled tablet.
Likewise, Asus' eee Transformer Prime has been around for a while now. While somewhat pricey for an Android tablet it's still a lot better value than the Surface.
I have an external Bluetooth keyboard for both my iPad and Droid phone. Not having it integrated into the device is clunky at best, especially considering my keyboard and stand take up more space than the tablet itself. It's not elegant. I would love to see Apple come out with a sexier, more integrated, slim keyboard cover. Also, multi-tasking on the iPad blows, hard. It's better on my Android phone.
My neighbor got an Asus Transformer (the T300 model or whatever it was called). It's definitely slick and I could type quite well on it. Can't wait to see how the Win 8 RT app ecosystem plays out, because I like a lot of things MSFT is doing with their multi-tasking and side-by-side apps. Having good multi-tasking hot-keys on the keyboard is a big boost in productivity. Get me a good RDP client on it and I could find it very useful for a lot of work scenarios.
It also depends on execution. However bad Surface RT is, I'm sure it won't be as frustrating to use as an Asus Transformer (both the initial version and the Prime). I've used both of those for a week, only to return them without second thoughts.
edit: The Logitech Ultrathin Keyboard cover is sort of similar, but because the iPad doesn't have a kickstand, you lose keyboard real-estate to make the whole thing balance. Also, you lose the 'fold open/closed' movement. And it needs its own battery and hence another wall-wart. Also: no touchpad.
Surface (with its kickstand) + thin keyboard cover is an advance on that. You've got something that folds open and closed like a laptop, but is about the same thickness as iPad+smart cover.
I remember many strikingly similar designs to these "new" concepts of Windows 8 tablets being presented 10-15 years ago as new concepts of computing devices powered by Windows CE Handheld PC Edition (and some of them were actually manufactured then).
While I agree with your general point, it is worth remembering that in the last 10-15 years battery life, processing power (in small devices) and capacitive touchscreens have improved dramatically.
It's a little bit like comparing the first truly practical cars (Model T vintage) to the very first cars and saying that they haven't done anything revolutionary.
- Not everybody can, like Jeff, buy all the new devices and get rid of the ones that don't stick; that's a rather special way to choose devices.
- I like Jeff's reasoning about the upsides and downsides of a lack of a keyboard: the keyboard sort of gets in the way of doing spontaneous things, but for now non-physical keyboards are not good for writing and editing long chunks of text. So it makes me wonder: wouldn't a tablet with a stylus and some good handwriting recognition software beat that? After all, you can see a touchscreen as a better mouse - an evolution of the mouse, if you want. And if you want to write some text, like an email, doing it in your handwriting doesn't look too bad. Maybe it will really beat the keyboard for writing in Chinese or Japanese (disclaimer: I can't write in those languages yet). Of course, if you want to use those devices to write code, it will be damn hard to implement the equivalent of keyboard shortcuts for a handwriting system.
> Maybe it will really beat the keyboard for writing in Chinese or Japanese
It seems very unlikely, at least with Japanese. In my experience, a typical Japanese native speaker (who is accustomed to the computer) can enter text with a keyboard an order of magnitude faster than they can write it. The difference becomes a bit less stark with tiny keyboards on phones or whatever, but even there, most people these days are a bit shaky when it comes to writing complicated kanji anyway... ><
Writing has other benefits of course, e.g. you can easily enter a character that you don't know the pronunciation of (which is why it's commonly supported for dictionaries), so it's a useful feature to have even if it's not the primary input method.
No physical keyboard doesn't always mean immediate large-scale writing failure for all people. I think some people either just get used to it for extensive prose, or don't.
In Japan, the keitai novel - novels written by young authors, often entirely on feature-phone numpads - has gotten past its initial craze and appears to be here to stay as an established genre. Even in other genres, there has been a recent example of a self-published science fiction novella scoring high in the just-started Japanese Kindle/Kobo sales rankings, the majority of it having been written on an iPhone.
I imagine the general populace will gradually get accustomed to using a non-physical-keyboard device for extensive text input, especially as more kids emerge experiencing the touchscreen as their first and only input device. (And the aforementioned author wasn't even that young - IIRC he's around 40!)
I find the same thing with writing anything longer than a couple of sentences, and that's the major reason I've stuck to a laptop.
I love my phone - and I could see me wanting a tablet as well as my laptop, but I definitely want the laptop, so the tablet becomes an "If I have the spare money" item.
In the six months since I got a tablet, it's my laptop which has become superfluous. I also have a nice desktop setup, and there is now no reason for me to use the laptop. When I'm out and about, I can count on one hand the number of times I thought it would be a good idea to bring the laptop since I got the tablet, and I never actually used it. A good ergonomic desktop setup for productivity and a tablet and smartphone for mobile use is all I really need. If I regularly had to be productive in multiple locations without a desktop handy I might feel differently, but I don't.
If the world moves to touch computing, what's going to happen to PC gaming? Traditionally, PC gaming has been popular because PCs get used for many other things besides gaming. If most people switch to touch devices for their primary computers, will there still be enough people building gaming rigs for developers to care about that market? I'm sure the big console manufacturers would love nothing more than to move everyone over to consoles, but this would signal the downfall of indie games, certain kinds of first-person shooters, real-time strategy games, simulators, and many other genres. I would be devastated to see this happen.
One thing I never see mentioned regarding tablets is what happens if you suffer from RSI symptoms. Moving your hand around the screen all the time or typing on hard surfaces sounds like a ticket to more pain.
It seems like with a keyboard "dock" the Nexus 10 might be the lightest "retina" laptop available, at 25% of the price of the new 13" Retina MacBook Pro.
I know, I'm waiting for the refresh. All told it's $200 extra counting the keyboard dock but I consider that totally worth it, especially since there's an extra battery pack in the keyboard.
I played with the MS Surface RT yesterday at an MS kiosk in a mall. I really liked the keyboard option. It was the first touch device I've seen that made me consider buying a tablet. I didn't like it enough to buy it, but it is an idea that is definitely headed in the right direction. I can't wait for the technology to mature a bit more.
Does anyone make a multi-touch keyboard? This seems like it would be a great input device for performance users (e.g. programmers). And by "multi-touch keyboard" I mean a keyboard with physically pressable keys that, when not pressed, lay flat and can detect finger locations.
Apparently the TouchStream was beloved by its users before its designer, FingerWorks, was bought by Apple along with all their patents, and the device was chucked down the memory hole:
A coworker has one, but I don't actually know who made it. I asked once, but I think they said it had been discontinued. I'll try to remember to ask again tomorrow.