My grandfather’s iPad (dispatched.ch)
56 points by preek on July 18, 2010 | 99 comments



I really don't get the confusion about clicking. For me, right-click as used in Windows and KDE (at least KDE3) enabled a massive amount of ignorance. I never know how to do anything and it doesn't matter. Want to know what something does and what you can do to it? Right-click and look at the context menu. Left-click doesn't do what you expect? Right-click and look for the command you want.

I swear that's all I ever knew about KDE3, and it was almost always enough. It usually works in Windows, too. Just right-click!

But, I tried to teach my mom about that, and it just never took. "How do I open this file with a different program?" "Did you try right-clicking on the icon?" "Oh, there's the 'open with' command." "Always try right-clicking. Right-clicking is magic!" "Okay, I'll try that next time." "I'm not kidding! Write it down! It's magic!" Two days later, "How do I rename a file on my desktop?"

I really don't understand how it's a mind-blowing concept. There are only two buttons on the mouse. (That she knows of.) She uses the mouse every day! She knows how to use more than two buttons on her camera, and she uses her camera once a month. I just don't get it. Why are the buttons on the mouse harder than the ones on the camera?

Even weirder is that she is capable of learning a lot of specific tasks. She learned how to "Open With" and how to "Rename"; it's been a long time since I had to help her with stuff like that. To me, it seems easier to remember the single rule, "Right click and look at the context menu," than to learn all those specific tasks without understanding the general rule.


The mouse is an unnatural abstraction. Just because it was better than everything else at the time - and it was - doesn't make it anything other than a spooky indirection.

We get this kind of thing because our superpower is dealing with spooky indirections and weird abstractions. We are not normal.


What is better than the mouse, though? Touchscreens probably only work in your lap, which is not the most ergonomic position for the majority of work. Touchscreens on a monitor are probably too straining since you would have to hold your hand upright all the time.

Not saying the mouse is forever, but I don't yet see the better alternative available for the mass market (maybe those brain wave scanners?).


This is a case where, I reckon, if the mouse is the answer, we're asking the wrong question. What's better than the mouse? Not shoehorning everything we want to do with computing into the form factor of "a computer".

OK, this is going off into left field, and I really should blog this, but take the iPad. For what the iPad does - video watching, web browsing, reading extended text, certain classes of creative apps (eg http://hexler.net/software/touchosc) - it's easily the best thing on the market. It's the best Web content consumption device anyone's produced, ever, hands down, which means it's the best television and printing press ever produced.

But it's not a general-purpose computer. In fact, it's those things because it's not a general-purpose computer. It's in the notebook niche, and by that I don't mean clamshell, I mean spiral-bound; it's the nearest we've yet got to technological paper. If it tried to be a general-purpose computer, it would be much less good at the things it's good at.

That points to the ugly truth of our industry - and let's face it, it's something we all know in our hearts:

General-purpose computers suck.

They really do. They're a kludge. They're a set of dodgy tradeoffs, due to economic and technical limitations. They're unintuitive and fragile. They're expensive. They're rapidly obsolescent.

The computer I want to use for programming isn't the same as the research-and-content device I want to use for writing isn't the same as the piano-keyboard-and-touch-screen-and-dials device I'd want for writing music isn't the same as the book-a-like I want to read on before I go to sleep. One device can't be good at all these things.

When it comes down to it, the mouse is a hack to make a general-purpose computer sort of acceptable for many of these things. It's a genius hack, but it's a hack.


Special-purpose computers create a lot of clutter and confusion. Life before smart phones was pretty lame: you had a phone, a media player, a camera, a notepad, a GPS device.... Well, actually, you didn't have that stuff, especially when you needed it, because you left it at home rather than have a bunch of devices banging around in your pockets. Special-purpose computers are actually becoming less and less common as general-purpose devices like smart phones and tablets displace them from the mobile space.

What we need is not special-purpose computers but general-purpose computers specialized by form factor, running applications that are optimized for the form factor and its style of interaction. Sometimes you want to sit down and type a ten-page report or reply to dozens of emails. Sometimes you want to lounge on the sofa and watch YouTube videos. Sometimes you want to lounge on the sofa with someone and watch a movie on a big screen with a big sound system, and you want your hands free for other activities.

I know that sounds really dumb and non-visionary because it describes exactly what we have now, but what the heck. It's a beautiful world. I also know that by saying "It's a beautiful world" I'm destroying any usability credibility I might have, because actually all normal people are suffering terribly, but again I say, what the heck.


Your first paragraph falls under the "Pocket Exception" (proposed by Jesse Schell). If it's in your pocket, you want it to converge. If it's not in your pocket, it's going to specialize. To steal his example: the Swiss Army knife is a great pocket device, but you'd never use a giant Swiss Army knife in your kitchen.


You must not live in an apartment :-)

Have you seen the number of buttons on a modern rice cooker? Some rice cookers incorporate pseudo-pressure cooking, and some are even genuine pressure cookers. There are also combination convection/microwave ovens, and I love my combination immersion blender and food processor precisely because it serves two purposes in the same space as either one. It doesn't crush ice or blend as easily as the old stand-alone blender I had, but I compromised on features to save space. (It also has fewer parts and is easier to clean than the stand-alone blender or the dedicated food processor I used to have, so convergence didn't compromise usability.)


So then maybe the pocket exception should be renamed the space exception, and now it explains Japanese appliances well.


I disagree - I actually want my computer to do as many things as possible. I don't want to clutter my house (or my purse) with a multitude of devices.

I think it is a fallacy to conclude that just because Windows sucked, all computers suck.

Also, I don't see the iPad replacing computers yet. I don't own one, so I am not even sure I concede that it is the best device for watching movies - the screen is small, and presumably I have to keep it on my body at all times to have a good viewing distance.

Also, I don't understand your argument that it is the wrong question. Apparently there still are tasks that require a mouse - how do you suggest replacing them? That there are tasks that don't require a mouse doesn't invalidate the concept of a mouse.


While you say you disagree, it sounds to me like you two are in agreement: one device doesn't fit all needs. Tablets can be fantastic. That doesn't in the least bit mean that "computers" as we've traditionally known them will or should go away. It's just that their role is somewhat reduced and clarified, empowering new types of device.


Touchscreens on an upright monitor would be too straining on your arms as you hold them up. Touchscreens placed flat would be too straining on your neck as you look down. Finally, your hands get in the way of the screen.

There are plenty of reasons not to prefer touchscreens as an all-purpose input device (though I DO like touchscreens...).

PS: Yesterday, I used an iPad for the first time. It left a pretty good impression. Yes, it has the hands-in-the-way problem and I wouldn't want to type on it for too long, but as an "I can use it to read stuff and interact with it quickly with my hands when I need to" device, it seems to work very well.


For me, there's a bit of a natural distinction: "workstation" tasks need a big screen and, at least for the currently foreseeable future, a mouse. "Casual" tasks need something personal, intimate, and "touchy". The world will benefit from having both.

This isn't to say that all tasks can be easily assigned to one of these categories, at least without further parameters specified: fingerpainting with Brushes and compositing 3D renderings for a motion picture are "the same task" in a sense. But each fits nicely with one device instead of the other. And, of course, there'll always be a gray area.


This is something I noticed when teaching my dad to use a computer; when I said "right" click, he would always press the button with his index finger, and he continued to do so until I told him he had to press using the middle finger.

So my guess is, it has something to do with the fact that people are used to using their index finger to press something. Multiple buttons on a digital camera are pressed using the index finger. Buttons on a TV remote are pressed using the index finger. Right click? Press the button? The intuition kicks in, and the index finger is used.

The "right click" that involves alternate pressing apart from what they are used to do so daily might be unfamiliar concept to those people. A vague, unfamiliar concept that is not intuitive to them are more likely to be forgotten. However, a set of rules for a specific task might be easier to remember even though it is unintuitive. A radio dashboard, for example.


So you don't understand that someone who was never brought up with computers, big screens, or even buttons can get confused with new technology?

What I don't understand is all the people who haven't quite grasped the fact that people are different, and think about the world differently. While right clicking would be second nature to many of us, a lot of people still don't "get" computers, and a lot of what happens is magic to them.


What's the distinction, though? What's the difference between right-click and all the other things she can do?

She can work her digital camera. She can work her cell phone. (She even sends text messages now!) She can work the DVD player, the TV, and her fax machine. She even managed to program her VCR to record shows a few times back in the days when programming a VCR was a real accomplishment.

Also, she considered majoring in math in college and became a high school math teacher in her forties, so she's never been averse to abstract thinking. (It also proves she can work a graphing calculator.) She's had Windows in the house since 3.1, and before that we had an Apple IIgs, which had a GUI Finder and other mouse-centric apps.

So what's the distinction? To say that people who came to computers late have trouble with stuff is an observation, not an explanation.


Pretty much all of those things require you to memorize some arcane series of steps to accomplish anything. Most likely, she is taking that mental model of how to use electronics and applying it to the computer.

I believe you said that, once given specific instructions to accomplish a specific task on the computer, she can generally remember and repeat those steps in the future. This fits the model of those other devices, where you must memorize some arbitrary button or combination of buttons to make it do what you want. VCRs, TVs, fax machines and digital cameras do not have a "right click" menu attached to them, and these are the devices on which she has formed her mental model of how to use electronic devices.


There's no point in denying the fact they have trouble, though. If a similar group of people keep having trouble with the same interface of a user application, then the usability is just plain bad. It's not the fault of the users, it's the design's fault.

Don't get me wrong, I sympathize with you, though. I don't want to count the problems I've had explaining the concept to beginners^


Or maybe they are just not motivated enough and like a good excuse for not having to do stuff? "Strategic Incompetence" is an amazing concept :-)


I wouldn't say that, Tichy. If people buy a computer for their homes, they are motivated to learn new things. If it doesn't work out for them, they certainly could manage with more motivation.

But so could anyone achieve a BSc. Everyone with a BSc could have an MSc. Everyone with an MSc should be working on their PhD. You get the idea. If you can go one step, you _could_ always take two.

Sometimes you don't want to go that extra step, because there's not enough ROI. And sometimes you don't need to, because it was a design flaw.


Or you don't need to because you have a son you can ask for help instead...

Maybe it is one of the few excuses to get in touch. So the iPad will lead to more loneliness in the lives of old people, because suddenly they don't know what to ask anymore :-)


Or they could for instance share pictures they took of their beautiful summer garden instantly with their grandchildren - including an invitation for dinner on the weekends.

As a techie, I want to believe that technology enables communication sometimes. Especially for edge cases like older folks.


I'm only available on the phone, and she usually suffers with something for a while before asking me. She's good about that. Also, the questions have become less and less frequent every year as she adds to her repertoire of concrete know-how.


Didn't mean it in a bad way - just some people don't know what they are missing if they don't use computers, so they don't make that much of an effort.


That doesn't explain why, after being shown a task and having it explained that the same action enables other, similar tasks (in this case, that right-clicking will open a menu, with the advice to simply try it when looking for a new action), someone would continue to be unable to perform such tasks. A handful of times is normal, but after a while, "right click opens a menu of available actions" should sink in.

These same people can do much more complex things; there's no reason they couldn't learn things like right-clicking after a while. How hard is it to right-click as an automatic "see if it does what I need" response after having been shown and told to do so multiple times?


Same here - it is really just one rule to remember, instant PC expert. That is also why I don't understand everybody raving about Macs. Macs are not consistent with the second mouse button rule (they used to have only one button), therefore there is no way to become an instant expert on a Mac. I always hate using a Mac because it makes me feel so helpless.

My mother learned the second mouse button rule after a while, btw. Don't give up :-)


"That is also why I don't understand everybody raving about Macs. Macs are not consistent with the second mouse button rule (they used to have only one button), therefore there is no way to become an instant expert on a Mac."

Although you may have answered your own question, as it seems that the right-click button does not fit the mental model of non-computer-experts (at least, that is what we have been discussing).

Maybe the Mac works better for those people because it does not assume that the user will just right-click to figure out how to do something?

Right-click does work well, of course, in most contexts on the Mac. Interestingly, though, I've been finding myself appreciating having a big obvious button to click even when I know the functionality I want is also in the right-click menu. I suspect Fitts's law has something to do with it.


Yes. There has traditionally been a drive toward putting all major functions of a Mac app in the menu bar - even if they also got a shortcut such as a "right-click" contextual menu. This leads one to expect to be able to find any available option by scanning a single set of options (categorized roughly by the thing or type of action they affect) rather than having to mentally union all the options from several disparate places to get a complete mental model of the app's actions and options.


And even if they only have one button, they still have the command+click combo (which, IMHO, is much more complicated than simply having a second dedicated click button).


command+click is horrible, as are the overhyped gestures. There is no way to accidentally discover them.

For several weeks I had to Google whenever I wanted to rename a file on my Mac. I still have to Google for the key combination for showing the desktop (cmd+some function key, but there are 12 function keys).


Just hit Enter when you've clicked on a file. That's the same as F2 in Windows - only the Enter key is closer to regular use than F2.

As to showing the desktop: FN+F11. But you can configure that in System Preferences -> Exposé and Spaces.

Easy enough. I use it every day. Besides, you're a Linux guy. You should take a look at the Visor terminal (visor.binaryage.com). It enables a Quake2-like drop-down terminal via hotkey at any time. That's a great productivity booster!


Wait... are you trying to tell us it's easy by introducing yet more odd and complicated buttons to the mix? Command+click is too complicated - use Enter or FN+F11 instead? What??? I'm really not buying your argument here!


I cannot remember cmd+f11 (my guess would have been f3). I don't use it often enough. I can remember the little button next to the Start menu that looks like the desktop. I can remember to try the second mouse button when I want to do something special with a file.

Key combinations are impossible to discover by exploration. With the right mouse button, you can solve every problem by exploration.


To be fair, Macs have a second button now, but I think you need to enable it (though it might be enabled out of the box, I can't recall). If you want to use another mouse with more than one button, you certainly can, and it will work as you expect. But if you can't do any of that, the context menu is only a Ctrl+click away.


People just need to play some Minesweeper and they'll learn about the existence of right click. That's why Minesweeper was created, isn't it?


I have to say that I have watched many people play minesweeper without the right click.

In fact it was a while before I realised that you could leave little flags by right clicking and I still rarely do it today if I'm playing.


So have I, actually. Many people also don't know about clicking with both left and right buttons simultaneously. If you do it on a square that has all of its flags set, it clicks all other unclicked squares around it.
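A minimal sketch of that chord-click rule, in Kotlin (the grid model and names are my own for illustration, not taken from any real Minesweeper implementation):

    // Chord click: on a revealed number, if the adjacent flag count equals
    // that number, reveal every remaining unflagged, unrevealed neighbour.
    data class Cell(
        val adjacentMines: Int,       // the number shown on a revealed square
        var flagged: Boolean = false,
        var revealed: Boolean = false
    )

    fun chordClick(grid: Array<Array<Cell>>, row: Int, col: Int) {
        val cell = grid[row][col]
        if (!cell.revealed) return    // chording only works on revealed numbers
        val neighbours = mutableListOf<Cell>()
        for (dr in -1..1) for (dc in -1..1) {
            if (dr == 0 && dc == 0) continue
            val r = row + dr
            val c = col + dc
            if (r in grid.indices && c in grid[r].indices) neighbours.add(grid[r][c])
        }
        // Only sweep when exactly the right number of flags is already placed.
        if (neighbours.count { it.flagged } == cell.adjacentMines) {
            neighbours.filter { !it.flagged && !it.revealed }
                      .forEach { it.revealed = true }
        }
    }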


This is very nearly the only reason I ever used flags. It took me a while to accidentally flag something, then a tiny bit more to discover how to unflag. Fast forward quite a while, and I discovered the auto-sweep, and now use it nearly exclusively (way harder to misclick that way).


In Germany we also had this game called "Moorhuhnjagd".


My personal opinion is that clicking with the mouse is a flawed design. This makes it hard for beginners.

Why would you need to double-click to start an application on your desktop, while you only need one click in your web browser?

But even more so, I believe that the concept of files is hard to grasp for a real beginner. Modern applications (like Google Docs, Evernote, ..) do not use them at all. The concept of "content" of a certain application is a lot easier to understand.

Take the iPhone, for example: there are no files at all (for the end user). There's no need to "rename" or to "open with another application". Does this cause aggravation for developers because they feel drained of power? Yes, of course! But those are devices created to be used, not to be configured.


Files are useful if you want to share data between applications. I thought about it just today - maybe they could just be seen as the most general interface (a blob of bytes, basically). I suppose Apple & Co. want to replace that with handlers and plugins - want to send a photo by email? Either you need an email plugin for iPhoto, or an iPhoto plugin for your email program (and how does it work if you use a web-based email program?).

I don't think this approach will scale well in the long run (there are a lot of file types...). The iPhone is a bad example - I suspect in the long run apps just don't interact enough, which is why they will be superseded by web apps again. They are just good for isolated use cases.


Android's implementation of Intents and its sharing menu are probably a better example. I take a photo with the camera app, then hit share. Android automagically knows that Gmail, Messaging, Picasa Uploader, PicSay Pro or any number of image manipulation apps can do something with that image. Reading a cool story in the NPR app? I hit share and can, again, select Gmail or Twitter or whatever else the system recognizes as being able to handle that data type.
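As a rough sketch of what that share action looks like in code, using Android's standard ACTION_SEND intent (Kotlin; the function name and the Uri are illustrative placeholders of mine):

    import android.app.Activity
    import android.content.Intent
    import android.net.Uri

    // Hand an image off to whatever apps have registered for image ACTION_SEND.
    fun shareImage(activity: Activity, imageUri: Uri) {
        val send = Intent(Intent.ACTION_SEND).apply {
            type = "image/jpeg"                      // lets the system match capable apps
            putExtra(Intent.EXTRA_STREAM, imageUri)  // the photo being shared
        }
        // The chooser is the "share" menu: Gmail, Messaging, Picasa, etc.
        activity.startActivity(Intent.createChooser(send, "Share image"))
    }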

I don't think it's a replacement for filesystems, but it is a very powerful way of handling data on a platform as limited as mobile.


I like the intents concept, too. Unless there are > 100 apps that claim they can do something with your intent.


The thing is, there aren't actually that many file types which you'd want to open in multiple programs. Maybe fifteen or so which you'd expect to do data interchange with - a handful of picture formats, ditto movies, ditto audio, ditto text/DTP, and Excel and PowerPoint.

Everything else is special-purpose, so as long as you can mail the document, you're fine.

iTunes and iPhoto already handle the majority of those. Add a personal office-document-management tool and you're already pretty much there.


Isolated use cases is just what the average user needs. He doesn't want to be distracted by "byte blobs" and "program interaction".

Do one thing and do that well, I say.


The thing is, I am often very confused when I want to do things on a Mac. Maybe it is just because I am used to the old-fashioned way of just attaching a file to my email. Like with the right click, files always work the same way. Unless every app comes from Apple (which is probably their goal), not all apps will adhere to my expectations in the same way. And even Apple will probably have a hard time pulling it off.

Some tasks simply are complicated by nature. I don't think all use cases are isolated. For example mailing an image is not an isolated use case (or rather, would it really be less confusing to have different apps for mailing images and mailing text?).


Since when can't you mail images or other files with OS X? Just drag and drop a file into your mail and you're good to go. There's also the "attach" button when writing a new mail.


How do you drag a photo there if it is in iPhoto? Where does iPhoto store its photos? Is it even legal to just copy photos out of the iPhoto folder?

I also tend to have problems on OS X because the Finder sucks, and dragging files to the task bar -> window opens -> drop on window often does not work for me (I suppose it should work?). If you open the Finder, it hides the email window...


Just drag the photo from iPhoto to your new mail. Be strong, there's a lot of good magic involved(;

Also, iPhoto stores the pictures pretty conveniently as you might expect: ~/Pictures/iPhoto Library. And of course you can copy the files out of there.

And finally, you might want to get a screen that can fit a new mail and a Finder window next to each other. A 13" monitor should do that just fine, btw(;


I identify with you to a great degree: I have to work to imagine how mousing could be hard.

Nevertheless: none of our bafflement changes in the slightest way the fact that there are people who are, indeed, so baffled.

It delights them to use something that doesn't baffle them. And that delights me.


I don't disagree at all, and I'm mystified why everybody assumes that just because I show a little frustration on a message board with other techies, I must be bent on haranguing users into compliance.

I feel a little vindicated by the fact that people have suggested several different explanations here, none of which seem to be universally accepted. Is the mouse unnatural? Are abstractly-minded (with respect to computers) people different from concretely-minded people? Is it a matter of being exposed at a young age? I think there are credible objections to each of these arguments.

For instance, the first suggestion, that the mouse is simply unnatural, doesn't account for variation among users, and it's only a constructive criticism to the extent that it stimulates development of superior alternatives.

The second suggestion is circular, because "abstract-mindedness" in this context doesn't seem to correlate with any other measure of abstract-mindedness. Are mathematicians or physicists naturally better at right-clicking than salesmen or physical therapists? That doesn't seem to be true, at least among folks who are middle-aged and older.

The third suggestion implies that usability problems with well-designed WIMPy interfaces are a historical blip that will naturally disappear unless there's sustained radical innovation in interaction technologies and UI paradigms. I could believe that, but some people strongly believe otherwise.

If a single one of these is responsible for the bulk of the difference between me and my mom, then conclusions can be drawn about the future of user interfaces. If the mouse is unnatural, problems will persist until someone invents a superior replacement. If abstractly-minded people are different from concretely-minded people, a way will have to be devised to cater to both. If it's a matter of early exposure, then there are usability generations washing slowly through dimensions of class, culture, age, and geography, and different usability guidelines apply to different generations.

It's an interesting question to think about, especially since new form factors have created more room for innovation. Right now, less complexity seems to be the wave of the future. It will be interesting if a new form factor emerges to prove that people can comfortably handle more complexity than current technology allows them to.


I realised the web browser (from Netscape onwards) is easy to use mainly because the mouse cursor changes to a hand when it hovers over a link, or to a text cursor over an input form. Without the cursor changing, you wouldn't know what was what...


Who the hell down-mods honest confusion? If it's so obvious, then you try explaining it.


You sound really condescending. I'm sure she'll eventually get it if you keep telling her.

Also if you don't tell her, she might figure it out on her own, maybe she keeps asking you because you're too available? :D


Just had a family reunion yesterday...I'd say half the people over the age of 55 lack an intuitive sense of computing.

My MIL didn't understand that the interface represented a physical top of a desk. My mom double clicks on web links. My uncle does all his contractor company accounting with a pencil, in a ledger book.

That is a huge market for the iPad.

The desktop was a wonderful metaphor with its files and folders, for what it was replacing. But its time is coming to an end and the iPad is riding and creating that wave.

After the iPad 3G drops below $300, it will do more to bring that demographic online than any other device to date.


I've been thinking about getting an iPad for my father. He has always been interested in computers, but a keyboard and mouse are just a bit much for him. I got him started on a TomTom GPS with a touch screen, which, after a shaky start and fat fingers, he is now confident with. I've recently set him up with a TiVo, and again, after a shaky start, he is building confidence with the menu system.

I've talked to him about an iPad, told him it is touch screen just like his TomTom. He understands a touch screen interface - you point at something and you get it.

Those wondering what the big deal is with a mouse, keyboard and everything else - older people have less flexible hands; someone who has worked hard all their life doesn't have the fine motor skills to move a mouse around and click and double-click. A keyboard is an abstraction away from the 'thing' - which is the screen. Old people also have bad eyesight and often wear bifocal lenses for reading, and looking up and down constantly between a keyboard and a screen is taxing and frustrating. If you think a keyboard is simple, ask 10 non-tech friends what the 'scroll lock' key does, and see if you get 10 answers that match.

Traditional computer devices haven't moved on much from the hobbyist days of the Apple 1 where you plugged the box into a TV screen. The iPad is a genuine step forwards in design for people who don't want to learn a complicated control interface. At the same time old people know the future is online, and their children and grandchildren work, play and communicate online. They know they are missing out, but it's hard for them to undertake the near-vertical learning curve of learning how to start, work and run a traditional personal computer.

I'll take my Dad into the store and give him a play with one, see what he thinks. I'm pretty sure he'll be nervous and anxious not to look silly but will gain confidence quickly.


That is exactly our story. My grandfather also has very strong hands from having worked a lifetime. But clicking (especially double-clicking) never came easy to him. His GPS unit did, however (after a stumbling start, as you mentioned).

I did show my grandfather the iPad ads on my Macbook and he instantly liked the idea. I'm sure yours will, too. I would be glad to hear about his impressions when the time comes^


Well, it's my Dad, not my Grandfather, but, well, he is a grandfather and he is in his 70s. ;)

The guy can still drive a 3-inch nail in with 3 hits of his hammer, but has real trouble with the end of his finger, of which the last 1/4 inch is missing, courtesy of a saw blade. He also has trouble grasping a mouse due to tendon damage in his right hand caused by a nylon rope, a flooded creek and a rescue attempt - well, you get the picture. Old guys who have worked hard have the money to spend, and the interest, but just aren't a use case for product designers. I've long argued that a phone company that brings out a simple, affordable phone with a long-lasting battery, large buttons, an easy-to-read simple screen and ear-shatteringly loud ringers would clean up the seniors' market.


Kudos to your Dad, then! I'm certain he will manage, he seems just too pro-active to let this new chance go^^


My granny learned to use the mouse; it is not that hard. Yes, her first movements with the mouse were awkward. That doesn't prove that it is impossible to master. A lot of things are awkward when you do them the first time.

Is using a browser on a normal PC that much harder than on an iPad? Or is it simply that people don't try to do many of the things on an iPad that are confusing on a PC to begin with? For example, most problems my parents used to have were related to scanning and faxing, which I tend to avoid on my own computer.

How is the browser on an iPad easier to use than a normal browser?


I congratulate you on your cunning grandmother! It's good to hear that she keeps up.

Anyway, "something not impossible to master" might not be necessarily optimal. In your scenario of browser usage the mouse adds an unneeded layour of complexity in terms of usability.

Whilst human beings are normally used to interacting with their hands directly, sometimes they were apt enough to design tools when their hands just weren't enough. With hands you can't make a fire. With hands you can't dig up a field.

What you can do with your hands is push a button or a link. There's no need for an abstracted tool. The mouse provides nothing a single touch wouldn't. The mouse just provides clutter.

Still, I'm glad for your grandmother to have mastered it though. Whatever works to make people happy. Personally, I also have a mouse on my workstation, but I'm an edge case^


I realize it is silly to dispute somebody else's experience. If the iPad worked for that grandfather, and the mouse didn't, what can I say. It just seems to me the "revolutionariness" of the iPad tends to be blown out of proportion. I'd guess that a lot of older people could have a problem with the tiny screen, for example.

Supporting a granny on Windows and a mother on Linux, I have certainly seen a fair amount of unintuitive and confusing user interfaces. I don't want to defend all the design decisions in desktop computers (fwiw, I think Ubuntu provides by far the best experience for older people who need support from their younger relatives, but that is a subject for another time).

Of course just cropping away functionality until only "unconfusing" stuff is left is one way to approach the problem. I don't think it is the best solution for everybody, though.


Certainly cropping away needed functionality is not a very decent approach. But let me start with your comment.

Since it is my grandfather this article is about, I can assure you that he had a pretty nice LCD screen on his old PC. It was only a 17", but it was a Samsung SyncMaster providing good quality. Besides, my grandfather's eyes are probably better than my own. And I'm not kidding here - I don't need glasses, but with his glasses on he kills my eyesight wherever we go.

So it wasn't the monitor; it definitely was the mouse. But not only that - it was a desktop PC. There's no comparing the clumsy process of reading a mail on a desktop PC to that on an iPad.

PC:
Step 1. You don't know you've got mail; you poll for it. (In this case we're not talking about tech guys who have an always-running computer.)
Step 2. Go to your computer.
Step 3. Start the monitor, start the PC (start the modem).
Step 4. Wait.
Step 5. Log in.
Step 6. Connect to the internet.
Step 7. Start your favourite mail application.
Step 8. Wait.
Step 9. The mail probably arrived last week. Go ahead and read it.

iPad:
Step 1. Your iPad is lying on the coffee table. It beeps.
Step 2. You take the iPad into your hands and unlock it with one click and one swipe.
Step 3. Mail.app tells you there is one message. You click on it.
Step 4. Read the message.

There was a lot less walking involved with the iPad, and no keyboard or mouse. I personally like that experience a lot better for "non-tech folk". Plus I like it on my phone. A lot.

I hear you when you talk about Ubuntu. I personally look after several installations for friends (sometimes girls). All of them are very happy with their computers. No virus problems, no bluescreens, no constant reboots. If they need something new, I can install it remotely or talk them through it on the phone.

Yet, all these users are "powerusers". They need Office. They need a printer.

Many people don't. That lets them get by with a less complex toolset. The iPad provides that. It could have been a good Android tablet - I'm a big fan of Linux and was a Sun employee back in the day. But honestly, there is no good Android tablet out there to compete with the iPad right now.


I don't understand how the iPad magically connects to the internet, while for the PC it is a complicated challenge. I suppose you have a WLAN router now, whereas in the PC days, you didn't?

No question, the always on nature of the iPad makes it more convenient for some tasks. Although I have to say, I never switch off my MacBook either, it always only goes to sleep (before you ask, I bought a MacBook because I thought I would do iPhone dev... Still a Linux man at heart).

I understand that the iPad can be simpler for some tasks, I just don't understand how the PC/mouse could have prevented computer use completely.

Eyesight: your grandfather seems to be lucky, I was talking more in general, as the iPad is hyped as "the computer even old people can use". Most old people are less lucky with their eyes.

Both my parents do mostly stuff with the PC the iPad can't even do, for example skyping with their granddaughters. My mother is also still big on scanning, although I wish she would stop :-)


That's great to hear, Tichy! I'm glad your parents do all those things!

To your comments: The iPad is always connected, because it runs on a data plan (since you're from Germany: we're on 10€/month O2). Before, my grandfather had to connect his modem after disconnecting his phone. Again, you're from southern Germany - I'm sure you know a lot of smallish villages without DSL, too. Getting that Ubuntu PC online takes me about 5-7 minutes. Getting the iPad online takes the better part of a second.

And my grandfather can use Skype, too. Ever since I started living in Switzerland (two years ago), I've been doing all of my calls using Skype with a landline plan. First with my Macbook, but for the past year with my iPhone. I carry my landline with me, all the time. And so does my grandfather.

What the iPad doesn't do is scan, yes. But my grandfather has a fax capable of copying. And he owns a digital camera. That covers about all the use cases he has.


Again, if your grandfather is happy with the iPad, who am I to argue with it. I just don't understand why you compare it to outdated technology (this also always irks me about Apple). At the moment, UMTS routers seem to be all the rage, so you could have attached that "old PC" to a router, too. Of course, having to manually change the phone connector is unbearable - I last did that maybe 12 years ago... Even twelve years ago there were cheap boxes available that would switch the phone line automatically between modem and phone.

I am not very knowledgeable about skype. I thought since the iPad doesn't have a camera, skype would be impossible (at least skype with a picture, which is a hit with the granddaughters). Also I wasn't sure, can you attach a digital camera and get images off it onto the iPad?

Anyway, I suppose it is all great - until you get those Flash Christmas cards...


There will be no video chat on the iPad. But honestly, I don't even do that with my girlfriend. I call her with my iPhone, not the Macbook. Just because it's more convenient to walk around while calling.

And there's an iPad camera connector kit. This was a must-have feature for us, because my grandfather really likes his Sony Cybershot digital camera.

As for the Christmas cards, I will certainly not send him a Flash card. I wouldn't want or open one myself. There's little to no chance that anyone will send him something blinky or flashy. Maybe a nice picture, though^^ Yeah, that'll work.


Your comparison is unfair. Why can't you just leave the computer on, connected to the internet and logged in to your email? Then all that's left is walking over and taking a look. You might have to walk to fetch your iPad, too...


Because the average person wouldn't want a 300W computer + 100W monitor running all the time. Besides, if you're still on a modem, you probably want your telephone to not be occupied all the time.

It keeps the bills down, and let's face it: there's no use case for an average person to have a computer running all the time. It's like having your TV and radio running all the time. You could do it, but you'd better not, for various reasons.


Monitors have power buttons, and desktops have little reason to use anywhere close to 300W when idle - or do you leave 3D games running when your computer is not in use? Anyway, since we're talking about your grandfather here, his computer really doesn't need to be all that powerful, so there are plenty of opportunities to keep power usage way, way down.

Having said that, I completely understand your argument and agree that an iPad which is always connected on a 3g data plan, can be carried around and beeps when messages arrive really is easier and more old-person friendly.


Because the average person wouldn't want a 300W computer + 100W monitor running all the time.

Sleep mode actually works on PCs these days. There's rarely any need to shut them down completely.


Yeah, sleep mode works. But you won't be online. There's no notification. You still have to pull your information manually.

Plus if you're on a modem there's a lot of manual work involved in getting online whilst with 3G there is just no being offline.


Also modern computers don't consume 300W anymore. Get a notebook or a netbook.


I just had to do an Apache redirect from my wordpress installation to posterous. Over 2400 hits in like 70 minutes. Thanks for the interest!
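(For anyone curious, a redirect like that can be as small as one line of Apache configuration - a minimal sketch, assuming mod_alias is enabled and the blog lives at the site root; the posterous hostname below is a placeholder, not the actual rule used:)

    # .htaccess at the old WordPress root: permanently redirect everything
    Redirect 301 / http://example.posterous.com/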


I wonder what my future grandson can teach me (50 years later).


If everything goes well, he won't have to teach you much at all. That's the exciting thing about the direction the iPad is bringing computing; it's very capable, but at the same time, there's not that much to learn.



The classic legal pad, not a bad choice. I personally prefer the Moleskine, though(;


very cool.


Let me make a bold statement:

The touchscreen will obsolete the keyboard within 5 years.


No, the touch screen will obviate the mouse.

Other technology, possibly improved speech or handwriting recognition, will have an impact on keyboards. But then, keyboards have always sucked as non-text entry devices. They were simply (wait for it) _handy_...

Edit: Now I could easily believe that near future systems could drastically reduce our need or desire for text entry. But it's hard to skim video or audio as effectively as text. And of course, text has its own texture and voice.

Our grandchildren may feel differently...


> Edit: Now I could easily believe that near future systems could drastically reduce our need or desire for text entry.

That's the only way to really get rid of the need for keyboards- that or literally flawless speech recognition with natural language parsing for commands.

As long as we need to enter text, and don't have voice input, keyboards are the easiest way to enter it quickly/for sustained periods. I dread the day I have to compose a paper on a touchscreen keyboard, unless it is full-sized. And even then, there's no tactile feedback.

Essentially what I am saying is, until either text input is unnecessary or voice recognition takes off, the keyboard paradigm is necessary and useful. Heck, why do you think phones with full physical QWERTY keyboards are still popular among people who really type on their phones?


literally flawless speech recognition with natural language parsing for commands

No way. I can type much, much faster than I can speak. Simple, strict, non-natural commands are much easier than natural language commands (hell, people have difficulty giving and understanding verbal commands all the time) - never mind the fact that languages like English are terribly ambiguous and people's everyday speech is too filled with local slang.

Also, if I have to talk to my computer, I a) cannot use it in a group environment or, say, when someone is sleeping and b) when I type I can see what I typed and I can edit it if I make mistakes. If I speak commands, I cannot do this. An exaggerated (but still realistic) example: if I type "delete foo" instead of "delete bar" and notice before I hit enter, I can fix the mistake. If I speak "delete foo" then foo will probably be gone before I can correct myself. Finally, what about different words that sound the same? Sometimes spelling or capitalization is the only thing that distinguishes things (eg, file names) - with speech you couldn't do this.

In summary, in my opinion, speech recognition is a terrible form of input for general text input and command input tasks. (It may be useful to augment more traditional input and it can certainly be more than useful (or even required) for disabled people, but for the majority of people and tasks, I don't see the attraction)


It's not that I like speech recognition; it's just the only other option even close to a keyboard that I can think of, besides a direct brain-to-machine link.


Sure. I'm just saying that I can't see it ever replacing the keyboard, while your comment kinda sounds like that's what you were suggesting.


I sort of think that if the keyboard were going to disappear completely any time soon (~10 years), it would've disappeared already. We have a lot of different technologies for text input these days, but I don't feel like any of them would be better than a keyboard for working at a desk, even if they worked perfectly.

-Touchscreens can eliminate the need for typing to interact with a computer, but people will always want to communicate with each other in writing. On a touchscreen only device, this means virtual keyboards. Those are a convenient tradeoff for portable devices, but they're not ideal, and when you have space for as much screen as you need and a keyboard, I can't think of a good reason to get rid of physical keys.

-100% accurate voice recognition would be nice for some things, but I think it'd still be too slow, tiring, disruptive, or not private enough for most uses.

-Silent speech recognition (http://en.wikipedia.org/wiki/Silent_speech_interface) is a really interesting option, I think. A device good enough at measuring tiny 'subvocal' muscle movements could seem very similar to mind reading, while also being less intimidating and possibly less invasive. NASA did some interesting work on this (http://www.nasa.gov/home/hqnews/2004/mar/HQ_04093_subvocal_s...).

-I think true mind reading devices also have potential in the long term, but I'd be highly surprised if we had anything practical in less than a decade.


Err... I have a laptop with a touchscreen, so I've played around with touch input a lot. A few things you may not realize without some experience with a touchscreen on a real computer:

The screen is most visible a foot or two away from your face, vertical. Make it flat and things get hard.

The hands like typing on a horizontal surface. Try typing on a vertical screen, especially one at the proper viewing distance, and you will type really slowly, and your arms will fatigue fast. It looks cool in movies, but it is a ridiculous waste of movement unless you barely interact with the machine.

Portable devices work alright with moving the keyboard onto the screen, and they have no other choice, but for a real computer it makes no sense. If/when the standard desktop/laptop computer becomes obsolete, this becomes irrelevant, but until then...


Relevant: http://okcancel.com/strips/okcancel20031003.gif

Touch interfaces work well for small, handheld devices. They break down badly for anything larger.


The touchscreen will obsolete the keyboard/mouse combination except for entering or editing large amounts of text within ten years. (Obsolete means something like “cross the 50 percent mark” and “displays strong growth while keyboard/mouse devices collapse”.) The iPad seems to show the way: ultra portable with a (probably wireless) keyboard attachment for when you really need to type.

That would be a prediction I would be comfortable with.


I disagree for two reasons:

1. Until they provide button-like tactile feedback, typing on a touchscreen will simply not be close to typing on a keyboard.

2. Having your fingers/hands obscure the screen is.. less than ideal. This is one reason why I like the touch-surface + screen combo idea from the 10gui concept.

As an aside, the touchscreen on the ipad, while decent, is IMHO still not responsive and accurate enough for me to be comfortable doing any serious amounts of typing on it.


Too bold.


My personal VIM preference would certainly suffer.

Anyway, the touch screen is undoubtedly gaining use cases in mobile computing and terminals of banks, subways and similar.


6 years then? :D

Let me qualify this a little. I don't mean no keyboards will exist within 6 years, but that around the 6-year mark, the vast majority of tech-aware people, like you and me, will refuse to buy a computing device (including desktop PCs) whose primary mode of interaction is the keyboard/mouse. Rather, they will only be interested in touchscreen devices.


I've only tried typing on an iPad a few times, and while it worked reasonably well, I can't imagine doing it for hours a day at 80+ words per minute.


Most people, at most times, have no need to type at such a high rate.

The computer-as-leisure device is mostly used for small pieces of communication - comments on sites like this.

I would be even bolder - the 'average' leisure user could mostly get by with three buttons:

':-)', ':-(' and 'LOL'.


But did you try reading and browsing with it for hours a day? I never had such a great reading experience!(;

I don't think it was meant to substitute for writing long texts. However, you can buy the iPad keyboard dock to use the standard Apple keyboard layout. And honestly, after having burned through so many keyboards in my life - these Apple keyboards really get me excited every time I use them!


I agree on the iPad, but I'm sure there will be better typing experiences coming soon. Tactile feedback and flexible touchscreens are two things that I'm sure will play a big part.


I disagree. I don't see much innovation happening in desktop hardware and software in the next 10 years. The keyboard and mouse will still be around; it will be how we define desktop software. There will continue to be new versions of OS X, but there will be no "Mac OS X Touch Screen Edition".

Computing appliances like the iPad will play increasingly important roles in our lives and will completely replace the desktop for many people, but it won't kill the desktop anymore than the GUI killed the command-line. Instead, the Desktop will simply stop being a major source of software innovation.


Have you seen Swype yet? You might like it(;

http://gizmodo.com/5411779/swype-vs-qwerty-fight


At some point it will. Maybe 5 years, maybe 8, but within a reasonable amount of time. I think it depends on the acceleration of other iPad-like devices, in the same way Android phones followed soon after the iPhone. Funny fact: I was just recording an iPad demo and I had to keep redoing the voiceover due to using the word "click" instead of "touch".





