Once I dropped acid and smoked weed simultaneously, which created an uber-intense trip. I used my smartphone, and the UI controls seemed to manifest as physical buttons. I imagined that when I felt a 2D button, it popped out of the phone and felt like a real button. That extra feedback completely changed my user experience. There was no more 'looking under glass'. I can objectively say that physical feedback really enhances the touchscreen experience.
> I can objectively say that physical feedback really enhances the touchscreen experience.
Well, no. You can subjectively say that using hallucinogens enhances your touchscreen experience.
Edit: I'll elaborate a bit, as I'm not trying to be snarky. Given that you took mind-altering substances, it's impossible to separate your memory of the perception of the feedback from the chemical experience you had. However real something seemed - that something being both the haptic feedback in the UI and your memory of the 'goodness' of it - it was the result of a hallucination. You may remember the UI being really fantastic, but that's because that's what you hallucinated. It's not that different from, say, a pot smoker's paranoia: the scratching outside is the wind blowing leaves against the house, not a SWAT team propping up ladders. It doesn't change what you imagined.
And yet, imagination is what haptic feedback and whatnot are all about. Phones vibrate in a way that sorta feels like you're pushing a physical button, which has the same effect in the brain.
I'm seeing a market here. Smartphones that emit a cloud of weed/acid to enhance the experience.
This really comes down to what we want to do as a species. If we want to end up like the fat people cruising on hover pads looking at a tablet all day, then this "sliding" future is almost sure to happen. (Btw, people are lazy.)
The major problem is that future work will involve a lot of looking at and manipulating things on the screen (in the developed world this is already happening). This makes your body useless; it makes it weak and prone to illness. We were not designed, or we didn't evolve (take your pick), to sit all day in front of screens.
You have a poor understanding of natural selection. All you have to do is select for survival those who are apt at looking at and manipulating things on screens, and kill everyone else so they can't reproduce. In a few thousand years, we would have evolved for it.
Working hard in the fields is also bad for our health, which is partly why our life expectancies used to be a bit lower; looking at and manipulating things would be an improvement over that... we can always hit the gym after our 4-hour workday of touching things.
As a species, we really just want to create what will effectively replace us. Given a few more (say, 20-40) years of deep-learning technology development, we won't even have to look and manipulate anymore. By that time we'll have hit the singularity, and hopefully we won't be getting exterminated by Skynet drones.
Who is selecting whom for survival? Also, working hard in the fields is not bad for your health. Sitting on your ass for 8 hours is bad for your health - maybe you can compensate by going to the gym, but most don't... Anyway, the point I was trying to make is that the future will be shaped the way we want it to be shaped.
Nobody is "selecting." His point is that if it is highly advantageous to "look at and manipulate things on screens," then within the population the genes that make individuals better adapted for this behaviour will be favoured/selected for. It's just Biology 101.
They'll only be selected for if it affects their reproductive success, no? The problems we're discussing usually only affect people later in life, after they've conceived, so I don't see how it could have any significant impact on the natural selection process.
This essay gave me an entirely different perspective on things. Great essay! Here's my two cents.
If we were to rank the senses, it has been an unspoken assumption from the beginning that vision takes the highest preference while touch remains at the bottom. So one could say that technology has evolved along these lines all these years. It might even be that future technology will not be centered around the things our hands can feel and their power to manipulate things, unless we change our preferences.
Hmm, I don't agree with your ranking. Each sense has a range of tasks that it covers, and although there's some overlap, each sense covers things that simply can't be replaced by the others. Therefore they are all indispensable.
Sight allows highly detailed sensing of the nearby environment.
Smell gives a longer range detection, plus sensitivity to chemical differences which sight lacks.
Taste gives a more specific chemical test to things we're about to eat.
Touch provides some sensory information about the environment, but mostly it's a control for our manipulation of the world. It facilitates tool-building.
Hearing facilitates high-bandwidth information transfer (language) as well as covering some mid-range distance sensing duties for cases where our smell and sight are lacking.
I think ranking these is futile, since they all cover such manifestly different use-cases.
Hearing is high-abstraction, not high-bandwidth (that's why you can transfer voice with so little bandwidth over wires and waves). Other than that, your analysis is spot-on.
You're missing proprioception, often deemed a sixth sense. Tools like Kinect can be used to convey commands - postural and gestural input are not really dependent on touch, and can nevertheless be used for input (and output?).
Slide and drag is hitting the technological usability sweet spot, where the mouse was at last 'generation'.
Haptic (force-feedback) devices, 3D mice, and a variety of trackballs are all available, and all miss the spot, whether due to expense, unneeded complexity, or simple lack of interest. Trackballs are an example: the one ongoing use I've seen is in film color-correction workstations, where they serve a specific color-wheel mode in which it's natural to nudge a few axes simultaneously.
Phones also have vibration feedback, but I've rarely seen it used, and dare I say, never well (I'd love to see an example of an app that uses vibration frequently and effectively).
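Not something the comment above specifies, but as a rough illustration of the mechanics: a minimal sketch of tap feedback using the browser Vibration API (`navigator.vibrate`), assuming a device/browser that supports it. The `hapticTap` helper name and the 15 ms pulse length are illustrative choices, not anything standard.

```typescript
// Minimal sketch: buzz briefly on touch to fake a "click".
// Feature-detect, since many browsers/devices don't implement vibrate().
function hapticTap(pattern: number | number[] = 15): void {
  if (typeof navigator !== "undefined" && "vibrate" in navigator) {
    // A very short pulse (~15 ms) reads as a tap; longer patterns
    // like [30, 50, 30] feel more like a notification buzz.
    navigator.vibrate(pattern);
  }
}

// Example usage: give every on-screen button a pulse when touched.
document.querySelectorAll("button").forEach((btn) => {
  btn.addEventListener("touchstart", () => hapticTap(), { passive: true });
});
```

Whether a bare buzz actually reads as a "button press" is exactly the open question here - a pulse is crude compared to real key travel.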
Slide areas on a screen are effective when the domain of attack is narrowed to the standard 3-7 options. Mouse interaction is best with a narrowed selection of areas/icons/corners - love those 4-pixel boxes. And they are both cost-effective. As the tech to 'understand' the user's domain becomes more sophisticated and kicks in 'earlier' in the interaction cycle, gesture interaction and eye tracking will become more effective, and the participant will begin to feel the system knows their needs.
As much as I love thumbing through a book and can imagine a dance interface, the future is probably less and less pointing, and more a single tap that says "go ahead, just what I want" than elaborate touchable interaction.
This essay is what leads me to think about the possibilities of user interfaces composed of real-world objects which become "smart" via the projection of augmented reality. For example, your average tennis ball could become a slider, a knob, or even a virtual storage container with a heads-up display like the one produced by Meta.
I wonder if the future of user interfaces is simple but universal real-world controls (something like a meatspace UI toolkit) combined with AR. With AR, any surface is a display. The contemporary "pictures under glass" model of UI fundamentally falls out of the limitations of current display technology (namely, displays are a type of surface, but not all surfaces are displays), so if most flat surfaces become displays (virtual or otherwise), the space of ideas around user interfaces loses a large coupling and fundamentally new things should be possible.
I see it differently though. Touchscreens are interfaces to something so fundamentally abstract (data presentation/manipulation) that it's hard for it to be less ethereal than images that change on a display. Touchscreens just factored away the mouse/keyboard.
The next information-driven interfaces will most likely be something you don't even interact with using your hands (retina images? voice recognition? brain-computer interfaces?) rather than some sort of screen with tactile feedback. Physical interaction will be limited to tools which map to tasks in the physical realm.
I am also an interaction designer, and while I can certainly understand the author's concerns, abstracting our physical world into visual, often 2D, data that we can manipulate with minimal effort is the direction "nature" is taking.
We are deciding as a species that what makes us human is the mind, and our tools will reflect that. To think that we will build specific tools that simulate an archaic contextual experience to preserve an unnecessary tradition just screams waste to me.
If by an 'archaic contextual experience' you mean real-world objects, and by 'unnecessary tradition' you mean body movements, then I don't know what kind of future you want to live in.
I have in my pocket a wad of clay. I flatten it into a pancake and its face lights up with the book I was reading on the subway. Roll it into a ball, thin it out, maybe a special twist and it forms itself into a screwdriver. Another squeeze and it becomes my house key.
It's the reason why I prefer tactile keyboards and gaming mice over these finger pads that Apple seems to be trying to force on everyone. You get a more personal feel, rather than a numb rubbing sensation.
Interesting - I've come across this article before, but now, owing to events in my life, my perspective on it is completely different.
Up until 2011 I was basically an upper-middle-grade computer user. I owned a MacBook and used the trackpad heavily. The responsive gestures and seamless software experience were wonderful compared to Windows Vista. I knew a few keyboard shortcuts, and I'd rigged up a program so I could move between tabs in Firefox with finger swipes (among a few other gesture tricks), which I thought was very clever, but my computer skills didn't go much further - compared to the average user I was skilled, though.
Then I injured my hand, and ever since I haven't been able to use a trackpad without my hand stiffening up and getting sore, which basically killed the entire experience. For a while I struggled along with my phone and iPad, until I wound up with some sort of nerve damage in my thumbs from a combination of rapid tapping on hard glass and haptic buzzing on my phone's softkeys. Nasty. Other ergonomic problems have mixed in with that - it amazes me that I could use desktop computers so effortlessly when I was a kid; now I get a stiff neck and my arms hurt from resting on the desk (I'm working on getting a better desk...).
It's been an interesting experience, if at times extremely frustrating (as my appetite to consume information has only grown all throughout despite my deteriorating ability to use the most common computing forms.) Now I look around me and see what looks like a looming ergonomic disaster. I cringe now when I see people tapping away on hard, flat glass surfaces (not to mention all the hunched backs and pitched necks from laptop use - pretty breathtaking once you start looking for it). I wonder how widespread the ergonomic problems from hard/flat surface interactions are - seems like the kind of thing that could be simmering away behind the scenes (given that the worse you have this sort of problem, the harder it gets to remain socially active online, so it effectively silences you.) It seems like a bit of a perfect storm - there's just such a huge gap between the wider public and the media that serves them on the one hand, and people like HN users who actually know and are passionate about computers and usability on the other.
Me, I've been using Vim extensions and trying to make my laptop more keyboard-centric. Until eye tracking and neural interfaces become available, the keyboard cannot be beaten for direct brain-to-action speed and control. My dream would be a computing environment that combines the beauty of modern touch and web interfaces with a unified, universal, modal keyboard control system that reaches into the browser. It would be nice if there were some standardisation of keyboard access for websites as well - Vim extensions are nice but limited; it's hard to fly with something that can't handle a YouTube video...
Anyway. I like to point out the difference between Apple's Magic Mouse and an ergonomic tool like the Handshoemouse. One looks beautiful but is a piece of shit interaction-wise. The other looks bizarre but is actually built for a human hand. Unfortunately we're living in a world where the money is in serving the masses - people who for the most part will never understand how fast and powerful computers could be if they were moulded to suit our bodies, and who therefore think Apple products are perfect. Hopefully, though, as more people come up on these mass-market products and develop a taste for computing, demand will start to build for a higher grade of computer tools.
I don't know what he meant, but Google Glass is definitely not replacing hands with eyes. I touch mine regularly to scroll through and select menus. The other option is voice commands.