The Post-Mac Interface (medium.com/twomonthsoff)
129 points by louis-paul on Aug 3, 2015 | 57 comments



"Most of us don’t even need all the computing power in our pocket. We’ve gotten to a place of sufficiency, when just about any computer or smartphone is good enough for what we want them to do."

This is a fairly timeless sentiment. Even more ubiquitous is looking back and laughing at just how primitive formerly-sufficient technology was. You'd think we'd know by now to just not have this thought anymore!


I would say that a lot of the time software just keeps getting slower as developers care less about efficiency.

I'm not talking about workloads you can't get to run faster. If developers took the same kind of care they did in the '80s and '90s, the software would be incredibly fast. But these days they can afford to be more careless.

Sometimes it's a good thing, as things ship faster and frameworks are more expressive. Other times, though, it's a PITA. Like when half the time we upgrade just to run the ton of bloatware from different vendors on the web / multitasking OSes at a decent speed. Chrome is usually the biggest hog on my machine. Or like when Apple's iOS keeps making the iPhone slower with every version, until you're practically forced to upgrade. Is it really that hard to keep an OS that hardly does any major multitasking from getting slower and slower with every release? One almost wonders if they do it on purpose ;-)


Perhaps I'm misunderstanding you, but what you're saying doesn't match up with my experience. Pretty much any computer built in the last decade has been fast enough to do everything I need without making me wait. Screen resolution has improved somewhat, lately, but new machines feel just like old ones when it comes to performance, and that definitely was not the case from 1985-2005.


Computers have reached the same state as cars and household appliances: nobody needs an 800hp car. Having 200+ is nice and fun, but most people can probably get by with 150 for daily use. A model from 10 years ago will fulfill the same needs as one from today; the new one might be a bit more efficient, quieter, and more comfortable, but in essence they both just take you from point A to point B in the same time. </inevitable car analogy>


In point of fact I'm using a computer from ~2002ish, and it works more or less fine. The biggest issue is websites that pull in like 1000 external resources and constantly run scripts.


Yeah, that's been my experience too. The NoScript plugin helps; I'm generally not interested in dealing with complicated websites full of scripts anyway, and prefer to opt-in rather than leave everything running by default.


A machine from 10 years ago will run software developed 10 years ago with perfectly adequate performance -- probably true even prior to 2005 as well as any reasonable point in the near future. But the bit I quoted is breaking out of that completely, basically claiming that we're at some sort of hockey stick curve where that software-hardware vintage parity won't be required anymore.


Well, yes, that's exactly what I was saying: we reached that point on the curve somewhere close to a decade ago, and old machines are perfectly capable of doing everything I need them to do now, today, with current software.

The changes which have actually made a difference to my computing experience over the last decade were the arrival of affordable SSDs and a jump upward in LCD pixel density.


I certainly agree that we can do most of the same things if our needs don't change, but usually our needs do change. Even if the software isn't simply more bloated than it used to be, we usually have new needs: say, creating or consuming something in a crazy new format. I guess we've pretty much maximized the number of pixels that fit on a hand/pocket-sized screen, but if filling our entire living room wall with 16K video is the new norm someday, and we're showing off a video we just took on our pocket/wrist device, we'll need serious horsepower. This example is hyperbole, but the point is that I find it hard to predict we won't find a way to use serious increases in processing power whenever they're available.


Well, sure we CAN use it, but what he's saying is that we no longer notice. 15 years ago, ads for new computers and components had kids with their hair slicked back because of how BLAZING FAST they were going, because that's how it felt when you upgraded. Now it's more like 'oh hey, it doesn't run like a slug anymore because this device is brand new'. People don't get new laptops because they need a faster device nowadays; they do it because they actually wore down the old one. That meme that computers are obsolete within 6 months of purchase is no longer valid; they're good pretty much until the physical components break down. Most people won't replace their smartphone until the screen, charging port, or digitizer breaks (barring the very real bleeding-edge crowd, which holds less ground these days).

Everything is just...adequate. People don't log on to an older machine and lament how slow it is anymore, they don't balk at how huge the new hard drives are, and consumer-grade disk speeds haven't gone up since 7200rpm drives were introduced. Video hardware is pretty much the only technology in most computers that still has reason to march on, because game developers can always add higher-resolution textures and cooler shaders.


Consumer-grade disk speeds have gone up a ton since 7200rpm disks, with the introduction of SSDs, which make a huge difference.

Furthermore, while people aren't really after bigger screens, getting to a point where a laptop or desktop has similar DPI to a smartphone would require vastly higher resolutions for which there isn't the technology yet.


Until we gladly accept brute force algorithms as usable solutions, I don't think I can agree with the sentiment.

Almost nothing I want to do on computers runs fast enough for me, and I'm too stupid to invent good algorithms. But I haven't given up on solving hard problems, so naturally I will look around and see machines incapable of even getting started.


My laptop has a 2.7GHz processor. When I need to conserve power, I lock it down to 0.8GHz, and sometimes I forget to unlock it. Most of the time I don't even notice the difference.
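
(For anyone curious how: on a Linux laptop this can be done with something like the cpupower tool. A rough sketch; the exact frequencies and tooling will vary by OS and distro:)

  $ sudo cpupower frequency-set -u 800MHz   # cap the max clock at 0.8GHz
  $ sudo cpupower frequency-set -u 2.7GHz   # lift the cap again later
  $ cpupower frequency-info                 # check the current limits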

Maybe, at some point, we will come up with a use case that needs more compute than what we currently have, but for most people's current use case, compute is not the limiting factor.


Cycles are very much the limiting factor for any kind of creative computing.

I could literally eat a 1000X speed increase without blinking.

I think everyone else could too. The difference isn't about speed, but about quality of interaction.

Today's minimal UIs are a good impedance match for the limited cycles available. iOS was particularly good at leading the way on this, creating a UI that was relatively undemanding of raw cycles but still created an unexpectedly immersive experience.

A big part of that was the move from type-and-point to drag-and-tap. The tactile mode, the skeuomorphism, and the portability and instant access compensated for having to squeeze the UI into a tiny screen.

It felt like a more responsive experience even though it was quite a bit slower and more rationed than equivalent interaction on a full-sized desktop.

That's still true today. You can get stuff done on a tablet, but the most efficient and productive UIs pre-ration the affordances. A desktop is much more flexible/general and gives you much more visual bandwidth.

If you apply this idea to hypothetical 1000X hardware, you won't get more of the same but faster - you'll get completely new modes of interaction, which will likely combine the immediacy of touch with the high bandwidth of desktop interaction, and take them much further than anything available today.

It's possible to imagine all kinds of science fiction experiences. The reality will probably still be a surprise.

Even so - the point is really that today's users feel like they're doing everything they want, because UIs are designed without affordances that would be slow and frustrating given today's hardware.

That will change with faster hardware and new designs, and phone UIs will seem as quaint as a 720 x 350 monochrome desktop PC screen running DOS.


I used to do that too. The most limiting factor for "mainstream computing" is crapware / ads / shifting UX. That's always the reason the average-joe neighbor ends up buying a new machine. Random popups, slowdowns, and frustration over 4-5 years add up to enough for someone to believe that shelling out $500 again on new shiny will finally bring them the land of click abundance they dream of.

Very few hard requirements matter for most users today: battery life, HD video for Skype/FaceTime. And maybe an SSD, which is now large and cheap enough for people to switch and get back the bliss of instantaneous reaction. Other than that, no important task will require more than a Core 2 Duo. They won't saturate Excel (which until not long ago was still single-core only). In the case they really do, that spreadsheet is probably worth enough to buy a full-fledged MBP.


Don't mean to derail the conversation, but... wha?!?!? That's amazing, I had no idea you could do that. How do you do that?



Have you bought a computer for someone in the last five years? I am not talking about power users, I am talking about the friend or relative that looks to you for technical guidance. Almost every time I have heard the same thing: "This is so much faster/better than my old one". Sure, a large portion of that is due to less crapware and software improvements, but it is a noticeable difference to most users and they appreciate the additional processing capacity (although they may not know that that is a factor in the improvement).

Computing isn't a limiting factor in almost anything we do (I occasionally code and test on a Raspberry Pi), but we do notice and appreciate the improvements that performance brings. Even naive users notice.


My laptop does this when the battery is low, and I can feel it when interacting with the UI in the browser - latency is notably increased.


While there is always power in this sort of "yeah, we've always said that and always been wrong!" argument, I still feel that the benefits of computing "power" have slowed to a crawl recently. Which is not to say that computing devices don't improve; they just do so differently: they are better designed, lighter, more durable, have better battery life, etc. rather than running noticeably faster (in fact, we are usually happy to trade "power" away for improvements to all those other things, especially battery life).


This only holds up in the absence of things like Wirth's law and its variants. Not enough people are incentivized to be as efficient (in terms of CPU, etc.) as possible. Device makers are, because it affects their battery life (marketing) numbers, but only in closed systems (like iOS) could we even begin to assume that device makers have much influence over whether the downstream software development is sufficiently resource-efficient.


For everyone commenting to counter your point:

Try to encode high-resolution 10bit RGB 444 video and stream it to the internet. While playing a AAA game on the same machine. Try to get the latency as low as possible and make sure the video is smooth, without artifacts on the stream.

This is the kind of problem a lot of folks are trying to solve right now and the current hardware is barely good enough to keep up...and that's with adding highly specialized add-on cards.
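
For a sense of scale, here's a rough sketch of that kind of software-encode pipeline (assuming Linux/X11 screen capture, an x264 build with 10-bit support, and placeholder resolution, bitrate, and destination):

  # software x264 encode of a 10-bit 4:4:4 screen capture, pushed out as MPEG-TS over UDP
  ffmpeg -f x11grab -framerate 60 -video_size 2560x1440 -i :0.0 \
    -c:v libx264 -preset veryfast -tune zerolatency \
    -pix_fmt yuv444p10le -b:v 12M \
    -f mpegts udp://203.0.113.5:9000

Doing that in software while leaving headroom for the game is exactly where the hardware struggles.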

Try emulating Sega ST-V games on your machine -- and realize quickly that accurate emulation is a matter of clock speed over most other factors...and clock speed growth has mostly slowed down since the Intel C2Q era.

When you provide compute power/bandwidth/whatever, someone will find a need for it.


I don't think anyone here is arguing that no one has a need for more power, but all of what you are describing is a niche use case. We seem to be past the time when we need more power for the general use case.


Fair point, but there are some trends in technology that it doesn't take into account.

As technology becomes more centralized, the computing power of your phone or laptop becomes much less important than the power of Google's and Amazon's data centers. Your devices essentially become clients for computation done in the cloud.

Technological advancements generally don't require increases in the client's computing power anymore.


It's the case now, but the trend toward the client+server model (terminal+mainframe, app+cloud, whatever it's named any given decade) versus the local model (personal computers, etc.) will probably be somewhat cyclical, as fads are across industries. The contributing factors are wildly unpredictable... networks (or the internet) existing, privacy tolerance, battery breakthroughs, wealth distribution, who knows. We're cool with thin clients now, but it could go out of fashion the moment some mainstream confidentiality breach hits home a little too hard (for example).


"I use Siri all the time to set up reminders, but not much else."

Siri, Google Now, or Cortana are on the verge of becoming something quite useful. Hopefully, all three. Voice as a user interface is perfect for many tasks:

I use Siri all the time with Apple Music: https://h4labs.wordpress.com/2015/07/31/apple-musics-killer-...

Someday soon, we'll even use voice recognition to help write code:

https://www.extrahop.com/blog/2014/programming-by-voice-stay...

http://ergoemacs.org/emacs/using_voice_to_code.html


>Siri, Google Now, or Cortana are on the verge of becoming something quite useful.

That's one of those things that will come down to personal preference. Personally I don't see voice as a suitable interface for anything. I get that for some minor tasks voice is going to be fast, but for larger workflows voice isn't a fast input or feedback method. If you don't have a screen in front of you, you'll need to keep a mental image of your actions, which seems to just make things harder. If you do have a screen, then I don't get why you wouldn't have a keyboard and mouse too.

Honestly I just don't want to talk to my computer (or phone); I don't see it as an efficient input device. I still haven't tried Windows 10, but when I do, the first thing I will be looking for is a way to make Cortana disappear. It's creepy as hell and not how I want to use a computer. Mostly it seems like a technology we implement because science fiction says we should.

Arguably there are cases where voice is better, simply because it really is the only option: car stereos and navigation, for example. It's really the only safe option.


I can see it now, an open plan office, 100 developers with noise cancelling headphones on, yelling at their computers because they can't tell how loud they are.


In the movie Her, there were several office scenes and a subway? scene with Joaquin Phoenix talking in a low voice to his computer. You probably need a directional noise canceling microphone, but not headphones. In the last link that I gave, there is a recommended microphone that Tavis Rudd uses.


I was very excited for Siri-Apple Music integration but I switched back recently. Queries like these didn't work:

    * Play top songs from 2010
    * Play some indie rock music 
    * Play Rihanna's most popular song
    * Play this month's top songs
Aside from poor voice recognition, even after it got the input right, the outcome was terrible.


Siri is very good, but not good enough to fix pop music from the last decade.


Before mocking me, note that those are just sample queries. The music I listen to has nothing to do with it; I'm not here to tell people what I'm listening to.


I'm sorry, I was not mocking you, nor your music tastes. I was just making fun of the idea that you can be understood by a piece of software, yet be unhappy with the results. (Especially when it comes to music)

Think about saying: Siri: Play the best song from Justin Bieber.

And maybe being a bit disappointed that it is not quite as awesome as, say, a Led Zeppelin song...


Voice recognition is good as long as you speak English without a strong accent, which is a bit limiting for non-native speakers.


>"Think of how often you Undo every day. Undo makes lots of apps safe for exploration and regular use."

In 2015, when I see and use software that doesn't subtly prompt the user to undo actions for a short period of time after an interaction I just shake my head.

It's such an easy way to reduce customer service complaints it's ridiculous. Empirically, with the companies I have consulted, 'ridiculous' often means 20-40% instant reduction of complaints.


> In 2015, when I see and use software that doesn't subtly prompt the user to undo actions for a short period of time after an interaction I just shake my head.

Subtly prompting is not enough; keeping a recycle bin or undo stack is much preferable. The Android photo app is a great example of this. By sliding right or left you move to the next/previous photo. By sliding the photo up you delete it and then move to the next photo! The only notification of this is a subtle prompt at the bottom of the screen: "Photo deleted - Undo?"

I gave my tablet to my mom to browse through photos; guess what she did to browse to the next photo: swipe up. She didn't notice the subtle prompt at the bottom of the screen and ended up deleting half of my photos before she noticed. By then it was too late, as the undo only worked one step back. Luckily I had the photos stored elsewhere too.


This has to be the worst UI decision I've seen in any app ever.

Especially as when you're lying down the phone can switch between landscape and portrait seemingly randomly, and suddenly the exact same action for scrolling becomes delete.


The Unix shell, one of the most successful user interfaces ever developed, doesn't support undo nor does it need to. It's successful because it doesn't presume to know better than the user.


> The Unix shell doesn't support undo

Oh, but I would like it so much if it did. There would be no need for --dry-run options on commands, and it would be much easier to test command-line software and do exploratory learning.

I think it would be a great usability improvement. Even if it were "only" on a file-system change level. And even if it weren't always enabled, like, maybe make it so you have to first "begin" a file-system transaction to be able to roll it back if you don't like the changes you made. Imagine something like:

  $ begin-tx
  tx$ ./some-script-i-am-not-sure-works-properly
  tx$ # Check if everything looks good.
  tx$ # In case everything went good:
  tx$ Ctrl+D # ...or "commit" to end the transaction.
  tx$ # Or, in case i see something went wrong:
  tx$ Ctrl+X # ...or "abort" to roll back the changes.
  $
(Disclaimer: maybe something like this already exists, is widely supported, and I've just been doing it wrong the whole time.)

Strangely, the only command-line software where I feel I can do exploratory learning, instead of always having to resort to the man pages, is git. With git, even if I'm doing the most destructive history rewriting with `git filter-branch`, I know that if I screw up I can always run `git reflog` and check out some ref that I know was OK.
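
The recovery path is roughly this (the @{1} index is only illustrative; pick whichever reflog entry was known-good):

  $ git reflog                  # list where HEAD has pointed recently
  $ git reset --hard HEAD@{1}   # jump back to a state that was known to be OK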


You would need to implement this at the file-system level. You should be able to do this easily with any file-system that supports snapshotting, such as btrfs.
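
A very rough sketch of what such a "transaction" could look like with btrfs snapshots (paths are illustrative and assume /home is its own subvolume; the rollback is the fiddly part in practice):

  $ sudo btrfs subvolume snapshot /home /home-before-tx   # "begin-tx": cheap copy-on-write snapshot
  $ ./some-script-i-am-not-sure-works-properly
  $ sudo btrfs subvolume delete /home-before-tx           # "commit": drop the safety snapshot
  # "abort" would mean swapping the snapshot back in place of /home and remounting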


I don't think it's that easy and I do think the parent has a point. I mean, while you're busy writing a perl one-liner to munge on your file, ten daemons are writing to their log files, all on the same file system. You want to roll back the effects of your script doing an accidental rm ~/*, but not what's in the logs. Probably a better idea is to strace the shell's children and record changes, then apply or forget them.


> doesn't support undo nor does it need to

Anyone who's ever fat-fingered an rm command will disagree with you there.

Your comment reeks of some kind of "brogrammer" machismo and it makes me irrationally angry.


> The Unix shell, one of the most successful user interfaces ever developed

By what measure? Cause it sure isn't widespread end user adoption…


My iPad has been sidelining my MBP for some time. Installing the public beta of iOS 9 - which includes cmd+tab switching between apps - accelerated that trend. And increasingly, I'm using Siri / voice input for more and more stuff. Now my biggest frustration is that my physical keyboard doesn't include a mic button like the iPad's onscreen keyboard.

If anyone from Apple is reading this, note that you've got two empty spots on either side of the up arrow. The other one would be fine for emoji ;-)


Apple's latest design removed those empty spots.[1] But don't despair, you can already get a lot of what you want without dedicated buttons.

To open an emoji menu in OS X (10.10 or later IIRC), just focus on a text input and hit ^ + ⌘ + spacebar. You can also search for other unicode characters. "⌘" is a little tricky to find, since it's named "place of interest marker" or something like that.

OS X doesn't yet have Siri, but it does have Dictation. I assume that when Siri replaces Dictation, the typical keyboard shortcuts for toggling it will stay the same: double-tapping fn or ⌘.

1. http://geoff.greer.fm/photos/pics/IMG_1269.JPG


I see now why my mac-using friends love vim keybinds: those arrows look like a nightmare to use. ;)


I could really see Apple starting to deprecate the Mac with the release of the iPad Pro. "Now you can develop iOS on iOS! No emulator required!"


There are times when Xcode chugs on a top-of-the-line 2015 15" 2.5GHz rMBP with 16GB of RAM and a 2GB/s PCI-E SSD. Not frequently, mind you, but it does happen. It's incredibly heavy, and I can't see it being usable on an iPad unless Apple manages to magically pull out an A-series CPU that's in the same class as a Haswell i7.


I like that superposition of the "Anti-Mac's" Reality vs. the "Mac's" Metaphor. Dudes, even we mere mortals only interact with "Reality" through interfaces: convenient metaphors delivered by imperfect senses to the adaptive neural networks of the "mind". How the "Anti-Mac" can interface with Reality directly is beyond my understanding; some enlightenment would be much appreciated.


I am starting to see a trend of GUIs being structure-less / hierarchy-less. A prime example is Amazon's Music app for Windows. It moves from screen to screen with no apparent logic, with some screens (like the download manager tab) appearing sometimes but with no obvious way to get back to them if we hide them. Same thing for the iTunes app, which doesn't show on a screen how we got to it. Both seem to be inspired by the web, except that the web pretty much always has a menu at the top which gives a good idea of where we are.

The Windows 10 UI also seems to follow this logic and seems to have ditched the idea of showing any apparent hierarchy in the sub-menus of the control panels, particularly in full-screen mode.

I can't tell if this design upsets me because I am used to dealing with GUIs that have a clear hierarchy, or simply because it is badly designed. I guess the acid test is whether a kid can find his way around comfortably.


What's the difference between an app and a program? The author talks about older programs and newer apps; syntactically it would seem to me that any program would have several 'applications.' But that doesn't seem to be how the word is used, since everything wants to be an app today.


I think app is just a trendy word for application (or computer program) possibly with the implication that it has a user interface.

So applications are old apps and apps are new applications.


> I own my iTunes library, don’t need a membership card, and I don’t exactly “borrow” items from it.

Depends on how many songs have DRM on them, I guess. It's certainly not true for iTunes shows and movies.


> You do less pointing-and-clicking; instead you tell the computer what you want.

Sign me up, can't stand all the pointing-and-clicking.


It's called a terminal. hehe.


We need a formal language that is a little more approachable than what we currently type into the terminal. But dictating to a terminal that listens well would be great!


Oh yes. I spend much of my time there. To the point where I don't even have a graphical file manager let alone a desktop environment installed on my machine.

However, the shell, vim, tiling window managers, and keyboard-navigation plugins for the browser only seem to go so far, and really are only an ideal interface for programming, reading, writing, and manipulating data.

To work on audio, images, or even to use many web apps, I have to accept a less fluid interface that is spatially laid out, with visual elements representing actions I might take. There are hotkeys, but 1. they're usually not sensibly laid out and 2. they don't compose, so I have no choice but to dig around interfaces (which tend to be flat and modeless).

It seems like I should be able to manipulate image or musical data in the fluid, comfortable, composable, hands-off way I can approach programming tasks or even creative programming, but the tools just aren't there. There's no real high-level interaction language that I've seen for, say, music or graphics. There is no vim for music.

When I edit Clojure code, for example, in vim with vim-exp or paredit, I'm not editing text. I feel like I'm able to navigate and edit a structure that my editor and I both have a shared understanding of. A structure with a clear interaction language and representation. Working with graphics is a pain because I feel that I'm manipulating the artifacts rather than controlling inputs into a process. And while music creation software has come a long way, there is a lot of visual noise, buttons that represent actions, and skeuomorphic elements rather than a clean interaction language. I still prefer to at least begin most of my music work with hardware sequencers because they're focused and discrete.

There are some great tools for working with sound (SuperCollider, Overtone), but those are programming environments more so than interactive ones.

I want to be able to describe part of an idea, supply some rough parameters and get some immediate feedback as to what that might look or sound like.. With the option of using whatever data I have at hand as an input. Then I want to be able to adjust those parameters, possibly even making their values a function of some other elements I've already made, and when I'm happy with it I'd then want to be able to select over, edit, and build off of the objects I've created.

Buttons that advertise actions are just noise to me.. I want the entire screen just to be feedback as to the state of my project and the action that I'm constructing.

Drag and drop.. Is a drag.. Why am I carrying icons across a computer screen? Why am I performing virtual manual labor? Copy and paste is a little better but I want to see what it is that I've grabbed before I place it. I want to be able to paste in multiple places and edit them all at once. And I want to be able to describe rather than having to indicate where I want things to go and how I want them to change.

All the "best" interfaces for creative work begin with the assumption that the user is going to point at things or drag their finger across a touch screen, hot keys are there for power users, but they mostly provide quick access to a pallete of isolated and uncomposable actions where you modify the artifacts in steps moving it closer to your vision.

Wouldn't it be nicer to modify the transformation you want to apply in steps?

Poking things, whether with a finger or a mouse pointer, is almost certainly not the pinnacle of UI. It's an emulation and extension of things people do with their hands. (Draw a picture with a pencil, cut and splice tape, pencil notes onto sheet music.) Lots of where and not enough what.

Lots of showing the computer what you want done, or really doing it manually with the provided tools, rather than telling the machine what to do or describing what you want.

The speech recognition and natural language processing work that's being done is promising. Human languages can be written as well as spoken though. (And written with a lot less ambiguity.)

I'm going to keep clinging to my array of general-purpose buttons that I've built muscle memory for, and do what I can to make tools for it. I feel very strongly that the interactive picture book is not an actual improvement on language-based interfaces, though. It's different, and it's a recent development based on recent technology, but it's really a separate thing altogether, even though society is chasing the trend and talking about it as though it is the best man-machine interface we have to date.

The keyboard sucks. It gives you RSI, it's big, it's noisy. But when you know it you know it. You can go in any of 101 different directions instantaneously when using it as an array of buttons, or when using it to actually type.. Well it's infinite. No features ever need to get dropped because their button was wasting space on the screen. Nothing has to waste space on the screen. You can read the doc, refer back to it when needed, memorize what you need and discard the rest, maybe peruse it periodically to update your software.

Well that's my 11 cents. Rant over. I have to confess to being one of those people with a display approaching the size of a desk (that I have no desire to ever touch for any reason).



