The Anti-Mac User Interface (1996) (nngroup.com)
295 points by goranmoomin on Dec 11, 2020 | 149 comments



> At recent user interface conferences, several speakers have lamented that the human interface is stuck. We seem to have settled on the WIMP (windows, icons, menus, pointer) model, and there is very little real innovation in interface design anymore.

Alternately, you can say that UX design stabilized on a known good pattern.

This reminds me of how some people will refer to stable software projects that only receive the occasional security patch as "abandoned". They're not abandoned, they're just stable.


I completely agree. The design of a hammer hasn't changed much in centuries because it's already designed well for its purpose. Same with sharks: I don't hear biologists saying "there hasn't been much innovation in sharks for millions of years."


The design of a hammer has actually changed substantially over time. More recently, there's been a trend towards specialization in the framing hammer alone. 100 years ago most hammers looked like the classic hammers I remember from childhood: relatively small, sharply curved claws, a smooth face. As production framing exploded in the US after WWII, framers were pushing for more efficiency.

The framing hammer got heavier with a longer handle, broader face, straighter claw, and waffled face better for gripping nails (also why most loose framing nails have a cross-hatched pattern: so they can mate with the hammer face). From there, materials science really kicked in, and we saw steel-handled models, followed by fiberglass and other composite handles.

The latest developments (that I'm aware of) are products like the Stiletto (http://www.stiletto.com/p-80-ti-bone-iii-hammer-with-milled-...), which leverage materials like titanium to reduce weight while maintaining driving power, and include a replaceable steel face to prolong hammer life and allow using different faces for different applications.

Modern hammers with advanced material properties and functions can cost hundreds of dollars, but deliver much higher efficiency with less fatigue and a longer life. I compare that with the Sears hammer in my grandfather's garage and see a whole new generation of evolution.

There's a great article about hammer history at Fine Homebuilding: https://www.finehomebuilding.com/project-guides/framing/hamm...

edited: Fix link formatting.


Noted. What has been stable for a long time and remains frustratingly static and unrefined is the field of surgical instruments. There are a few exceptions, but most haven't changed in the 20 years I have been a surgeon. The regulatory costs of developing new instruments and the inertia of manufacturers don't help. Once a manufacturer gains market share they try to keep it by not innovating, which keeps their development costs down, and the consumer volume is comparatively low. I don't get the feeling there is any sense of patient altruism. I have been using the same 4 mm diameter, 25 cm long endoscope since 2007, when the first iPhone came out. It is very simple, made of glass fibres, with a Sony 3-chip camera that is 10 years old. We get some very slow progression on video output and have just got 4K. Compared to consumer tech the advances are ridiculously slow. We need a camera (the iPhone camera is small enough) on a steerable stick. How hard an engineering task can that be?


A couple of weeks ago I noticed we had several active farmers here, today a surgeon with twenty years of experience :-)

I'm happy to see so many different people here in addition to programmers, techies, VC etc.


> The regulatory costs of developing new instruments and intertia of manufacturers doesnt help.

I work on (mostly non-critical) medical devices and I would like to remind people that regulation is there for a purpose.

If anything, the 737 Max fiasco should remind people what can happen when regulation is treated as a cost to be cut in a field where there are real hazards, and where some people in the chain do not have the best interests of the public in mind.

And yes, in the medical industry too, there can be some people who care more about optimizing profits than about patients.

Maybe some regulatory rules are undue, but hopefully they are not the majority.

In the case of what I work on, regulation does not prevent us from using state-of-the-art CPUs and GPUs, so yes, R&D may need somewhat longer cycles than consumer electronics to get a return on investment. But let's be honest, it is not scandalous to get some features a few years after you get similar things in consumer electronics, especially in cases where there are diminishing returns from improving X or Y.

And yes, it is easier to build things when you don't have to care about, e.g., the ability to disinfect materials, and when you can cope with more bugs, etc.


Half of that is the major manufacturers, and the other half is the FDA. These huge companies, which have lots of money, intentionally get the FDA to set a high bar with lots of expensive testing in the regulations. This creates a HUGE barrier to entry. Right now I'm working with a pathology lab that is buying a pathology slide scanner. This is basically a slide-handling robot with a high-end camera hooked to a PC with an image viewer. The combined system costs over $300,000 from companies like Leica and Philips. The software is incredibly basic - it's basically Thumbs Plus with IrfanView - and that's $100,000 alone, aside from the scanner. But they charge that because it's incredibly expensive for a startup to come in and challenge their pricing.


I don't suppose you'd count something like the da Vinci robots as a step forward? I got to try out a demo unit once (on a dummy! not on a real human), really cool stuff. I'm really glad I tried it _after_ I had my laparoscopic appendectomy - it was somehow a little terrifying how clumsy it felt holding the "regular" tools used, compared to using the robot.


Yes, though they have been around now for about 20 years, are big (no good for e.g. neurosurgery or ENT) and cost $2m. You don't need that for a lap appendicectomy - just a better scope and a good surgeon. Robots have no advantage over human dexterity. They have a few niche roles, e.g. where we can't get our hands in, like the prostate.


The metaphor of a hammer has barely changed over time. What you're describing are implementation improvements - not trivial, but you could take a modern hammer back a few centuries and it would still be recognisable as a hammer. Albeit a very unusual one.

UX/UI has the same issues. Everything is a metaphor anyway. You don't get to choose whether your interface is a metaphor, because there is no other option for interfaces. You only get to choose the type of metaphor, and its affordances - from hand-editing binary in a "file" (...which is also a metaphor) to voice recognition.

There are some good points in the article, but they're maybe 10% of the way to a full understanding of this issue. Most of the complaints are about inconsistencies and expert-level operation (written scripting) vs beginner-level operation. But there's also a point about contextual metadata.

Modern operating systems are pretty bad at all of the above, but that's because designing intuitive and powerful interfaces that incorporate expert-level features with some workable built-in intelligence - and preferably some form of composability - is incredibly hard.

It's so hard it's barely been attempted, never mind done successfully. So most operations in userland are explicitly task-oriented. Their settings are customisable, but not the menu of operations on offer.

As a non-expert if you want to rename a folder full of files, you buy a file renamer. You don't try to write a script, because even trivial scripting requires a level of comfort with abstractions that most users simply don't have.

Experts do have that skill level, but they can already use $scripting_lang.
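To be concrete, the kind of rename job a non-expert buys an app for is only a handful of lines - the gap is conceptual, not volume of code. Here's a minimal sketch in Python, where the folder path and naming scheme are made up purely for illustration:

  import os

  folder = "/path/to/photos"  # hypothetical folder
  # rename everything to vacation_001.jpg, vacation_002.png, ... (made-up scheme)
  for i, name in enumerate(sorted(os.listdir(folder)), start=1):
      ext = os.path.splitext(name)[1]
      os.rename(os.path.join(folder, name),
                os.path.join(folder, f"vacation_{i:03d}{ext}"))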

It's possible to imagine an OS that would be more open-ended and wouldn't silo data inside task-oriented applications. But this runs into huge problems with efficient representations and the most appropriate schema for each domain, and just the concept on its own is far outside anything most users would want to deal with.


That’s a really good counterexample because it’s still easily and immediately recognisable as a hammer. Anyone picking it up knows how to use it without even thinking about it. In fact they might not even consciously notice the changes and how they improve the tool, except that it seems better. That’s great incremental UX improvement.

A lot of the critics of the lack of innovation in computer UX design don’t want incremental improvements of the existing building blocks of modern UIs, they want to tear it all down and start from scratch. They want VR interfaces, Jef Raskin’s The Humane Environment, button-less UIs, etc. They don't care about better hammers, they want nail screwdrivers or handle-less hammers.


I think this supports the OP's argument that we're refining, not inventing. Everything you've said here is about refining; it's not a completely new way to attach materials together.


Yeah, I probably should have clarified that I intended no comment on the OP's point about innovation in computer UI. I just wanted to point out that there's actually a surprising amount of evolution in the design of the hammer.

Something that seems to be so simple, and has existed for thousands of years, can still be made better. I'm not a professional carpenter, but I've used a hammer a lot to do things like framing, and can confirm that many of these innovations are meaningful in function, not just form.


Going back to the OP though that talked about settling on the WIMP model, you're not really contradicting their point.

If you take a hammer from 1920 and lay it next to the most jazzed up hammer from 2020, they would be recognised as the same tool/having the same general purpose. A carpenter from 1920 wouldn't need to change the way he used a hammer if he picked up the 2020 model, even if the 2020 model might enable new ways of actually using it (or improve old ways of using it).

So while there is evolution and development going on, we're not replacing the hammer metaphor as it were.

The WIMP model has also seen evolution and refinement, but it's still recognisable as the same model. I think the analogy holds.


It may be splitting hairs, but I think certain changes to hammers must qualify as invention, certainly, including the crosshatch innovation, the material science involved for both fiberglass and titanium handles, and the improved weight distribution. It's not clear to me what would be considered a complete reimagining of the hammer, as a hammer is such a broad category of tool. Is a mallet a hammer? When I smack something with the backside of an impact driver, is it a hammer? I sure use wrenches as hammers occasionally.

So, what's the line between inventing and refining?


A slide hammer, a dead blow hammer, a flooring hammer, a nail gun, a staple gun, liquid nails/glue, screws and a screw driver (powered or not), a jackhammer, a power chisel.

If the idea behind a hammer is to use the momentum of a relatively large mass to drive a relatively small mass into a material, then the idea of a piece of steel on a handle is just the simplest thing you can manufacture as admittedly a versatile one but not necessarily the best one. If your task as the user is to join two materials together then hammer and nail won’t necessarily even look like hammer and nail (glue, screws). If the goal is to separate material like you might with a chisel, depending on the material you might not be using a manual hammer but something that looks very different, like a saw, a file, a jackhammer, etc.

What the person who mentioned the evolution of framing hammers is pointing out is refinement of the hammer as it is. Creating a tool better suited to the user’s task is closer to what TFA is about.


The marketing terms here are continuous innovation (that doesn't change user behavior) vs discontinuous innovation (that changes user behavior).


I just want to say thank you for posting this. I had no idea about the evolution of hammers, and this little bit of depth from an unfamiliar domain brightened my day :)


Right but a hammer is still a hammer right?

It's not like we're bashing nails with bricks and calling it an MVP replacement to the hammer.

What you described isn't innovation, it's iteration.


They're actually not that different. Specialization notwithstanding, it's a handle with a perpendicular weight at the end. 40% down this page is one from 20,000 years ago, instantly recognizable as a hammer. http://oldeuropeanculture.blogspot.com/2015/12/baba-hammer-a...


And furthermore, most nails aren't driven with a hammer at all, since the invention of the nail gun.


“Longer this, shorter that, better materials” is refinement, not “substantial redesign”.

The point is the essential form and function has remained the same for thousands of years. Knives, forks, spoons etc. are still knives, forks, spoons etc.


Thanks. I feel I know everything about hammers now.


The hammer analogy has its limitations here. When the task is simple (hitting a nail), a simple, good design lasts. When the tasks are complicated, depend on context, and are tied to human understanding, knowledge or training in using machines, then design can benefit from both refinement and radical, creative thinking. I also don’t think that just because we have a good design, even for a hammer, we should stop people from challenging norms and thinking in bold ways. Here’s a contrasting example to hammers: chairs. The objective is simple, to sit on it. Design innovations in how a chair can be structured continue to this day. There have been interpretations of chairs that challenge the very notion that chairs need legs.


And yet being a shorter person I still have to hunt far and wide for a chair that doesn't leave my feet dangling or cut into my thighs. Lately I've taken to cutting things down or upcycling to get a reasonable fit.


I consider myself one of those people who would comment about the design of the hammer being stuck, and more people should - that is how we innovate. The real world is always changing and evolving with time, and so are technologies, be it the UI that enabled the desktop GUI paradigm in the '90s or the Alexa and Siri of today, which can help us do a lot of those tasks just by voice command. We certainly need to move on and innovate the user interface, for the desktop GUI and for other devices.


What is the new proposal for a hammer, may I ask?


Instead of whamming in a pointy cylinder, which stress and vibration might work back out against the force of our whamming, we TWIST in a similar bit of hardware, only we wrap an inclined plane around the cylinder so that it cuts as we twist, making a walled channel that resists working out backwards under vibration.


Witty, but nails and screws serve different use-cases. Nails have better shear properties, for example.


Pneumatic press, nail gun, screwdriver, impact wrench - it seems like these could work, while maybe not being direct derivatives. Some things persist and are stable, as others have noted. A cool thing about innovation is that the ideas are often things people didn't know they wanted until they see them.


These are not improvements to the hammer design, but completely different things. Screws do not play well with deformation, but let’s ignore that: you’ll probably have both a hammer and one of these things in your toolbox, and you’ll miss the hammer if it’s not there. It is a general-purpose tool for hitting anything, not only nails. A better design for the hammer itself could include a claw or new hitting-surface geometry, a handle shock absorber and so on, which have been done successfully in great variety, despite the GP thinking the design is “stuck”. First, it’s not stuck; we just waited for better materials, more advanced tasks, etc. Second, your list doesn’t have much to do with hitting something with inertia-accumulated force.

Anyway, my point is that if you change all hammers in the world overnight (like software does), you had better have done a good job of century-testing your changes in all situations. If your reasoning is just “it gets old”, well, this site’s rules do not allow me to express what I think of that.


But can’t you just question the need for a hammer in the first place? You don’t need one if you don’t have nails. You don’t need nails if you find a better way to fasten things. That’s the job to be done. So instead of a better hammer, let’s get rid of the nails. And with them goes the entire industry.


But why? Hammer and nail are essential, cheap and natural for crafting, and honestly I do programming in a somewhat handcrafted way, i.e. I don’t need sophisticated stock market frameworks for e.g. sending a request for quotelevel2. And many people seem to do the same; that’s why deprecating ‘request’ instantly gave birth/popularity to ‘got’. There is no reason to complicate simple, battle-tested things until there really, really is a big reason. There is no such thing as “new” until your task is new in some way. Using new things for the same old purpose is called fast fashion.


Ok, let's do this imaginative exercise. A hammer looks simple, but actually, if you're a few inches off while slamming it down, you can seriously injure yourself if you're holding the nail with your other hand. I'm sure we have all experienced that. How about a sensor that always guides the motion to the right location? It could be done with sensors that feel your muscle twitches and angle, then recalibrate using a camera to the right angle and nail it in (pun intended). Another idea is a Swiss army knife: something that combines hammer, screwdriver and wrench into one design, offsetting the need to buy separate tools.


Or you could just put a magnet and a notch on the top of the hammer[1].

[1] https://duckduckgo.com/?t=ffsb&q=hammer+nail+magnet&iax=imag...


I've never yet managed to use that thing without pinging the nail across the room.

Which does explain why OSHA says you have to wear safety glasses when nailing.


This is a parody, right?


Many framing hammers have a nail set on top, which is a T-shaped depression with a magnet in it to hold a nail. Set the nail on top, swing and set the nail in the wood up to half depth, then hit it two more times to finish nailing.


Idk what professional woodworkers do today, but I just hit very lightly the first few times (bait? non-native speaker) and then the nail holds by its “spearhead”, so you can move your hand away from the danger zone. Or use pliers. I agree with the commenter above: hitting your fingers is frankly a noob’s issue that is completely avoidable without rocket tech.


> where Alexa, Siri can help us do lot of those tasks just by commanding over voice

How "many of these tasks" can they reliably help you do? They screw up even the simplest tasks like setting a timer.


Even if that were true (which it doesn't appear to be... see other comments) I'm not sure that would apply to a field as broad, and as new, as software.

Software does radically different things, things as different as, say, hammering and opening cans. It is difficult for me to believe that the WIMP interface as we know it is actually optimal for all those different software tasks.

I mean, sure, you could probably open a can with a claw hammer, if you used some care, and you might be even able to drive a nail with a can opener.

You wouldn't want to, though.

Note that we still drive cars by using steering wheels and foot pedals. We haven't gone to some "click on the menu item" interface, even though such an interface could easily be written for many modern cars.

Put me in the camp that believes that UIs are stuck in a rut, and need to be fundamentally rethought.

Raskin's humane interface had some interesting ideas, though it does not seem to have caught on.


On the other hand, sometimes it's just that people get so used to something, they don't take a close look at how it could be improved. Bows stayed much the same for centuries until the compound bow appeared in the 1960s.


It’s sad that things got so interconnected that I expect hammers to require security updates any time now. I hope there will be a counter-trend. It is feature bloat scaled to the extreme, at industry level.


Nailgun.


I'm not of the opinion that we've found the "best" user interface. I think the biggest hurdle is that "best" is so different for different people. One part is familiarity; as many power users know, your setup would be utterly useless to an unfamiliar person. I guess the current paradigm is a local maximum of the convergence of discoverability and efficiency that still works across our diverse device specifications (display sizes, input devices and performance).


Just look at browsers — no more status bar, title bar reused for tabs or controls, hidden menu bar.

The web does not value menus and window manipulation either. I don't get it either: a tiling window manager optimizes geometry, and multiple desktops are available by shortcut.

The shell is a manual mode of an automation tool. It stores history; I can easily recall a previous command or combine several commands into a new one.


If you read the article, much of its criticism of the state of the art at the time (mid-90s) has since been addressed, and to great benefit. To take just one example, the use of natural language via voice commands lets the user discover new commands beyond what can be expressed with a visual UI - and there are many more examples.


The discoverability of voice commands is terrible.

I have no idea how to find out what I can do with Alexa, or which particular magic phrasing Alexa will understand.

My personal biggest issue, as means of illustration, is that I have no idea how to play the latest episode of something on the BBC app, rather than continuing from where I left off.

I'm sure there is a way, but none of my "natural language" attempts are recognised, and I can't find any way to find out what the true commands might be. Given that I mostly listen to news shows while doing something else (and hence rarely finish a particular episode) this makes the whole system near useless.

I may be missing something basic, or if you have any tips on the discoverability of Alexa commands, I'd be very grateful. But in my experience, the discoverability of voice commands is the worst I've ever come across; at least with CLI you have man pages, and with a GUI, even in the worst case of no documentation and obscure icons, there are buttons to explore through trial and error.


It's a problem that will be solved better and better as technology improves. GPT-3 is holding a lot of promise in taking input from one domain (language) and generating an outcome in a different domain (commands).

Not sure how bad/good Alexa is, but everyone (myself included) in my household is talking to Google and Siri all the time to do tasks that would have taken multiple clicks on a screen and a bunch of typing. A huge reduction in friction. The state of the art isn't perfect, but as it improves, it's undoubtedly taking UX in a direction that is popular and desirable to consumers.


Does anyone know of any interesting attempts at alternative models to WIMP? Every time I think about it, WIMP just makes more sense to me (productivity-wise); it's really hard to think outside the box on this one.


Instead of pointer, you can do keyboard-everything, which some UX enhancements go for, like the browser extension Tridactyl.

If you look at a lot of interfaces that are built for a specialized power user (e.g. cashiers), they avoid pointers and have keys for everything. Also, AutoCAD, the last I used it, looked to be centered around command-line primacy.


Sadly, POS terminals these days seem to be going in the direction of high-latency touch interfaces. Those old curses-based terminals were so fast to get things done in.

Magit[1] is another modern example of a TUI done right: discoverable, good dwim[2] inference that doesn't get in the way of experts, plus an escape hatch for typing out the exact git commands for those 5% usecases.

[1] https://magit.vc/

[2] do what I mean


The research done for the original Macintosh UI showed that keyboard users aren’t actually faster than mouse users, but think they are because they lose track of time while concentrating.


Do you have a link? I suspect there are some asterisks there. I used to operate a photo minilab and could process a roll's worth of photos in 1-2 minutes. That's 4 seconds at the outside to evaluate a photo, make brightness and color corrections, and move to the next photo. No way I'd be able to sustain the same rate by having to mouse around and click at least four different targets, bouncing from brightness to magenta/green to blue/yellow, back to magenta/green, to cyan/red, before giving brightness a final tweak. Fitts's law[1] is death to speed for all sorts of workflows.

I'm not saying that keyboard-based operation is superior in all cases, but a good keyboard-centric interface can eliminate the need to acquire a target (e.g. menu/toolbar item) for the most common operations because there's a hotkey. Well-understood operations can go almost at the speed of thought (either the operator's or the machine's).

[1] https://en.wikipedia.org/wiki/Fitts%27s_law
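To put rough numbers on that: Fitts's law says pointing time grows with the log of distance over target width, so four mouse acquisitions per photo add up fast. A back-of-the-envelope sketch in Python, with the a and b constants picked purely for illustration:

  import math

  def fitts_time(distance_px, width_px, a=0.1, b=0.15):
      # Shannon formulation: MT = a + b * log2(D/W + 1)
      # a and b are device/user dependent; these values are illustrative only
      return a + b * math.log2(distance_px / width_px + 1)

  # e.g. four 24px-wide controls roughly 400px apart, pointing time alone
  print(f"{4 * fitts_time(400, 24):.2f} s per photo")  # ~2.9 s, before any clicks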



Thanks for the link. Unfortunately without any details on the experiment design it's hard for us to get anything out of it. What was the task, who were the participants, what was their familiarity with the software they were being tested on?

I can say that this talk of using the keyboard being so fascinating that it takes up significant mental resources does not represent my experience using, and watching others use, keyboard-centric interfaces. When I use magit, or when I was operating the minilab I mentioned above, I don't have to think about what key to press to do the thing I want. I am in fact "so disengaged [with the mechanics of manipulating the interface] that [I] have been able to continue thinking about the task they are trying to accomplish". Competitive StarCraft is another example that illustrates the same point without relying on personal anecdote.


The late Jef Raskin's Humane Interface had some interesting ideas, though it does not seem to have caught on.

It might be possible to do a smoother and more effective version now that we have CSS3 and all these cool transformations available.


A command-based interface is one alternative, either a text box or voice control (Siri, Cortana, Alexa, Google Assistant, etc.). But there's always something you sacrifice; in that case I think it's overview, especially when multitasking. Another is full-screen applications, pretty much what we have on phones and tablets. Accessible VR/AR might open up new models.


Speech, Neural lace, AR or VR.

I imagine the next revolution in UI will be because our computers change form and present in a completely different way, for example a virtual assistant that lives in the cloud and talks to you via AR visualisations and plain old speech.

Revolutions are not usually a reworking of the dominant mode but a displacement to another medium - e.g. the iPhone replacing computers for many people.


They are local maxima for normal people. No reason to believe better global maxima don’t exist, or that they can’t be improved to work better for more people (e.g., better accessibility features).


> Alternately, you can say that UX design stabilized on a known good pattern.

Or maybe known-good-enough: We evolved this specific design through a path-dependent fashion, so it could have been different had other designs survived, but nobody else has come up with a new UX paradigm which is sufficiently better to displace it yet.

Various kinds of large dinosaurs were dominant for millions of years, and birds still exist.


“There is no such thing as a dysfunctional system, because every system is perfectly aligned to achieve the results it gets” - Ronald Heifetz


I agree, these new metaphors survived the test of time, probably because of their flexibility and ability to adapt to user requirements.

Flat design is kind of an extreme version of this. It's easy to build and change when needed.


Not good, just “good enough”.

UX, like evolution, once it goes down one particular path, tends to get stuck there, fiddling with the details at best. Radical innovation becomes really hard to effect: in evolution’s case because any new feature can only extend/adapt what is already there; in UI’s case because users tend to reject anything that doesn’t fit into what they already know.

It’s the distinction between stability and stagnancy. Stability is good in that it’s predictable; its benefit vs cost ratio is known. Stagnancy is not so hot: that ratio cannot (or will not) improve. WIMP is both stable and stagnant; trapped by its own early success with no obvious path forward.

.

Very relevant: after an early 8-bit dalliance I cut my adult teeth on Macs. Some of WIMP’s productivity gains were significant, but in other aspects it was just the same (or more!) drudge work in a cutesier skin. It wasn’t until I taught myself automation (via frustration and AppleScript) that I really put a decent dent in the latter.

And these were automations that built on my existing understanding of WIMP applications (unlike, say, the *nix CLI, which ignores all that knowledge and invents a whole new unrelated world entirely from scratch). All the Models were exactly the same; all my knowledge of how to manipulate my data in those apps was fully transferable, not to mention all my existing documents. The only difference was the View-Controller I was using: RPC vs GUI. And whenever I got to a point in my workflow where it was easier/necessary to do something manually, I could freely switch back and forth between those two UIs.

Achieving 10x productivity gains over WIMP on frequent repetitive tasks is embarrassingly trivial* with even modest automations. The hard part is creating an automation UX that’s efficient and accessible to the large majority of less/non-technical users (AppleScript failed, but at least it tried).

.

When will we see another attempt? Dog knows. Voice tech like Siri is obviously trying, but is starting from the hardest end of the problem and trying to work back from there.

I believe there’s much quicker, easier pickings to be had by revisiting the AppleScript strategy—“server” applications exposing multiple View-Controllers for different interaction modes, and a really simple, textual “client” command language along the lines of Papert’s Logo (which 8 year-olds could learn how to use and compose), combined with modern auto-suggest, auto-correct, auto-complete to provide the transparency and discoverability that traditional CLIs fail so hard at.

The written word has 10,000 years of learning and practice behind it. And the most powerful word in the world is the word that expresses exactly what you want to say, whenever you want to say it. If that’s not an opportunity for some smart young coders with a desire to make a better world for all, I don’t know what is. You just gotta know history is all.
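Purely to make the shape of that idea concrete, here is a toy sketch in Python (not anything Apple or anyone else shipped) of a tiny verb-first command language with prefix-based suggestions; every command name in it is invented:

  # Toy command interpreter with naive auto-suggest; all commands are made up.
  COMMANDS = {
      "rename": lambda *args: print("renaming", *args),
      "resize": lambda *args: print("resizing", *args),
      "reveal": lambda *args: print("revealing", *args),
      "send":   lambda *args: print("sending", *args),
  }

  def suggest(prefix):
      """Command names starting with whatever the user has typed so far."""
      return [name for name in COMMANDS if name.startswith(prefix)]

  def run(line):
      verb, *args = line.split()
      if verb in COMMANDS:
          COMMANDS[verb](*args)
      else:
          print(f"Unknown '{verb}'. Did you mean: {', '.join(suggest(verb[:2]))}?")

  run("rename vacation-photos")   # -> renaming vacation-photos
  run("rez thumbnail")            # -> suggests rename, resize, reveal

The real work, of course, is in the application-side vocabulary and the quality of the suggestions, not the interpreter loop.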

--

“It’s a curious thing about our industry: not only do we not learn from our mistakes, we also don’t learn from our successes.” – Keith Braithwaite


Apple's other attempt right now (on iOS) is Shortcuts, which is graphical-programming-y. But it's disadvantaged a bit because it started life (as a third-party app Workflow) outside the system, unlike Applescript.


I didn’t go into Shortcuts as it has its own set of flaws: poor granularity, poor composability, excessively complex and expensive to extend. And it’s still not quite clear how Apple mean to position it so that it connects to users’ aspirations and needs. Within Siri? within Apps? In between? All fixable, but ultimately depends on Apple’s priorities and investment, not to mention how good a handle they have on the problem themselves.

What Shortcuts does undeniably have is youth, looks, and an established following; and never underestimate the value of those. AppleScript may be built on a better technical foundation, but that don’t mean squat if it can’t put bums on seats. And the bottom fell out of the AppleScript market a decade ago.

However, being an outside product is absolutely no disadvantage. I’ll rate a passionate team of third-party devs with a vision over in-house chair-warmers going through vague motions with zero direction or objective. Being within Apple can be a huge advantage in that it offers prime positioning within the OS itself; but that’s of no use if you’ve got no clue how to deliver a desirable product and sell it to customers in the first place (<cough>Soghoian</cough>).

Whatever the strengths and weaknesses of their product, the Shortcuts team cut their teeth and proved themselves out in the real world. I don’t doubt Apple bought WorkflowHQ as much to get those people as their product. As change of blood goes that was badly overdue.


This comment is full of wisdom


My favourite part of this article (disclosure: I've made almost this exact comment before elsewhere):

> The see-and-point principle states that users interact with the computer by pointing at the objects they can see on the screen. It's as if we have thrown away a million years of evolution, lost our facility with expressive language, and been reduced to pointing at objects in the immediate environment. Mouse buttons and modifier keys give us a vocabulary equivalent to a few different grunts. We have lost all the power of language, and can no longer talk about objects that are not immediately visible (all files more than one week old), objects that don't exist yet (future messages from my boss), or unknown objects (any guides to restaurants in Boston).

Does this mean a commandline is always better, because it's more expressive? No, it's a trade-off between this and ease of learning:

> The GUIs of contemporary applications are generally well designed for ease of learning, but there often is a trade-off between ease of learning on one hand, and ease of use, power, and flexibility on the other hand. Although you could imagine a society where language was easy to learn because people communicated by pointing to words and icons on large menus they carried about, humans have instead chosen to invest many years in mastering a rich and complex language.

There's a neat coincidence that illustrates this tradeoff. While this article says:

> If we want to order food in a country where we don't know the language at all, we're forced to go into the kitchen and use a see-and-point interface. With a little understanding of the language, we can point at menus to select our dinner from the dining room. But language allows us to discuss exactly what we would like to eat with the waiter or chef.

Joel Spolsky's User Interface Design for Programmers (an excellent book, looks like it's available online in full on his blog: https://www.joelonsoftware.com/2001/10/24/user-interface-des...) says:

> Using a command-line interface is like having to learn the complete Korean language just to order food in the Seoul branch of McDonalds. Using a menu-based interface is like being able to point to the food you want and grunt and nod your head: it conveys the same information with no learning curve.

I'm pretty sure it's a coincidence that the same example of a restaurant in a foreign country is used, but despite apparent contradiction, they make the same point: The text interface requires more time to learn, in return for being more expressive and powerful. Whether that's worth it to you depends on how long you intend to live in that environment, and how rich an experience you'd like to have.


> > Using a command-line interface is like having to learn the complete Korean language just to order food in the Seoul branch of McDonalds. Using a menu-based interface is like being able to point to the food you want and grunt and nod your head: it conveys the same information with no learning curve.

That analogy makes one significant assumption: that you are trying to order food from McDonalds, which is a thing you are used to doing at place you are already familiar with. Of course it's more difficult to learn Korean when you just want to order some American food, but what happens when you try to order Korean food? What if you are trying to do something more complicated than order food? There are cases where learning Korean would be clearly beneficial.

The limitations of GUIs stand out most in technical tools. When the thing a user is trying to do is already complicated, it doesn't really help to give them a simplified menu system: that just provides a frustrating amount of options.

There has been a significant prejudice in UI/UX design to optimize for approachability, even when expressiveness is a better target.

Ordering food is a great use case for "point and nod" UX. The pointing and nodding only needs to occur a small number of times. There is much more need for approachability than expressiveness. To contrast, editing text is a terrible use case. A point and nod system would involve a fatiguing amount of pointing and nodding. There are reasons that tools like Vim and Emacs are widely popular, even with their steep learning curves.


There are ways to do this even with GUIs, the prime example being "expert" software such as 3D graphics tools like Maya or Blender, or CAD packages, although CAD's origins were command-oriented.


Blender does it especially well. The only thing it's missing when compared to command-line utilities is composability with other tools like pipes. Blender has tons of features, and it has to because of its monolithic design.

Then again, collaborative (open source) design is the workaround for monolithic software, and it's worked exceptionally well for Linux and Blender.


Absolutely; to tweak the analogy a bit, you wouldn't have much luck with the point-and-nod "interface" if you were trying to do something more abstract or complicated, like getting a loan or hiring a gardener.


GUIs are also fundamentally more expressive for some types of work, e.g., Photoshop.


For output, graphics can express a lot of complexity quickly. Consider a text vs. graphical adventure, for example.


Jef Raskin—who led the Macintosh project for the first year (though apparently the design was changed further later on)—worked afterwards on a concept that incorporated typed-in natural-language commands. In his Archy, commands were part of an unorthodox GUI system.

https://en.wikipedia.org/wiki/Archy

Archy clearly hails from the eighties when it was still imaginable to change users' workflow with desktop computers—that users would type a command while holding a special key (also apparently the author wasn't a touch typist).

The commands feature, isolated, was later adapted by Jef's son Aza Raskin, first as Enso (if I'm not mistaken), and then as a Firefox extension Ubiquity.

https://en.wikipedia.org/wiki/Ubiquity_(Firefox)


This post introduced me to Archy, which is exciting to discover. The zoomable interface of Archy addresses something fundamental that always felt off about current desktop GUIs: the inability to get a meaningful overview that includes both folder structure and file contents, and the inability to switch from higher to lower levels of perspective in a continuous way. It seems like Archy made the directory structure appear like a topographical landscape. Current desktop GUIs instead feel like navigating an endless Russian doll of drawers.


‘ZUI’ is a general approach, and there were a number of attempts at it. I've seen a bunch of ‘zoomable’ file managers here and there—however, since that's not my cup of tea, I don't actually remember any of them. But they exist, so I guess some googling might point you to them.

Also I've heard about some ‘pasteboard’ service as an alternative to an internal corporate wiki: you likewise zoom around and slap content anywhere on the infinite canvas. The trick, of course, is to keep the content organized thematically and tidy it up so it can be found later. Again, chances are slim that I will remember the name of the service now.

‘Rizzoma’, which is a clone of the late Google Wave, may also be seen as ‘zoomable’.


In case you aren't aware Eagle Mode [0] (a self-described advanced zoomable interface) exists.

[0] http://eaglemode.sourceforge.net/screenshots.html


This inspired some of our design principles in the CLI Guidelines: https://clig.dev/

The CLI is the opposite of the Mac in a lot of ways -- reality instead of metaphors, remember and type instead of see and point, make it a conversation, and so on.


> Because people don’t understand what computing is about, they think they have it in the iPhone, and that illusion is as bad as the illusion that ‘Guitar Hero’ is the same as a real guitar.

For me, this was the signal that this is one of those "if only people used computers like I personally think they should" pieces. I know lots of people who play Guitar Hero, lots of people who play real guitars, and some who play both (me for one, mediocrely in both cases). I have met zero people who think Guitar Hero = guitar. They are different things that serve different needs.

I use a CLI to compute all day, every day. Can't imagine using computers without one. I have no empirical proof of this, but I would be willing to wager the deed to my house that the vast majority of users, having been shown what a CLI is, the benefits of it, and how to use it, would choose to never use a CLI again. It's not what they want, it's what other people want, and it's a myopic view of what computing is.


I believe Alan Kay is saying that while people know Guitar Hero isn't real guitar, they don't know that using an iPhone isn't "what computing is about". Here's the full quote from him:

  If people could understand what computing was about, the iPhone would not be a bad thing. But because people don’t understand what computing is about, they think they have it in the iPhone, and that illusion is as bad as the illusion that Guitar Hero is the same as a real guitar. That’s the simple long and the short of it.

  What’s interesting is, the computational ability of an iPhone is far beyond what we need to do good computing. What you wind up with is something that has enough stuff on it and is connected to enough stuff, so it seems like the entire thing.
https://www.fastcompany.com/40435064/what-alan-kay-thinks-ab...


This led me to Kay's full quote on ad blocking:

"A combination of this 'carry anywhere' device and a global information utility such as the ARPA network or two-way cable TV, will bring the libraries and schools (not to mention stores and billboards) of the world to the home. One can imagine one of the first programs an owner will write is a filter to eliminate advertising!"

[Kay72] "A Personal Computer for Children of All Ages"


> Can't imagine using computers without one. I have no empirical proof of this, but I would be willing to wager the deed to my house that the vast majority of users, having been shown what a CLI is, the benefits of it, and how to use it, would choose to never use a CLI again.

As stated, I agree.

But "ugliness" and unfamiliarity are big confounds here.

The fairer experiment would be something like: For tasks where a CLI app is a better fit than a GUI, would a person trained in the CLI and forced to use it for, say, a day (however long it took to become comfortable and see productivity benefits) then decide to keep using it, or go back to a GUI?

Imagine an office worker and some cumbersome workflow with excel, microsoft word, and GUI folders.


I think your question has been answered to some extent. As I understand it, there is widespread usage of WeChat apps. Users of WeChat use the platform for many tasks other than text messaging. In essence, WeChat is just a spruced-up CLI.


> "if only people used computers like I personally think they should"

I think that if one could take a sufficiently aggregated superset of that from everyone on HN, one might very well come up with a philosophy of computing that actually is quite powerful.

In general I think the idea is that the computer is supposed to be a tool, in addition to a toy; with the awareness that playing with tools is often just as much fun as any toy could be, and conversely that a toy is a powerful way to learn.

> "Because people don’t understand what computing is about, they think they have it in the iPhone"

I agree this is myopic. I can use my iPhone to do a great many computing-related tasks; and when it is insufficient, it is certainly sufficient to reach out to a bigger, more powerful machine where I can do such things. As long as we retain the powerful (and not yet enshrined) freedom to connect things together over the Internet, your "computer" is not just whatever device you hold in your hands.


I don't think the CLI is any less of a metaphor than the desktop is. It's just a different metaphor.

It's not like shell pipelines are really pipes between two processes, or a filesystem is the same as a physical system of files.


A pipe(2) in Unix is actually a piece of memory arranged so that it can be used as a FIFO, written to and read from. There really is a physical object corresponding to the pipe, and the "|" refers directly, if tersely, to that object.
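You can poke at that object directly. A quick sketch in Python on a Unix-like system (os.pipe is a thin wrapper over the same pipe(2) call):

  import os

  # pipe(2) hands back two file descriptors onto a kernel FIFO buffer
  read_fd, write_fd = os.pipe()
  os.write(write_fd, b"hello through the pipe\n")
  os.close(write_fd)                      # EOF for the reader
  print(os.read(read_fd, 1024).decode())  # -> hello through the pipe
  os.close(read_fd)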

A filesystem is fundamentally a way to organize blocks of data on a storage medium. It consists of an actual physical medium with various attributes, which is used by a rule-driven system ("the filesystem") to decide where to put data (and conversely where to find it). It doesn't actually work in the same way as a paper filing cabinet, but in most operational senses, the two things are far closer together than they are different.

The CLI is not a metaphor - it's an abstraction. It removes details that you don't need to know about (mostly), but provides you with a way to operate directly upon the objects (concepts) known to the operating system that you are interacting with.

The classic Mac desktop described in TFA does consist of a lot of metaphors. Technically one can see this clearly in the way that the kernel of macOS isn't responsible for most of the way that desktop functions today: this is left to user-space services that create higher level objects for the user to interact with, leaving the kernel to deal with the same sorts of objects you'd describe with the CLI.

Or something like that.


> There really is a physical object corresponding to the pipe, and the "|" refers directly if tersely to that object.

By that argument, there really is a physical object corresponding to the desktop. That, too, is made up of pieces of memory.

> It consists of an actual physical medium with various attributes

Not necessarily. I mean, yes, ultimately it does, because we live in a physical reality. But then even if I just start to think about a filesystem in my head, it exists on a physical medium: my brain.

Are the trees that modern filesystem usually consist of nowadays actual trees growing in my computer, or are they metaphors? Named after their biological counterpart because they roughly look and act like them in very specific aspects? Like a desktop, or a window?

Are the "folders" or "directories" that you interact with in your shell actually those pieces of objects, or are they a metaphor?

> The CLI is not a metaphor - it's an abstraction. It removes details that you don't need to know about (mostly), but provides you with a way to operate directly upon the objects (concepts)

I don't see the distinction. The desktop provides you with a way to operate directly upon the objects/concepts of the OS. In both cases, there is usually a multitude of abstractions before you reach any actual physical object.

> the way that the kernel of macOS isn't responsible for most of the way that desktop functions today [...] leaving the kernel to deal with the same sorts of objects you'd describe with the CLI

You mean the same way the kernel is not responsible for the way the CLI functions?

Why is an open() system call (which can refer to numerous virtual, abstract, immaterial things) more "real" when the piece of memory it has been passed has been collected through a multitude of abstractions and maybe originated from a sequence of keystrokes on a keyboard via USB, than if it had been collected through a multitude of abstractions and maybe originated from a sequence of mouse movements via USB?


> There really is a physical object corresponding to the pipe

If you open up your computer and get out your microscope, you're not going to find the pipe.

Computers are metaphors on top of metaphors. There's nothing wrong with that but you have to go way way down the abstraction tree before you are dealing with anything "physical"

> The CLI is not a metaphor - it's an abstraction.

I'm unconvinced there is a difference (other than abstractions being hardcore)


As I understand it, the difference is that an abstraction is a sign for a computational object. A metaphor is a sign for something else, outside of the computer, and only indirectly related to the computational object it represents.


"An abstraction is a sign for a computational object."

Writing that on my wall.

Over the past year, I have revisited the SICP [0] and the audio recording of a one-week short course that Hal Abelson and Gerry Sussman taught for HP [1]. You nailed it.

[0]: "The Structure and Interpretation of Computer Programs", https://en.m.wikipedia.org/wiki/Structure_and_Interpretation...

(see also https://github.com/sarabander/sicp )

[1]: http://cabezal.com/sicp/


Doesn't that just punt the question to, "what is a computational object"?

The fundamental abstraction/metaphor of the CLI is the file (especially in unix), where for a graphical system it is the "window". But the window has always been a weak metaphor. Windows are nothing like the thing outside the computer. Nobody understands them to be metaphors to physical windows. They are much closer to the computational object they represent (an area of i/o) than they are to a real window. Files on the other hand, are like their physical non computer counterparts. They are in fact much closer to real files than they are to a filesystem that's divided up into sectors or whatnot and distributed in the disk.

Does that mean from this definition, cli's are the metaphors and window systems are the abstraction?


A window is full of physical metaphors, like buttons, draggable handles/bars, tabs/paging.

But yes, it is also an abstraction of a set of related functions that manipulate or show data. It is even necessarily an abstraction of the computational objects it represents. But the sign(s) of the window points to physical/metaphorical objects.

Now the name "file" is also a metaphor. It represents (abstracts) a block of physical memory. But "file" is (or was) a sign for a physical thing made out of paper.

The metaphor of a file is on one hand useful, as it helps to understand physical memory as a set of objects we relate to in the outside world.

However it is also misleading: A real world file is typically immutable, to a high practical degree at least. We usually don't change files outside of correcting mistakes. We just add them and put a date. The file is first in "working" mode, then it is "done" quasi forever. To achieve the same with the computational object we need to impose constraints and/or discipline.


Thank you! Put very precisely and, IMO, accurately.


Abstractions aren’t illusory, they are definitions that ideally make things easier to reason about.

A pipe in the Linux kernel is a real thing. The name is metaphorical because it’s supposed to evoke the image of things flowing through it but that’s where it ends. A Linux pipe makes no effort to pretend that it behaves like a physical pipe. But it does abstract the implementation details about how data is sent to and retrieved from it.

An email program where the user is shown sheets of paper and envelopes is an illusion. The underlying implementation now has to modify its behavior to fit the physical properties of paper and envelopes to some degree. Letters only have one destination, CCing and BCCing now means copying and sending bundles.


So do emails themselves. It's in the name: it's "electronic mail". The underlying implementation has modified its behavior to fit the physical properties of how letters usually work, to some degree. You think an email is something different from a text message, or a Slack message, or even a website or a video, when it is information that reached you by electrons moving a certain way in a wire. Or multiple wires. Or electromagnetic fields in space. Or photons in strands of glass or plastic.

It's all abstract, it's all metaphors.


> If you open up your computer and get out your microscope, you're not going to find the pipe.

You should be able to, though. The pipe has a (OS) memory location, which, after a bunch of redirections, is an absolute location in memory, which is a bunch of capacitors and transistors. Now, you could argue that those redirections are akin to metaphors, though.


Or I could argue that I could ultimately find the capacitors and transistors for anything else we're talking about here.

Except if they're just transistors because the representation of the pipe is sitting in static RAM for some reason, or in the fluctuation of a magnetic field. Or all of the above at the same time (caches, page files). Which one is the "real", non-metaphorical representation of the pipe? The one in the cache because it's being worked on by the CPU? The one in memory because its lifetime is longer?


> Computers are metaphors on top of metaphors

This may be true of our perception of “real life” too, of course, in which case computer interfaces are not something different, just an extra couple of layers


Your first paragraph is literally an argument that pipes are, in fact, a metaphor.


Agreed, though perhaps you could say it's more "real" than the GUI because you're dealing with lower level concepts.

We touch on metaphors in this section: https://clig.dev/#conversation-as-the-norm


I haven’t read your piece yet, but I searched the page and didn’t find any mention of DEC VMS’s DCL, or Symbolics Genera’s Dynamic Listener.

If you’re talking about how to design a command line for usability, you really need to become familiar with them, as usability and extensibility were explicit design points and they both work very differently than (and much better than) even modern UNIX.

They should have come up in whatever literature search you did prior to writing your piece. At least you can easily investigate both of them via preserved documentation and emulators, if you don’t have access to hardware on which to run them: There’s an x86-64 build of OpenGenera floating around you can run on Linux, and you can run VMS in SIMH. And Kalman Reti did a great demo of Genera on YouTube.


I haven't read the piece either and I also don't know what any of those acronyms are all about but how is "I didn't read this but I CTRL+F'd for terms I believe to be important and they weren't there so what are you even doing" a useful comment to make?


The comment is “these concepts and artifacts are table stakes for any discussion on this matter, and they seem to be missing”, which in my view is a valuable contribution.


Seemed like a fine comment. I now know there are people who think those mentioned technologies are important enough to require mention in this sort of write-up. I’m now curious, and will look them up.


That “CLI Guidelines” site seems to be written by people who are enamored with terrible shells like Bash.


Good web design and content. This is useful. It's mostly basics but even those can be very helpful to have on a single page. Thanks for the resource!


I think that using a bookcase that can be spatially ordered (in 2 dimensions) would be a really interesting concept for an operating system.

Applications and files alike, filed on the same bookshelf. A very easy metaphor that makes it easy to explain how computers work, but also locate files that you use often (perhaps by size, shape, colour and location) without using the part of your brain that processes language.

Humans are spatial creatures, so I'd like to think that perhaps everything being in lists doesn't make sense and that's why we hate using them to find something.

The traditional notion of the Desktop is a bit like this, but the presentation is messy. There's no nice way to order your desktop, and everything is the same size and shape. Despite this, many people work solely from their Desktop.

Edit: It would be very interesting to have a check in/check out system where you can drag files on the shelf to your "working box", or check them out, or whatever. Basically the equivalent of your desktop. This gives you fast, easy access to files from a variety of locations for whatever job you're doing. When you're done working with them, you check them back in, and poof, they go back to wherever you got them from. This is an awesome physical metaphor for a library where the clerk does all the work of returning the books for you. This box could also give you a good metaphor for moving files, and for cut/copy/paste: move to box -> put back on the shelf elsewhere, or move to box -> duplicate -> put the copies back on the shelf and send the originals back.
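If anyone wants to play with the idea, here's a rough Python sketch of the bookkeeping such a working box might need (the WorkingBox name and layout are made up purely for illustration): remember where each file came from on check-out, and move it back on check-in.

    import shutil
    from pathlib import Path

    class WorkingBox:
        """A folder that remembers where checked-out files came from."""

        def __init__(self, box_dir):
            self.box = Path(box_dir).expanduser()
            self.box.mkdir(parents=True, exist_ok=True)
            self.origins = {}  # file name -> original parent directory

        def check_out(self, file_path):
            src = Path(file_path).expanduser()
            self.origins[src.name] = src.parent
            shutil.move(str(src), str(self.box / src.name))

        def check_in(self, name):
            # "Poof": the clerk returns the book to its shelf.
            origin = self.origins.pop(name)
            shutil.move(str(self.box / name), str(origin / name))

    # A real version would handle name collisions, copy vs. move, etc.:
    # box = WorkingBox("~/WorkingBox")
    # box.check_out("~/Documents/report.odt")
    # box.check_in("report.odt")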


I will say that the Amazon Fire tablets (used to?) do this, and my non-technical family/friends would often come to me for help because they didn't know how to do or find what they needed.

As a side note, this caused me to throw Ubuntu on a desktop/laptop for a few of the people who needed the most support, and within a month they didn't need any help and were trying to get me to install it on their friends' machines.

Seriously, I was constantly being asked by old ladies to start an underground Ubuntu support network. They just needed a way to browse the web, upload photos to facebook, and play farmville. That was it, that's all they wanted to use it for.

If I were to do it again today I'd use ChromeOS probably.


I think it boils down to what experience a person had before and what they need to do. Anyone who has played 3D games has a better chance of having (and benefiting from) spatial and visual investigative abilities, and anyone who has worked in business knows that your files have to be sorted and arranged into hierarchies. When needs don't match experience, people want a single always-on-screen button that starts the part of the system they're familiar with.


There have been two major attempts to leverage spatiality: spatial file managers (most controversially, 2.6-era GNOME) and zooming interfaces. Neither has such a checkout by default, but it would fit both quite naturally.

I don't have much experience with the zooming stuff, but I did use Nautilus heavily then (see also Siracusa's Mac-focused discussion here: https://archive.arstechnica.com/paedia/f/finder/finder-1.htm). As soon as you got used to it, it was amazing -- accessing files felt natural in a way it just doesn't in other systems. It was both digital and leveraged our natural understanding that things are at particular places. It's no surprise people still miss it.

A few people, of course. A very large majority of people absolutely hated it, and it's widely considered a huge mistake now. Familiarity and habits win out, users hate changing paradigms and unlearning habits.


Dr. Gelernter and Freeman proposed time as a better metaphor than space almost 25 years ago. It's essentially what every social media app uses now, although the dissertation proposes better tooling for navigation.

http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf


Didn't the GNOME project try this in the mid-2000s? For me, that was peak GNOME: the 2.x series.


Yes, Gnome 2.x was very usable. Have you tried MATE? It's just Gnome 2 continued.


What does this do that the "sitting at my desk" analogy doesn't?


Funny how all those visions (like the Starfire movie in the link, but certainly similar and even older ones, such as Xerox' promo for the Star or Apple's promo for Lisa) are almost always about some unspecified middle manager working either alone from home or in a large, private office.

Imagine an open floor plan with a hundred programmers or administrators shouting code and commands at their giant, flashing 52" screens all day, having to raise their arm in some ergonomically counter-indicative way each time they need to swipe a pop-up ad away from their vision.

I know what a general purpose computer is. I know how to control it. Without a reasonable interface to do so, being completely at the mercy of some "UI visionary" (probably the kind who decided we shouldn't be able to change the default fonts in programs made for text messaging), it'll be even more useless to me than most modern UIs already make it.


"General Magic: The Most Influential Silicon Valley Company No One Has Ever Heard Of" is a 2018 documentary about General Magic, the company behind the Magic Cap OS.

You can stream the documentary for free on Kanopy (with a library card from a participating library) here:

https://www.kanopy.com/product/general-magic


On other OS designs: “They are navigationally cumbersome, asking users to go to the "other end of town" to pick up their email from the Post Office.” Made me laugh; it seems silly now, but that's a pretty creative way of describing what's going on.


Sounds like the village that Microsoft Bob's house was in.

I don't actually think the ideas of things like Bob were bad, though; I just think they were cumbersome. Using the post office example, a top-down map would solve everything and be close enough to a "start menu" that a totally fresh user might feel confident trying a computer without training wheels.


"Might as well check my Thunderbird mail on the way to open Firefox in Mozillatown!" :)


When I was a kid, I remember the Mac my family owned had several different UIs you could enter into. I think maybe the idea was that some of them were more kid-friendly? When I saw those pictures in the article it took me back to that time.


You’re probably thinking of “At Ease”! https://en.m.wikipedia.org/wiki/At_Ease


I was always intrigued by the Starfire project mentioned here. The concept video from Sun is definitely peak 90s, but some of the things inside have finally come true 20 years later.

https://www.youtube.com/watch?v=w9OKcKisUrY


And Starfire was an updated take on Apple's Knowledge Navigator (1987):

https://youtu.be/umJsITGzXd0

All of these visions feel like they stem from TMoAD. Does anyone make inspirational concept films for industry/academia/etc like these anymore?




That Consulier GTP is hot though


Take the time to read this one, especially the later section describing the Anti-Mac interface. It's incredibly prescient and basically describes the direction of modern desktop GUI + cloud + voice-controlled assistants.

The article might be a bit aggressive on suggesting that language _replace_ icons, rather than augment them - but it seems just as likely that we're still in the middle of the transition and assistants being built from the ground up without icon-based UIs is exactly what the article is predicting.


Yeah, I scoffed at first reading it, but by the end I was impressed. Definitely agree with the point that metaphors are less relevant in a time when many people know how to use computers. A virtual assistant with the power of a command line could be very useful (though you'd probably want to double-check its work before executing it).


All metaphors are learned. Given that technology tends to replace things from the manual, physical world, these metaphors make a lot of sense, as they help humans transition.

Over time new concepts emerge which are based on metaphors native to the digital space and understood by the new generations.


> All metaphors are learned.

The really good ones are learned early and are highly conserved across time and culture.


Digression risk: Unless we count human evolution as “learning” I’d disagree: some “metaphors” are surely innate. A snake is scary to babies who have not learned it’s dangerous. Symmetry is beautiful to an untrained eye. Design should be informed also by our innate traits; refrain from taking the “blank slate” idea at face value.


Evolution is one long learning game :)


I recall reading this and some related papers a while ago, and for the fun of it I tried my hand at mocking up a UI based on the ideas laid out. The basic premise was that the application menu was replaced with a "new document" menu (like Google Drive has) that would take you to the relevant place to get things done automatically. Standard apps could still be run, but where you might go "new > text document > template", you'd go "new > app > terminal", for example.

The File/Edit/View menus were also gone, replaced with a search function much like the Unity Desktop HUD, where you just type what you want rather than drill through menus and point at it. Window management would be mainly tiling, except for dialog boxes tied to the programme that made them -- though ideally the search function would handle most cases where you need a dialog box, employing a simple command parser reminiscent of Zork and the like, where you would just type out what you want done in mostly natural language and the agent responsible for the programme would automate as much as possible.

Finally, window switching and virtual desktops would be handled by an exposé similar to GNOME 3 Activities or macOS Mission Control. While I don't think it's the best approach, with a little intelligent arrangement of windows -- maximised windows move to a new workspace, the system remembers which apps are opened together on the same workspace, and the user is encouraged to keep to about three or four tiled windows per workspace just to keep clutter low -- I think it could be managed without too much effort, and some keyboard combos or trackpad gestures would go a long way towards quickly switching workspaces.
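To give a flavour of the "type what you want" part, a toy dispatcher along these lines gives the idea (all the names here are invented for the sketch); a real one would need fuzzy matching and proper argument extraction.

    # Toy command dispatcher in the spirit of a Zork-style parser: map a
    # recognisable phrase prefix to an action and pass along the rest.

    def new_text_document(rest):
        print(f"Creating text document: {rest or 'Untitled'}")

    def open_terminal(rest):
        print("Opening a terminal")

    def move_window(rest):
        print(f"Moving this window to workspace {rest}")

    COMMANDS = {
        "new text document": new_text_document,
        "new app terminal": open_terminal,
        "move this window to workspace": move_window,
    }

    def dispatch(line):
        line = line.lower().strip()
        for phrase, action in COMMANDS.items():
            if line.startswith(phrase):
                return action(line[len(phrase):].strip())
        print("Sorry, I don't know how to do that yet.")

    dispatch("new text document")
    dispatch("move this window to workspace 2")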


Oh, feel free to run with any ideas of mine if they strike a chord. I'm not competent enough to tackle making a whole DE just to satisfy my curiosity at this moment.


Most anti-Mac interfaces are actually bound to the same constraints a Mac is. We don't have good enough signal processing to do pure gaze-based control systems. If we did, it might be worth not reinventing the mouse -- that much I agree with. If we don't have gaze directed into the computer, it's move and click. If we do have gaze, we can discuss whether blinking is clicking or whether we are going to make a richer paradigm of action-directed outcome.


A somewhat historic article, but I think the comparison is off and dated too because the "Mac-ness" depends on the people's expectation of the day, which keeps changing. I tend to think that as more people are getting tech savvy they'll gradually move away from the classic physical metaphors and accept abstract concepts (e.g. "URL", "mention", "optional arguments") as natural.


> At recent user interface conferences, several speakers have lamented that the human interface is stuck. We seem to have settled on the WIMP (windows, icons, menus, pointer) model, and there is very little real innovation in interface design anymore.

Ooof, and here we are, 25 years later, with an intervening opportunity to completely redefine HCI (smartphones), and it's still basically all WIMP.


Weird, my hand computer only has icons out of those 4.


I feel many touch interfaces are still fundamentally WIMP. You can only use one, maybe two apps at a time, but they are still windows -- just tiled, not floating, and always accessed from a list, not a stack. Icons are obviously still a thing, and I think we hit the nail on the head with those, as uniquely shaped pictograms are easier to parse quickly than descriptive text. Menus are often minimal, but they still exist to the extent of "an ordered list of things to interact with", and the only thing that can really be argued we've got rid of is pointers -- but from where I see it, we've just added nine more of them.


Look again.


(1996)


Jakob Nielsen ages well. And people keep repeating the same mistakes over and over -- mistakes they could have learned to avoid in, e.g., Usability Engineering (1994).


That anti-mac table column near the bottom is eerily full of good predictions.


They seemed wooly and commonly predicted to me. The article was interesting though.



Don Norman was a respected UI analyst during his days at Apple. How far he has fallen! I lost faith in his judgment after reading the NN/g article bashing PDF:

https://www.nngroup.com/articles/pdf-unfit-for-human-consump...


22 References and like 5 pics. TFA was about UI.


> Cray-on-a-chip RISC processors



