The Finder’s GUI tax can be very expensive (robservatory.com)
190 points by hyperpape on Dec 8, 2016 | 121 comments



I wonder whether this is an example of a UI interaction that is slow on purpose, to emphasize how much work the OS is doing for you.

TaxAct and TurboTax, for example, both operate on (in the typical case) kilobyte-scale data requiring trivial math. They also make both saving the data and calculating taxes take 5+ wall-clock seconds when they actually require milliseconds and nanoseconds respectively. This is largely because (non-technical) users don't trust that the computer did the math right if the answer comes back instantly. (I also suspect there is an element of "Wait, if it is so easy to calculate my taxes, why am I paying you to do that?")


This reminds me of how the developers of a PlayStation game complained that Sony required them to add an artificial multi-second delay to their game's save routine, so the user would have enough time to read the mandatory warning not to turn off the console. Their game in fact saved so quickly that there needn't have been a warning at all.


This is really interesting. I read that warning every time I save and I never noticed the delay. For some reason, it doesn't bother me like the archive utility animations do. I guess it's because saving the game is such a ritualized action.

I wonder what other requirements Sony imposed on developers. Maybe they have their own video game-oriented human interface guidelines. Do you have a source for this information? I found plenty of technical documents and SDKs but nothing about interfaces.


Kind of related: the reason there's a "Press A to play game" screen on Xbox (and in a lot of PC ports) is that Microsoft requires a button press every x seconds, and if the game takes too long to load before the menu appears, pressing A will circumvent that requirement.


The N64 had "Press Start" years before the Xbox was even a thing.

And yes, many (most?) PC ports are of very low quality. Gone are the days of PC-first triple-A games with now-forgotten features like quick-save/load, LAN multiplayer, map editors, and official mod support. These days the ports feel like they were started two months before release and done by two coders, full of console-ish things: "don't turn off your system" messages (yeah, right), press-a-key-to-start screens (yup), save points (wtf), always-online single-player (...)


The N64 in turn took the start screen thing from the Super Famicom and Famicom before it.

I assume title screens are just that: title screens, and the developers want to show off. Maybe it's a legacy of arcade machines, though.


PC games never had title screens, until the first bad console ports came along around 2003.

Instead of a title screen, PC games used to go to the main menu directly.

Even today, some console games like GTA V don't have a title screen, but many smaller games feature a useless title screen.


My copy of Alley Cat from 1983 disagrees with you.


Tons of PC games have had title screens going way back. Ultima IV (1985) and Commander Keen (1990) spring to mind.


What about DOOM (1992)? That had a title screen of sorts.


The other reason behind "Press A to play" is that a console can have several controllers associated with it. Pressing a button at the title lets the game know which controller the user is holding.


Wouldn't it be P1/2/3/4?


Only in the case of wired controllers.


>Microsoft requires a button press every x seconds

What is the rationale behind such a requirement?

>if the game takes too long to load before the menu appears, pressing A will circumvent that requirement.

Does this mean those screens/videos that get displayed after the game starts and before the menu appears are required by Microsoft to be skippable?

If that's the case then it's good UX design in my opinion. I think those things are an unacceptable waste of the player's time; I hate them so much that I rename the files in the game's directory so it fails to show them.


Presumably so they can see how long it takes the game to handle input events, because that's a good proxy for whether the application is responsive or not. What they don't want is the game freezing, but that's harder to measure directly.


It doesn't mean they have to be skippable, just that they can't drag on too long.


I can only speak for PlayStation 2 games (I also worked on PS3 and Xbox 360 but didn't handle that part). Sony's Technical Requirements were a pain. We had to build a freaking complicated state machine to handle all save scenarios (the PS2 still used those external memory cards to save games; you had to handle the user taking one out mid-save, putting it back, etc.). We had a game rejected for having a SINGLE space between PlayStation and the registered trademark symbol.


I'm never quite sure if I actually saved in the PC port of Valkyria Chronicles. I think we've been trained since the 8-bit consoles to wait a while to save, so something feels a bit wrong when things save instantly. Undertale makes amazing use of this, though.


There was a time when the F5 key for quick-save was common in PC games.

Together with quick-load, it worked very fast: no need for info screens, it just worked in milliseconds and allowed gameplay styles completely forgotten (or never experienced) by console gamers who are accustomed to their save-point system.


Baldur's Gate's best feature. It allows you to test all variations of the game, every encounter. So much potential, so much possibility: you could just explore, rewind, explore, rewind.


One of the best, and most under-utilised, features of Half-Life, to name one of many. It made clearing a room an entertaining exercise in how few crowbar swings/bullets/rockets you could use...


Quicksave/quickload encourages bad game design. Look at games with save-anywhere abilities that include elements of chance, like Skyrim and Fallout. You can pickpocket and use charm in these games, but because you can save anywhere, does anyone ever live with the chance that these things fail?

Of course not; they just try > reload > try > reload > try > succeed, making that whole skill set and the mechanic pointless.


Not offering quick-save is bad game design. A game should be about fun; nowadays it's often more about work (boring, grindy gameplay; save points where you have to play the same passage several times again... oh, what great game design - not). With current-gen consoles there is no technical reason not to offer quick-save and quick-load. And for driving games, the quick-playback (rewind) feature introduced by Codemasters in GRID should be common by now, but sadly is not.


aka save scumming


No kidding. In one of my previous companies we explicitly added a delay in our financial webapp. We saw a significant improvement in bounce rate after that.


Hahaha, I remember coding some physics problem solvers in Visual Basic back in high school, and I added a "calculating" progress bar and messed with the timing a lot to make it still pleasant to use but seem like it was really crunching through your problem. "Crunching" is even funny and a throwback to the days of spinning rust + lots of random seeks.


I've been asked to blank out a screen in a single page web application and re-render after 250ms when people switch 'folders'. My boss thought it was a familiar user interface pattern to draw attention to the page content. This is... not the silliest thing I have done professionally.


No, it's about showing it long enough so you know what the dialog is and what it's doing.

If a dialog popped up for the brief amount of time it actually takes to unzip, it might freak out users: "huh? WTF was that??"


The Finder already handles this differently for file copy operations - you don't get a file copy progress dialog if it takes less time than you need to react (just an audio confirmation). It seems like Archive Utility didn't get that extra attention though.


Maybe it's a minor security feature. Unzipping something in the background might be something they want to always call attention to.


Maybe a new type of dialog could help? Rather than showing these blinking things, just show a notification along the lines of "24 archives extracted" (kind of like what ForkLift does).


Then just don't show a dialog.


Then users wonder "Did it actually save?"


So just pop a click-anywhere "Saved!" notification.


Then users will keep clicking on save until that notification comes, when your save takes just a bit longer than usual.


By hypothesis (see https://news.ycombinator.com/item?id=13135385), the operation is effectively instant. If it weren't, a UI on its own thread could notice that and act accordingly.


We were talking about unzipping. If successful, the unzipped contents appear next to the archive.


I have been made to do this, I called the function wasteMyTime() and all it did was waste the user's time for a few seconds with a spinner. Still a little bitter about it.


That's highly unlikely. Archive Utility is just slow.


I think you're probably right. I'd imagine that most of the code in Archive Utility.app is very old, and there's very little business reason to spend any engineering time on improving it beyond doing just enough work to make sure it works with API changes for later OS releases.

There are faster third-party GUI compression utilities that you'd find if you actually cared about how long it takes to decompress your archives, and the CLI utilities are all there out of the box for people who don't need a GUI.


If it's really old, shouldn't it be faster as it would've been designed to perform adequately on way slower CPUs? ;)


No? There are a dozen trivial reasons you can easily think of why that wouldn't be the case.


Newer means different. Older means different. Different CPUs meet differently optimized code meet a different Cocoa framework. Narf!


It's highly unlikely that it's 1000 times slower than the command-line utility at decompressing, given they share the same libs anyway...


It isn't, because they don't do anywhere close to the same thing. Watch the video:

http://robservatory.com/postimages/unzipping/finder_v_termin...

Where do you think the people who wrote the thing that happens on the left inserted a gratuitous sleep?


For each file: a split second to animate the appearance of the window (or the growth of the existing window), another split second to show the empty progress bar, a millisecond to fork the decompression process, a couple of frames at 60 Hz to animate the bar being filled, and another split second to animate the disappearance of the window (or the shrinking of the existing window).


None of these are deliberate idling to give the impression of work which is what the original comment was about.


Huh? ALL of these are deliberate idle time (idle as in "not spent actually decompressing") to give the impression of work ...


By that (deeply silly) logic, 'having a GUI' is 'deliberate idle time'.


No. You'll notice that your browser doesn't sequentially wait for each character to show up on the screen before rendering the next character.

The obvious thing to have done would be to have a single progress bar for all decompression operations requested in a single UI action. Darwin has had a process scheduler for some time now; they can be expected to make good use of it.

It is infuriating to be waiting for animations to end for literally 12 seconds.


Again, this has absolutely nothing to do with the original and inaccurate idea that Archive Utility waits around doing nothing on purpose to give you the impression it's doing something while it's doing nothing. It just means AU is implemented in a way that is slow and inefficient. Which is what I said so hey, we agree!


Obviously when they show the notification for each completed unit?


What is highly unlikely about that? It seems more plausible than zlib being magically 1000x slower when used by Archive Utility.


What seems more likely to me is that the "decompressing a large number of small archives" task is so rare that Apple hasn't optimised it. More likely than "Apple added an arbitrary delay to make the interface more satisfying."


It's unlikely it's on purpose. It makes sense in something like tax software you use once a year. It makes no sense at all when doing a bunch of i/o bound operations which are already potentially slow.


Yes, I've added delays in many web apps, even loader GIFs to show fake hard work.


I don't think that's the reason. It's more of an edge case due to some minimum animation timing.

If you were expanding 24 files of 100MB or 1GB each, the overhead of the animation would be much smaller relative to the total time, and a GUI showing the progress of each file would be quite informative, while the command-line version might sit there for a long time before returning to the prompt.


Don Norman talks about that example of intentional delays in some tax software here: https://vimeo.com/96714148 (approx 52:30).

It sounds like he and his team came up with the idea in that instance.


Mercifully, in those tax applications you can often just click the "Next Step" button to skip the useless "loading bars". I laughed out loud when I discovered this.


If someone is looking for an alternative with a GUI, I use The Unarchiver [0] and I'm generally happy with it. I don't usually have to expand lots of small files like the OP, so I don't know how it compares with that task.

[0] http://unarchiver.c3.cx/unarchiver


Seconded. The Unarchiver is always one of the first things I install on a new Mac.

The same author is also responsible for another program, Xee, which is a great, no nonsense image viewer. The 3.x version costs a few bucks, but the last 2.x version can still be downloaded for free.


The Unarchiver also handles .rar files, which Archive Utility can't do.


> a window with a single progress bar for the entire task would be OK, but would still slow operations down.

There's a simple way to do a visual progress bar with almost zero slowdown: run the task in a separate thread from the progress bar, and update the progress through lock-free shared variables. Make the progress bar read the shared variables only a couple of times per second, sleeping between its updates.
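A minimal sketch of that pattern in Python (decompress() here is a stand-in of my own, not any real archiver's API; with a single writer thread, a plain shared counter is safe enough to poll, and the UI tolerating a slightly stale value is the whole point):

    import threading
    import time

    progress = {"done": 0, "total": 0}     # written by the worker, polled by the UI

    def decompress(path):
        time.sleep(0.001)                  # stand-in for the real decompression

    def worker(archives):
        progress["total"] = len(archives)
        for path in archives:
            decompress(path)
            progress["done"] += 1          # single writer, so no lock is needed

    archives = ["file_%d.gz" % i for i in range(24)]
    t = threading.Thread(target=worker, args=(archives,))
    t.start()
    while t.is_alive():
        print("\r%d/%d" % (progress["done"], progress["total"]), end="", flush=True)
        time.sleep(0.5)                    # the UI only wakes a couple of times a second
    t.join()
    print("\r%d/%d done" % (progress["done"], progress["total"]))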


It's not only the communication between the task and the GUI that is slowing things down (if it is at all, which I doubt). Even more, I think, it's the fact that macOS, the OS in question, has conventions for windows, controls and so forth, one being that it takes X seconds for a window to appear and X seconds for it to disappear. If it was being shown and hidden again as fast as the task starts and finishes, it would just be a flicker on the screen, which would confuse the user. It also doesn't make sense to spend a second showing a window if the progress bar it embeds has completed before the second has elapsed, so there has to be a delay there, and so forth. The task is so quick that it really cannot be presented to the user without the presentation being slower than the task itself.

I think it comes down to good UI+UX, which in this particular case becomes bad UI+UX.


If they're simply following UX recommendations blindly, they could go for this one:

"Use a progress indicator for any action that takes longer than about 1.0 second."

https://www.nngroup.com/articles/progress-indicators/

You don't necessarily know in advance whether it'll take 1 second, but I think a strategy that would work is: after 1 second, if the task is not done, open a progress bar; and if the task completes immediately after that, display the completed progress bar for 500 milliseconds or so before closing it.
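A sketch of that strategy, assuming hypothetical show_progress_window()/hide_progress_window() calls (stubbed out here with prints):

    import threading
    import time

    def show_progress_window():            # stand-in for a real progress dialog
        print("[progress window shown]")

    def hide_progress_window():
        print("[progress window hidden]")

    def run_with_progress(task, threshold=1.0, min_visible=0.5):
        t = threading.Thread(target=task)
        t.start()
        t.join(timeout=threshold)          # let the task finish silently within 1 s
        if t.is_alive():
            show_progress_window()         # only now does any UI appear
            t.join()                       # wait for the task to actually finish
            time.sleep(min_visible)        # keep the completed bar up ~500 ms
            hide_progress_window()

    run_with_progress(lambda: time.sleep(0.01))   # fast task: no window at all
    run_with_progress(lambda: time.sleep(2))      # slow task: window after 1 s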


> If it was being shown and hidden again as fast as the task starts and finishes, it would just be a flicker on the screen, which would confuse the user.

Maybe for really inexperienced users, but it makes sense that something which happens quickly means a window that also appears and disappears quickly. This is why Windows has options to disable animations and other effects, and everything does feel noticeably more responsive when they are disabled --- windows and menus appear and disappear instantly.


I think an 'in-line' style of alert can work well in this situation, like a status bar but closeable. It can be dealt with at a time most convenient to the user, and still provides feedback. It could even just display the output of a CLI command with a given flag, even for a small subset of 'blessed' commands if necessary.


How did they not title this "Finder's Fee"?


Hah.


Heh.


I've encountered this in command-line programs as well. Print statements take a non-zero amount of time and, depending on the language, can be quite slow, so for an otherwise-fast operation in a loop, printing a line on every iteration can dominate the overall runtime. Removing the print statements in some cases can speed up the code by an order of magnitude.


Quiet reminder that writing to file descriptors is blocking.

STDERR and STDOUT are file descriptors.


Yep, and this is very noticeable if you're using Windows and you select any portion of the terminal: all stdout/stderr writes are blocked.

I was once on a call because one of our services had stopped responding. It took a while to figure out that there was a tiny one-character selection in the PowerShell window.


I often wonder if that's why it took them over a decade (two decades?) to enable QuickEdit mode by default. Sometimes I'll wonder why something is taking so long and realize that my palm accidentally made a QuickEdit selection in cmd.exe.


Maybe so, but, as pointed out elsewhere in the thread, printing to stdout is much, much, much, much slower than printing to a file. The fact that stdout is a file descriptor can only explain a vanishingly small quantity of the time cost of using it.


It's a combination of many things. You're probably writing to the file buffered, but terminal writes are generally unbuffered. In addition, I once had a terminal setup that just did a poor job of displaying text (line wrapping was slow, perhaps), and programs writing to stdout would be slow.


> printing to stdout is much, much, much, much slower than printing to a file

False dichotomy. stdout can very well be a file. What you mean is that printing to a terminal is slower than to a file.


True enough.


>/dev/null 2>&1


blocking. blocking. blocking. blocking. rtfm


Again, the problem is not that writing to file descriptors is blocking. The problem is that writing to the screen is very slow. Writing to a file is just as blocking, but you'll block for much less time, because writing to a file is not slow.


Avoid verbose flags on tar.


rsync as well, especially the progress bars.


Reminds me of this Raymond Chen post:

https://blogs.msdn.microsoft.com/oldnewthing/20060220-00/?p=...

A Windows program ran faster if you clicked and held the mouse button on the title bar (as if you were going to move the window) because it would stop the window from re-drawing itself, making whatever loop it was running (and constantly updating the GUI) go faster.


If we assume the underlying expansion is just as efficient, then the cost comes either from the boilerplate or from mistakes made when parallelising. In the case of the boilerplate, the solution would be embedding more of the linked code and reducing IPC and other unnecessary bottlenecks. However, I suspect the error is in the parallelisation, which is not surprising, especially as the authors likely optimised and tested for large files, not small ones.

As it is, removing the GUI is perhaps the worst thing you could do to the user. Closely followed by inducing epilepsy with that ridiculous expanding dialog.


"Finder (nee Archive Utility) should just execute the task without any visual feedback" -- no.


I agree there should be visual feedback, but maybe more of a compromise: e.g., bundle all unzip tasks into a single progress bar and rotate text strings to indicate which task is underway.

The "dancing" modal is quite ridiculous.


If an operation takes more than a human's average reaction time (usually 250 ms), progress should be shown. Jef Raskin and others wrote a bookshelf about this a decade ago, but I still see strange solutions everywhere.


In this case it's showing 24 progress bars, each for an item that takes less than 1 ms.


This is basically what happens in third-party tools. For example if you use 7-Zip on Windows to unzip multiple archives, there is only one progress window and progress bar, and it will display the names of each archive being extracted one after another.


I also thought that was wrong. One idea is a progress window if the task takes more than a few seconds, plus some visual indicator for success.


This isn't a dichotomy. It's possible to make a fast-performing GUI for this task. The GUI is slow due to poor engineering.


That's true, but I'd argue that a window that appears and disappears within substantially less than one second is often a bad idea anyway. Progress indicators are for things you wait on.


Apple disagrees - the Finder already handles this without any visual feedback for file copy operations - you don't get a file copy progress dialog if it takes less time than you need to react (just an audio confirmation).


J. Nielsen would disagree with you. A sub-second operation could very well run without visual feedback, with progress shown only after that. Does Finder have a way to show messages in a status bar or some such? For quick ops it would be ideal to just say "operation completed".


Windows explorer unzip is even worse. Recently in a VM I tried unzipping boost 1.62.0 using explorer in Windows 10. It said it would take over 3 hours. The same file was decompressed on the command line in about a minute using 7zip on the same VM.


Windows also always unzips to a fixed scratch directory before creating the real files.

That scratch directory is %TMP% or %TEMP%, I think, but it definitely always is on the same disk, typically C:

If you unzip files to another disk than where that directory lives, it unzips to its scratch directory, then _copies_ the extracted files to their destination, and finally deletes the scratch files.

That's bad in itself, but doubly so if the disk where it unpacks doesn't have room for the unzipped files.

Mac OS had a system call, "give me the temp directory on this disk" (FindFolder), that made it possible to extract to a temp directory and then move the result to the destination. That call made it into Carbon. I don't think macOS still has it, but it seems The Unarchiver does the right thing anyway.
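The saner pattern, sketched in Python (not what Explorer or The Unarchiver literally does), is to stage inside the destination directory itself, so the final step is a cheap same-filesystem rename instead of a cross-disk copy:

    import os
    import tempfile
    import zipfile

    def extract_to(zip_path, dest_dir):
        # The staging directory lives *inside* dest_dir, hence on the same
        # volume, so os.rename() below is an instant rename, never a
        # copy-and-delete across disks.
        with tempfile.TemporaryDirectory(dir=dest_dir) as staging:
            with zipfile.ZipFile(zip_path) as zf:
                zf.extractall(staging)
            for name in os.listdir(staging):
                os.rename(os.path.join(staging, name),
                          os.path.join(dest_dir, name))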


Internet explorer did the same thing with downloads; I discovered this when I had a 1GB C: drive and a 6GB D: drive and tried to download something that was too large to fit on the C: drive.


For the longest time, I searched for a GUI archiver that didn't use temp directories at all.


For a tool that's supposed to be on most desktops by default, Explorer has some strange inefficiencies, such as the estimation and the loooong "preparing to copy" phase. Perhaps they are carried over from the old Windows NT codebase?


Of course they are. MS is the biggest visual theme seller on the planet.


Reminds me of this XKCD haha https://xkcd.com/612/


It's fixed now, but for a while it was significantly (20 to 40%) faster to install npm without the progress bar.

https://github.com/npm/npm/issues/11283


Just to clarify, that's 'npm install' not 'install npm'.


The window animations in Mac OS itself can make things feel very sluggish as well, although the velocity-based ones seem better with Sierra.


There's a point where printing progress is actually a huge bottleneck to performance. Try it out in Python: write a for loop that iterates a million times and prints "Hello". Run the program once normally, and a second time redirecting the output to a file. You'll find the first one takes 15 seconds, while the second one is near-instantaneous.
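For the curious, the whole experiment is:

    # hello.py -- try "python hello.py" vs. "python hello.py > out.txt"
    for _ in range(1000000):
        print("Hello")

Part of the gap is buffering: CPython line-buffers stdout when it's attached to a terminal but block-buffers it when redirected to a file, on top of the terminal itself being slow to render the text.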


This is a false equivalence. As long as progress is reported via messages (a queue) to a separate UI thread, the time lost will be under a millisecond. The naive print loop you describe is obviously I/O bound, and print is a blocking I/O call. Use async print I/O and you'll get much better performance. Narf!!
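For example, with a standard queue between the worker and a consumer loop standing in for the UI thread (process() is a made-up unit of work):

    import queue
    import threading
    import time

    TOTAL = 1000
    updates = queue.Queue()

    def process(item):
        time.sleep(0.001)              # stand-in for the real work

    def worker():
        for i in range(1, TOTAL + 1):
            process(i)
            updates.put_nowait(i)      # costs microseconds; never blocks the work

    threading.Thread(target=worker, daemon=True).start()
    done = 0
    while done < TOTAL:
        done = updates.get()           # a real UI would coalesce these redraws
    print("finished %d items" % done)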


They should make the UI idiom be feeding your file a mushroom and then doubling the icon size on completion.


Only if they want to get sued by Nintendo...


Nintendo got it from Alice in Wonderland, so perhaps prior art?


The rationalizations are kind of silly... oh no, the uninformed users might otherwise do or think this or that. Actually, this is shit workmanship, that is all it is, and it's up to the people who have the faintest clue to point at it and laugh until it becomes better workmanship. If you spend the energy you should be using to attack it on defending it instead, you're now part of it.


Even worse, IMO, is that the default behaviour for Archive Utility is to just unpack the whole contents of a zip file into the directory where it's found. Sometimes I just want one file out of a huge zip... Actually, quite often I want to see what's in a zip before uncompressing it.


Isn't there some disk caching going on here? Surely the OS is not writing directly to disk immediately after gunzipping. I don't think that's enough to explain the large difference, though.


Great example of someone justifying engineer-driven design over user-driven design, without understanding at all why it was made that way in the first place.


"... slow, it’s the GUI interface to the computer ..."

he ... he wrote GUI interface. heh.


Just want to add, `unzip` from your terminal window can be used as well. If opening those gz's every month is a pain, you could handle it with something like: `for f in my_stuff_{1..99}.zip; do unzip "$f"; done` (unzip treats extra arguments as patterns to extract from the first archive, so it needs one invocation per file).


The files in the article aren't zips, they're individually compressed files, so unzip won't work; `gunzip` (or `gzip -d`) is the tool for those.


This will be sort of bluntly stated, but I challenge anyone to show me an example where animation is a usability improvement over no animation at all. I understand it may be visually pleasing, but I work on computers; I don't enjoy them as an art form. I do some UI design too, and I don't remember the last time I added an animation. Can anyone find an exception?


Most transition effects where things are added or removed from a group of objects - without the visual cue, this can happen so quickly it's hard to see where the removal or addition happened. The tab bar in your browser is a good example.


Many animations are simply stylistic, but others convey information.

Any animation that gives you "almost there" information about a gesture; for example, holding your finger on the screen in Minecraft PE starts an animated circle. When the circle completes, the action is complete. It provides extra information.

Ditto for the "swipe right to delete" animations in iOS mail; the animation lets you know which side of the "show options vs delete" wall you're on, and how close to the edge.


The rotation animation when turning your phone to landscape mode.



