I have to pile onto the criticism of the hidden scroll bars, and other similar features (like the hidden buttons of PDF viewers now, such as in Chrome). Discoverability is a problem.
For anyone (cough, elderly parents) who isn't adept at discovering hidden features, these things can be utterly mind-boggling and frustrating. Even I was stumped for a good minute the first time I tried to print/save/download a PDF after that "feature" came out.
I don't really need the small sliver of menu space in PDF view to be reclaimed -- and for what, a "clean" look? Those are real and important functions I desire. What I actually need is for news and blog sites to stop covering 1/4 of their vertical window space with hovering frames, ads, and banners asking me to subscribe. Which, by the way, subsequently don't properly calculate into that now hidden scroll bar's movement and cause you to overshoot the displayable area when paging down. End rant.
> For anyone (cough, elderly parents) who isn't adept at discovering hidden features
You don't even have to be elderly.
The 20-somethings in the office (pre-pandemic) had no idea that on an inconsistent smattering of Apple apps, if you pull down, a hidden search bar magically appears.
How did that conversation go?
Dev: Where do you want the search bar to go?
Designer: Put it somewhere that no one will ever see it, or ever think to look for it.
Who thought that was a good idea?
I think people in the SV/HN bubble play the "what about the elderly?" card too often, because we're afraid to admit that we, too, don't know how stuff works anymore.
I couldn't agree more. I once had a really interesting conversation with an accessibility person who said in passing "accessibility is for everyone". He didn't make a big point of it, but it really stuck with me, and I'm always reminded of it when I see things like this. I've been working in the digital world since the late 90s, keep everything up to date, have a recent iPhone etc. etc. etc. and I'm still occasionally stumped by things like how to turn on repeat in Apple Music on my phone.
Got a new Android phone (I think it was a Huawei) and couldn't answer calls. I assumed, as always, that I had to press or slide the green phone icon, but I actually had to drag the center grey icon onto the green phone icon.
I was planning to reply to your parent comment that discoverability has always been an issue for me when I work with Apple devices. Whenever I use my wife's iPhone or Mac for a couple of minutes, I get frustrated because everything is all over the place, none of the apps work well together, and everything is hidden.
Back on Android (or Windows, or Linux), it makes sense, whereas she loses her way. Just to point out, a part of discoverability I guess is familiarity with the underlying principles the UI follows as well.
To add two more points to this. First, I think there's one thing platforms in general got very right: simple touch controls. My one-year-old is discovering that sliding fingers across a screen, touching, and pinching (the three basics) does stuff. Hidden menus, press-and-hold, and so on are secondary. Second, icons are something platforms got wrong. You need to guess their initial meaning, and they translate badly into conversation, like recently with my mom: "press the rectangle with four arrows pointing out of it" (fullscreen or whatever it was).
"Here's an image, guess what is supposed to happen when you click it, also i will give you no info on hover over -- you must push me and then try to guess what i did!" (btw if there's a bug or a configuration state in the app that you or i am unaware of -- you might've pushed me at the wrong time and i might've let you do that)
This is a bad example. The floppy disk icon means "save" and has now for very nearly as long as floppy disks were in common use. It's the same way a "tape" icon means something to do with video even though video cassettes were only a common thing for 10 years.
The Apple II, which came out in 1977, was the first computer widespread enough for people to have seen one with a floppy disk. Apple dropped the floppy disk in 1998 (21 years later), so it's been 22 years since it was standard in Macs.
Even allowing for some switching time, it has meant "save" for longer than it was a common physical object.
Awesome question. Replacing it would be worse, of course (because then not even the old-timers would understand it). One thing I liked about Google Docs was automatic save; maybe this is the trend. If you take a 5 MB picture on your smartphone, there is no question of whether you want to save it. Space is cheap.
But if I really have to come up with something, I'd say a green arrow pointing up, meaning "upload to your (own private) cloud".
I appreciated the Windows Phone 7/8 and early Win10 approach to compact icons which was/is: Just have a button next to the icons that expands them to show labels when you press it. Unfortunately later redesigns to Win10 apps seemed to drift away from these patterns towards just having unlabeled icons everywhere.
Windows 10 introduced the navigation bar, which is essentially a vertical version of the Windows Phone application bar, just using a "three lines" icon as the "show labels" button instead of an ellipsis. https://docs.microsoft.com/en-us/windows/uwp/design/controls...
The start menu and inbox mail app use this control, for example.
> Just to point out, a part of discoverability I guess is familiarity with the underlying principles the UI follows as well.
This is one reason why UI patents are not a good idea. When different platforms are forced to adopt differing UI patterns because they can't do what their competitors are doing, you get the current mess.
Speaking of Android, I used my phone as a hotspot for over a year. When I dropped my phone and the screen broke, while I waited for a replacement screen to DIY, I was able to use ADB to enable the assistive tools that read the screen aloud. With that I was able to toggle on and off some settings related to keeping the hotspot on and churning.
Accessibility can help normal people in extraordinary situations too.
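For the curious, it was an incantation roughly like this (a sketch from memory; the TalkBack package name is an assumption and varies by vendor and Android version):

    # enable the screen-reader service so the phone talks you through the UI
    adb shell settings put secure enabled_accessibility_services \
      com.google.android.marvin.talkback/com.google.android.marvin.talkback.TalkBackService
    adb shell settings put secure accessibility_enabled 1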
I had actually thought Apple might have removed the Repeat functionality when I upgraded iOS a few years ago; when someone else showed me how to do it--which involved scrolling the entire app to find some new hidden features--it felt more like a prank than real software :/.
Accessibility makes things easier not only for those who need it, but, somewhat surprisingly, also for quite a lot of people who wouldn’t normally think of themselves as even needing or wanting any help. This is usually referred to as the Curb Cut Effect, and has been extensively discussed.
With examples given like curb cuts for wheelchairs helping everyone, and OXO brand kitchen utensils meant for impaired hands just being better for everyone.
Pretty sure the Apple search bars used to be discoverable. When you first opened an app (like Messages), the search bar would be visible by default and you had to scroll down to hide it.
Over time, they started hiding it by default and require you to scroll up to access it.
I think that, at a general level, this principle of development is reasonable: things that used to require being extremely explicit can become more implicit over time as users become adapted to it. Computers used to feature tons of skeuomorphic design to make it obvious what everything was (so you could think about your computer with the same lens as you thought about your desk, for instance), and now we've mostly done away with that because the vast majority of users don't need it.
Where Apple fails in this regard is two things, I think:
1. They assume prior familiarity among groups who won't have it.
2. They do not leave any affordances to even suggest the presence of something hidden for those who won't know it (or have forgotten).
For the scroll bar issue, I think they could introduce an affordance in the form of, say, a couple small horizontal bars at the top of the page that kind of indicate "you can pull down on this". While early versions of this same design were pretty large and could be considered intrusive, I think users are familiar enough with the "tactile" touchscreen elements that we could develop a smaller version to use for these things.
---
Your point in general still stands, though. Apple has become more and more egregious about this over time. Things used to be very discoverable, and now there's tons of hidden functionality in most of their apps (both mobile and desktop). I think they've succumbed to more feature creep over the past couple of decades than they want to admit.
> so you could think about your computer with the same lens as you thought about your desk, for instance
I honestly never bought that, because I've never seen it in practice. Personal observation / anecdote time:
(I'll hesitantly be generalizing from myself and people I knew as a kid/teenager to the entirety of my generation in non-English-speaking countries, but I feel it's justified.)
As a kid first discovering computers, I never got the connection between various skeuomorphic terms and their meatspace counterparts. Half of the time, I wouldn't even know what the word referred to outside computers! E.g. I was a proficient Windows user before I figured out that "desktop" is the top of your desk (or, in Polish, "pulpit" is something a very old school desk would have). I didn't know until many years later that "icons" are religious pictures. Or take window - the only connection between GUI windows (one or more rectangular frames in which an application is contained) and real windows (a rectangular frame you can see some part of the outside through, and that parents regularly ask you to clean) is the "rectangular frame" part. Might as well have called it a "frame"[0].
Point being: myself, and my family and friends, and (almost) everyone I came to physically know in my country - we've learned all these terms, in Polish and English, without understanding the skeuomorphism. We grokked the concepts through interaction and explanation of what happens on a computer ("programs draw stuff in 'windows', 'windows' can be resized, closed, etc..."). The terms could've been entirely invented words (like "foobar") for all they were worth - hell, they often enough were, from the POV of someone who sees an English word without knowing English.
--
[0] - And in fact that's how the Lisp world referred to it before computers were available to the general population. Emacs still retains the nomenclature, which leads to no end of confusion for 21st-century users.
Emacs user for 20+ years here. I have written a nontrivial amount of elisp. I like emacs; and, yet, I still sometimes get confused by the difference between a window, a frame, and a buffer. Sometimes I encounter a phrase like “frame buffer” and just ignore it to avoid going down that rabbit hole again.
Ah, I see. That's a fair point! I hadn't considered the difficulty of language choice when it comes to non-US computer users. Thank you for sharing your experience!
But I should clarify that I wasn't even thinking about terminology when I mentioned skeuomorphic design. I meant things like how the "trash" on the desktop looked like a physical trash can, how the "save" button looked like a floppy disk, how the calendar app was meant to look like a physical calendar, etc. The system was designed (visually, I should say) to remind people of their physical offices so they could more easily interact with a computer when they had no prior context for it.
But over time, we have moved away from these physical allusions because people have simply become familiar with computers.
Back when search bars were table view headers, you had to do non-trivial work to hide them by default (which Apple did anyways, because Apple). With iOS 11 search controllers can integrate directly into the navigation bar, and in that case the search bar will be collapsed by default.
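For the UIKit-curious, the hookup is roughly this (a sketch; the collapsed-by-default behavior comes from hidesSearchBarWhenScrolling defaulting to true):

    // inside a UIViewController, iOS 11+: attach a search controller
    // directly to the navigation bar
    let search = UISearchController(searchResultsController: nil)
    navigationItem.searchController = search
    // defaults to true, i.e. the bar stays hidden until you scroll up;
    // set it to false to keep the search bar visible
    navigationItem.hidesSearchBarWhenScrolling = false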
Integrate directly? Like is there a magnifying glass icon or ANYTHING to make it discoverable? Or do I just have to guess I should scroll up in i.e. Settings? cough Thanks, Apple.
I'm 28, and I don't know how to use most apps and websites anymore, and I'm a web developer. My intro to computers was when I was 6 years old, trying to install games on Windows 95 in a language I didn't understand (English). The consistency of the layouts was the only thing that made that work. So many of my childhood experiences with computers relied on consistent spatial representations. Kids seem to do just fine learning all these unique UIs, but I can't help but feel like there's a loss of "general computing capability", for lack of a better term?
A while ago I had a discussion about our UI with our designer, and when I mentioned that nothing seems to work as you’d expect, he said he wants people to learn a unique way of dealing with our interface.
It boggles my mind how that is something you could actively want.
As a designer, I'm ashamed of reading this. This is the opposite of what designers are supposed to do. UI is not art, you can't just do whatever you want. You need to abide by user patterns and conventions, because our job is to get people from A to B in the most efficient way possible. Period. Your designer is just bad.
But "designers" are surely understood to be engineers. This problem exists because the company creates positions for designers. If positions were created for usability engineers, we'd get a completely different outcome. (I wish repeating this over and over on social media would cause it to happen.)
Designers != Artists
Designers are supposed to create things that humans can relate to and use, the end-user should always be in mind. They do not just churn out art for its own sake and damn the beholder. So I don’t see the distinction here.
> They do not just churn out art for its own sake and damn the beholder.
But that's the point of this article. As a matter of empirical fact, they do. Visual appearance/attractiveness has dominated over usability since smart phones became our ordinary means of interaction.
(NB. artists never "churn out art for its own sake". They usually want to do something with that art. But to the extent that we allow usability to compete with visual appearance, it can be called "churning out art for its own sake".)
Unfortunately, our UX engineers are on the front-end team. And anyone working with React is just a code monkey, so if they suggest anything related to their area of expertise they cannot possibly be right.
Requiring users to learn the idiomatic way your software behaves in order to be productive may be warranted if your functionality is complex, heavily used and more or less unlike any existing software.
Or, your software is niche and its alternatives have such terrible UI that trying to do what the user expects is adding 3x clicks and actively hurting their productivity.
There can be legitimate cases for learning curve, especially if unlearning is involved. If so, the fact that your designer wants the user to learn is a very welcome position. If they make learning effortless and intuitive, you may end up with a solid piece of software.
I know what you mean. I had so many experiences as a young person figuring out my own and others computer problems but just sort of knowing where to click, and how many times, or how many seconds to hold the button.. resetting the screen resolution with just the keyboard when you couldn't see anything, getting out of crashed full screen windows with "alt+space down M move the mouse"..
> The consistency of the layouts was the only thing that made that work
This is something that we have lost completely with the web. Just having OK/Cancel buttons in the same place, in the same order, in every dialog box was very important to developers at the time. Now it's all over the place.
I remember helping a friend fix his Windows 98 setup. He had accidentally choosen and confirmed a resolution that his monitor would not accept. I told him the exact set of Tab and Down and Enter presses to get to the correct element in UI.
Actually, it didn't work, because I told him to Shift-Tab to go from the first tab stop to the last in one go, except he had an extra tab stop on his computer (I think due to a graphics card) at the end. If I had told him to Tab 5 times instead, it would have worked.
Learning something new requires more effort, because you have to unlearn other things. But it's not even clear what needs to be unlearned. So we founder.
Internally, we feel like we stopped learning. But I'm certain that if you were airdropped into a new domain, you'd be a voracious learner.
It's not learning new things that's necessarily bothersome to me. The problem is that it often feels arbitrary, and I can be sure that I'll have to re-learn all of that again in six months.
As @strogonoff pointed out elsewhere, there exist domains and niches where a complex and different UI is warranted. Scientific and creative software, for instance, falls into this category. Most software I use does not. I just want to change my account settings, or search for something without feeling lost. Personally, I'd like to see innovation in UX beyond hiding stuff, moving it around, or making it look a certain way. Whenever I see that some app I use has "revolutionized" their UI, the changes usually make me think of this:
That's a pretty good rant. Very Donald Norman. Know that I mostly hate rants about UI, ergonomics, design.
The follow up is pretty good too.
I've long had some notions about incorporating haptic and tactile stuff (I'm not saying "feedback") into the communication channels. Alas, I've never had the gumption to execute.
One very old notion was adding physical feedback into the mouse. A clicker/knocker, so the mouse would feel like it was passing over a bump. And one of those whirly things to make the mouse buzz.
Someone's actually done the physical part. But I don't think anyone has integrated it into the UI/UX.
Certainly nothing as sophisticated as Apple's trackpad.
I openly admit it these days. If I had a VCR today I'd be the one struggling to figure out how to record something with it, even though as a kid I loved that stuff. And I'm only in my 30's. It's not really about whether I'm able to figure it out (if 8 year old me could figure it out, 30+ year old should be able to as well), it's about whether I have the patience for a badly designed UI.
One of the common responses I see when I complain about a bad UI is someone saying "no it's easy you just do ____". It's frustrating because that's not the point - the point is that I had to do research to find the answer!
There are cases, especially with specialized tools for complex tasks, where you are expected to spend time learning how to use the tool effectively. Photoshop is a great example. Vim is a great example. Figuring out stupid stuff like whether that grayish light blue vs dark gray fill on the checkbox means "enabled" or "disabled" is not an example.
I am sometimes amazed at how complicated a UI someone can create with, e.g., four arrows and two extra buttons.
Like, I would naively assume that the arrows would move your selection up, down, left, right, and one of the extra buttons would be "select/confirm" and the other would be "escape". But no, that would be too easy. Depending on the exact position in the menu, sometimes the right arrow, sometimes the first extra button, and sometimes the second extra button selects the highlighted thing. Sometimes the up and down arrows adjust the selected thing (e.g. volume, brightness) and you need to press one of the extra buttons to stop doing so. Sometimes one of those extra buttons acts as a hotkey that switches you to a completely different menu, so it takes five minutes to navigate back.
Then sometimes a confirmation window opens, with two buttons, "yes" and "no"; one of them is pink and the other light blue. If you press any of the four arrows, the colors switch, and you are free to guess whether pressing the extra button will confirm the pink option, confirm the light blue option, take you to a completely different place, or turn off the menu.
Monitor buttons are particularly awesome. Often you can't even see what you're pressing because it's behind the bezel or just an unmarked capsense location, and then whether it's up, down, increase, decrease, forward, back, reset, etc. is all different for each page of settings.
Bonus points: turn the monitor to portrait mode and do it sideways.
Both monitors on my desk have a single button on the back side of the bottom right corner. The button can be pressed or pushed in each of the four cardinal directions (left, up, right, bottom). It's actually really nice. Since it's only one physical button, there's never any guessing which button you're on. And once you know the menu structure, common tasks can be done completely blind. (For example, I know that switching between input sources is press-left-press. I can do that without even looking at the screen.)
> If I had a VCR today I'd be the one struggling to figure out how to record something with it
That's a little hard to believe. Nothing is hidden on a VCR. There are input connections, a tape, and a big record button. At most there is an additional source button to choose the input source. It's all there, in the open, and not hidden.
Today's interface design is really all over the place. There's just too much stuff hidden and inconsistent. The source of this is our ever increasing reliance on software. If one thinks of writing software as both storing knowledge and communicating with people, it makes sense why software experiences are so bad. Communication and knowledge transfer are in general difficult processes, and now we've automated them at scale but hurried and botched them. Something even more is lost in translation. When we use a software-based product, we're experiencing all the stored miscommunication embedded in the device.
> Nothing is hidden on a VCR. There are input connections, a tape, and a big record button. At most there is an additional source button to choose the input source. It's all there, in the open, and not hidden.
Some VCRs had a timed-recording setting. More commonly, and you saw this in cars before they got touchscreens, there were settings and options that you had to navigate to by hitting a specific sequence of buttons to get to the appropriate menu.
The core loop was generally fine. It was when you needed anything else that suddenly you needed the manual.
My new air conditioner control panel suffers from this. It has up/down/left/right buttons, and it switches seemingly at random between horizontal navigation plus select/deselect and vertical navigation plus select/deselect. Programming a day takes around 30 key presses. And there is no bulk-update functionality.
I recall VCRs having a lot of non-obvious digital controls: a primitive on-screen HUD navigated with arbitrary buttons. Everyone understood play/stop/record; what was confusing was setting the on-screen time using those same buttons.
That was only on the last generation or so of VCRs. For the first decade or two, programming them was like setting an old digital watch with 2 or 3 buttons and making different digits on the clock flash.
I disagree with "last generation or so"; ours from the mid-to-late 80s had it as an option (you could set a single recording in the way you describe, but needed the TV display for multiple recordings). Apparently the Akai VS-2 was the first with an on-screen display on the TV, in 1983.[1]
The other hidden complexity was reception and tuning - so many people had problems getting the antenna pass-through working, and then getting the VCR picture to display on a TV channel (no switchable inputs in those days).
Maybe it's that same "older so I lack patience for poor design" thing, but nowadays when I play a game I often - not always - have a web page on my other monitor so I can look up how to do XYZ unnecessarily complicated stupid shit. Crafting recipes are the biggest failures.
Warframe is THE game I think of for this. Even when I learned how it all worked, I needed a bunch of reference tables/posts so I could grind with maximal efficiency!
Nowadays, if a game has "crafting" I am 90% inclined to skip it.
Being tech savvy can be a burden when it comes to discovering new features because you're afraid of making a "stupid" mistake. Accidentally posting to Facebook is something the tech illiterate do, not us, so why would we click a random icon when we're not sure if it will do something embarrassing?
I know my willingness to explore UIs went down a lot when
1) All the buttons became icons instead of text that clearly explained what was going to happen and
2) My phone became the place where all of my social media accounts were always logged in and all my internet activity was centralized. It became easy to accidentally share between different worlds.
> It became easy to accidentally share between different worlds.
This has burned me more times than I'd like to admit. I have a MacBook for work and activated iMessages on it for my personal account. One day I accidentally posted something in a work chat room that was meant for someone I was chatting with in iMessages. It was kind of personal, but thankfully vague enough that I could brush it off. I was also on some screen-sharing sessions when I switched to iMessages, which briefly showed personal conversations. After a few times of that happening, I disabled my iMessages account on that machine and made it a personal rule to never mix business and personal activity on any company-issued device, which has served me well over the years.
Yes! I've lost count of how many times I've e.g. opened Google Calendar from my current Gmail window and have it pull up the calendar on a different account with no warning that it's doing that. (If I'm not logged in on the other, it asks me to log in with no option to go to the original.)
Somehow they made it impossible to specify or indicate the account in the URL to squash any uncertainty, you just have to hope the cookies are right.
It's actually worse than that - the account is indicated in the URL, but it's expressed as an offset into a list of accounts in the order you signed into them!
So a ".../mail/u/0/" means the first Google account you're logged into, ".../mail/u/1/" means the second, etc. But if you did it in a different order on a different machine (or even browser), it'll redirect to a completely different one.
I would love to have been a fly on the wall in the meeting where it was decided that was a good idea.
I am a UI spelunker. I right-click, option-click, double-click, and force-press every little interface element, and I am so happy when the software delights me with a surprising feature.
I type the Konami code as soon as I discover a web app supports keyboard input. I have no problem pulling-to-reveal or shaking to undo. In fact, it makes me feel awesome to know all these little secrets and have people go: "Wait! How did you do that?"
But when I design software I don't do it for people like me. I want my users to feel awesome and delighted without being UI spelunkers. Good design doesn't make you think.
Excessive information can be overwhelming. I fully hate these design decisions, but I can see where the original drive to reduce clutter came from. This seems like "way too much of a good thing" rather than an outright bad thing.
Oh wow! Thanks! I never knew that. I thought that info just wasn't accessible. It should always be visible... at least for the most recent message, since you'll want to know how old it is.
Side note, I've always suspected it's an issue of too many cooks in the kitchen. They spend tons on UX but end up with something barely usable.
Once you discover that swiping sideways on a message reveals metadata, you notice that this is transferable to most messengers on iOS (all I use). The first time, I discovered it by accident while playing with the device. I notice that I pretty much never use this feature.
Point is, you shouldn't have to "discover" anything about an interface. If it's hidden from the user, it's an easter-egg that should be nothing more than amusing, but useful functionality should never be deliberately kept from the user.
I use it all the time. I work off-shift, so most of the messages I receive were sent quite a while ago and are pending when I wake up and grab my device. Knowing when they were sent is a very important bit of context.
If you spell everything out all the time, you will have laughable information density and hurt your user’s productivity. I would strongly object to that.
It seems natural to spend a little time exploring the interface of an extremely complicated instrument that is an integral part of your daily life, reading tips, etc. That also means that if the device is not all that heavily used (say you have assistants for that), you are spared wasting your cognition cycles on simply being aware of all the features it has in store.
FWIW, the search field in Apple's own apps, like the Settings app mentioned upthread, is revealed to me every time I scroll from the bottom back up to the top of the list due to inertia; I don't know if they changed it recently or not. (Other apps often make a mess of the same UI pattern. In WhatsApp the hidden pull-down is much more difficult to discover, requiring you to really pull down and wait to see Archived Chats, and meanwhile the Search field is just there wasting screen space despite being used once every few months.)
That said, I don’t have a strong opinion about message timestamps in particular—I believe there are ways of showing them smartly if you haven’t used your phone for a while and missed a bunch of messages without overloading the UI for people like me who couldn’t really care less about that (RIP my IRL social life as of late). Perhaps Messages could use more attention from a good product designer.
Even worse than undiscoverable features are land mine features, where some subtle gesture triggers a wild and unwanted mode change. Extra credit if the mode change is hard to get out of.
There's a Windows feature like this which I hate with a passion. If you grab a window and shake it side to side, it minimizes all the other open windows...
So sometimes I'll grab a window to move to my left monitor, decide instead I want it on the right one, and suddenly everything is gone and I now need to go open every single window one at a time. It even took me months to understand that the shaking was what caused it. I would just sometimes be working and suddenly everything is gone and I had no idea why.
If you're referring to Minecraft, one of the reasons Minecraft blew up that much was that it was around at exactly the right time in history: when online gaming had become viable and let's-plays were just becoming a thing. Minecraft, as originally intended, is a communal experience where you learn from the friends you're playing with and from sources on the internet. If I had to describe Minecraft in one sentence, it would be "let me show you that cool thing I saw Pewdiepie do last week". The idea has become commonplace in video gaming at this point, but back then it was fairly new.
(That said, opaque game experiences have been successful before, like the original Legend of Zelda. I guess the schoolyard served a similar purpose back then.)
I also very much hate this feature, and trigger it often. However, if you shake the window you still have again, all the minimised windows should be restored.
Agreed (re: shake again to reverse), but not always - i.e. if you let off the mouse click, there is no option to shake again to undo the initial shake. Bottom line, I disabled this "feature".
I do think this is one of the best examples of the issues called out by this article/thread.
(Or maybe how Google Docs has a button covering the scroll bar slider once you scroll to the very bottom of a Docs file.)
This isn't correct. There is something that happens which makes it forget which windows were open; I am not sure what it is. But it is definitely the case that sometimes I can shake again to put everything back the way I wanted it, and sometimes I cannot.
You can also disable it, but if you want to keep snappy windows you have to do it through a regedit, since the two features are not independent for some (IMO stupid) reason.
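For reference, the tweak people usually cite is a single value (assuming a recent Windows 10 build; back up the key before fiddling):

    :: disable only the shake-to-minimize gesture, leaving Snap intact
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v DisallowShaking /t REG_DWORD /d 1 /f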
The most amazing thing about this feature is that no matter how much I try, I never seem to be able to re-shake the window in the proper way (which would bring all windows back up again). But nice that it's at least possible to disable it via a registry hack.
Upvoted because I never knew about this. As a user who prefers keyboard short-cuts, I usually press Windows key with D (alternatively, right-click on the task-bar and choose Show Desktop) if I want to minimise all windows.
Anyhow, I just tried shaking a window on my work laptop and discovered that this behaviour activates on the second change of direction in rapid succession.
While I was experimenting, I also discovered that shaking the window again restores the other windows so you don't have to restore them individually if/when this happens accidentally.
The one I'm unlucky enough to hit is the "icon resizing" that happens when a movement on the touchpad is performed in combination with a pressed Ctrl key, which sometimes is not actually pressed but the OS believes it is (some "zombie" state). It's happened to me on different notebooks and even on the desktop.
I don't think anybody needs that "quick" way to do it, and nobody needs to change icon sizes often. I have no idea if that misfeature can be blocked.
I've been back on my windows machine recently for a new side project and, ironically, I've really enjoyed this feature.
It's akin to shaking my head before focusing on a specific task. I shake the task window and the other applications that I still want open but won't be using for a bit all hide away.
There's another "feature" which I rarely hit, and I don't even know how I did, what it's called or for, but when it happens, a draggable divider shows up in the middle of the screen, all my windows get "crushed" on the left side, and the right side is a dark blue/purple blank. I can drag the divider to restore the desktop, but it seriously angers me since none of my windows' original positions nor sizes get restored. Unfortunately, I have no idea how to disable it either. I believe it appeared starting with Win8, as I never saw it in earlier versions.
That sounds awful - glad I've never encountered it. FYI, you can use a lot of the built-in window management functions via keyboard shortcuts - Windows key + Shift + [Left|Right] arrow key moves a window between monitors.
Windows keys started the nightmare for me decades ago. So nice hitting one of those while gaming full-screen!
This conversation has happened a thousand times: "but you can hit them again and restore what you were doing"... no, I can't, especially if I have pressed a dozen keys since.
I used to buy IBM keyboards from more civilized times, but my dealer disappeared, maybe he was arrested?
My rather new HP laptop has Scroll Lock hidden on the unlabeled combination Fn+C, which is obviously right next to the daily-used Ctrl+C. It's a toggle, and there's no LED or any OS indication of whether it's on or off. Good luck understanding why the arrow keys are broken in Excel :)
Better yet, use any keyboard you like, and remap (left) Windows to a different key code, ideally corresponding to some key that's not present on your current keyboard layout and that, unlike the Windows key, is fully remappable by AutoHotkey, in-game options, etc., using something like
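a minimal AutoHotkey v1 sketch of the simpler, in-script variant (the registry scancode remap is the lower-level option):

    ; turn the left Windows key into F24, a key no physical keyboard has,
    ; so full-screen games see a harmless, freely bindable key instead
    LWin::F24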
I don't play much lately, but I do have a Keyz Iridium. It was the cheapest mechanical keyboard I could find, or at least the one that feels closest to mechanical. Don't get me wrong, the trippy colours are nice too. And... it has a combo to disable the Windows keys!!
The one that always got me was the feature on OS X where if you move your mouse to another screen and move down, it moves the dock to that screen. It is possible to do without meaning to and it is fairly loose with how it determines that you've done that, and so the first year or so I had 2 monitors, I'd do it once or twice per month without realizing how it happened.
Oh god. My phone has one of those - there's some sort of swipe gesture which puts it into a split-screen mode, which I have only ever invoked accidentally, and I have no idea how to get rid of it - none of my various guesses at possible anti-swipe actions have worked. When I pull my phone out of my pocket to discover that it has gotten into this state, I just reboot.
> Oh god. My phone has one of those - there's some sort of swipe gesture which puts it into a split-screen mode, which I have only ever invoked accidentally
Somehow my 1 year old son can invoke it repeatably every time he gets his grubby little hands on my phone...
Settings | Home Screen & Dock | Multitasking | Allow Multiple Apps
No idea if the same setting exists on iPhone or if it disables split screen, though.
The whole feature is so badly designed on iPad. I was forever detaching a Safari page and ending up with two Safari apps. Tapping the Safari icon wouldn't take me back; it only takes you to the detached page, so you think you have "lost" all the other pages. You need to close the detached page (none of that would be obvious to a novice). There's also getting a page to detach into a popup by mistake, and then not being able to get rid of it (I still don't know the correct swiping motions!).
On older Android a press and hold on an SMS conversation brought up the context menu options, of which there was only one: SPAM - block this number.
So if you accidentally hold the press for a fraction too long you block that number.
And then you wonder why your friend(s) don't love you anymore...
To unblock it, you have to go to the Contacts and dig through options there, the SMS app showed nothing of help. It wasn't obvious that you had it blocked!
Let's look at a typical novice's session with the mighty ed:
golem$ ed
?
help
?
?
?
quit
?
exit
?
bye
?
hello?
?
eat flaming death
?
^C
?
^C
?
^D
?
---
Note the consistent user interface and error reportage. Ed is generous enough to flag errors, yet prudent enough not to overwhelm the novice with verbosity. [1]
I do think age can play a role, or rather experience. As you live through more UI churn, the less you're willing to invest time into each new UI generation or fad.
If an application isn't either essential to me or incredibly obvious to use, I am unlikely to learn its fancy new UI. Why bother?
This extends to APIs and DSLs too, since they are often so short-lived.
It might be not so much age as experience. With experience comes expectations of what a control should look like and how it should behave.
* It's the experienced users who were less likely to recognize the "burger menu" or the three dots (does that have a catchy name?) as something potentially useful. They didn't look like "real" controls. It was the less experienced users who just naturally tried clicking on anything near the top that looked interesting, or waving the cursor around looking for hovertext on active components.
* Confronted with a landing page that seems to have nothing more than some splashy graphics and a few words - notably no scrollbars - experienced users are likely to wonder if the page is broken for them (because that was a very common experience at the height of the browser wars and still persists). Less experienced users who grew up with smartphones are more likely to start swiping right away.
A user who was older but not experienced with computers would align more closely with the young 'uns on most of this. When everything's completely new, you just start experimenting. It's when you think you know how things work, but they don't work that way any more, that things start to get frustrating.
> As you live through more UI churn, the less you're willing to invest time into each new UI generation or fad.
Hell yes. In most instances, I'm way beyond this point at only 30 years old. Come to think of it, it actually makes me wonder how I will feel about UIs at 50 or 60 years old.
My suspicion is that Apple’s design mentality is to strive toward the minimalist ideal of an interface in which everything necessary fits on one (phone) screen, and users are acclimated to ask Siri for anything/everything else, e.g. “Siri show me a print/download dialog”
I use Google Assistant for the only thing it's good for: setting a timer. Even then, for some reason, half the time I say "ten minute timer", it hears only "ten minutes" and for some reason thinks I want it to search youtube for a video of a ten minute countdown. I mean, that's completely insane.
Dear Google developers: if you decide to randomly delete a somewhat irrelevant word from a search query and you're left with "ten minute", it's far more likely that a person wants a ten minute timer than that they want to search your database of videos for ten minutes. Also, why do you search youtube instead of the web? Are you doing this specifically to irritate the hell out of me?
Yep, setting timers is easily 95% of my requests to Alexa. The other 5% is either asking about the weather, or joke questions like "how many fathoms is it to Proxima Centauri?"
>I think people in the SV/HN bubble play the "what about the elderly?" card too often, because we're afraid to admit that we, too, don't know how stuff works anymore.
Perhaps, but even when I don't know how an app works I can usually figure it out, either by trying different things or just googling. Older generations tend to be afraid of "breaking" the device and avoid doing anything they don't already know.
No, they're not "afraid of 'breaking' the device", they simply don't have time for that shit. And there is absolutely no excuse for that shit. If you're writing the latest descendant of Myst, sure, go for it. If it's not an adventure game, there should be no essence of adventure game in the ingredients list. At all. Ever.
>No, they're not "afraid of 'breaking' the device", they simply don't have time for that shit.
When the older folks in my life tell me exactly that, I take their word for it. They usually have no issue being brutally honest about other aspects of life.
> I think people in the SV/HN bubble play the "what about the elderly?" card too often, because we're afraid to admit that we, too, don't know how stuff works anymore.
I suspect I don't know some crucial (in Apple designers' mind) swipes on my iOS devices, because they're just hard to discover.
There's a language to UIs that evolves over time. The menu hamburger, the on/off circle with a line. Take those back a decade or more and you would have the same problems, but eventually they percolate through people's experiences and expectations and you can rely on people knowing what they are now, for the most part (it helps that the on/off icon is used for physical devices too).
Pull-down-for-search is something that maybe Apple is trying to cement (and only for mobile, I guess?), but it hasn't crossed (and maybe won't) to Android devices. Apple is known for doing stuff like this and pushing changes; sometimes it takes, sometimes it doesn't (e.g. no floppy in laptops, no CD-ROM in laptops, no mic jack, etc.).
The reason why they pull the "what about the elderly?" card is simple. Most of us can figure it out, even if it's stupid. My parents however have a much, much harder time trying to figure out how to do something that is counter-intuitive. It's more about problem solving and experience with existing user interfaces than anything else. Designers nowadays definitely design things in a manner that suggests they assume some pre-existing knowledge of UI patterns.
I have this complaint about iOS all the time: the "clean look" often leads to grids that exactly match the available viewport, and without scrollbars you have no idea if there is more to scroll without idly swiping things around.
I notice that becomes something of a tic of iOS users, if you watch others, in that just about every new screen or panel there's often at least a squiggle of pushing the screen or panel around just to figure out the boundaries.
One of the things I thought was brilliant about early Windows 8-era "metro" design: it was the only era in which Windows briefly experimented with going scrollbar-free (it went back to scrollbar "light" soon after), and when they did so they absolutely made sure that the application grids never lined up with the viewport. If there was something to scroll, it would stick out a bit, always that little hint that there was something more further down or to the right.
It got some flak, especially from macOS design fans, for not being "clean enough", but you'd have apps with no visible scrollbar and you knew whether or not there was anything to scroll just looking at them. You didn't have to squiggle your finger around (or fiddle with the scroll wheel) to check.
Oh so I'm not alone (iOS user here). There have been parts of UIs I have totally failed to notice because I didn't realise you could scroll, because it didn't look like anything was beyond the screen.
"Once upon a time, Apple was known for designing easy-to-use, easy-to-understand products. It was a champion of the graphical user interface, where it is always possible to discover what actions are possible, clearly see how to select that action, receive unambiguous feedback as to the results of that action, and have the power to reverse that action–to undo it–if the result is not what was intended.
No more. Now, although the products are indeed even more beautiful than before, that beauty has come at a great price. Gone are the fundamental principles of good design: discoverability, feedback, recovery, and so on. Instead, Apple has, in striving for beauty, created fonts that are so small or thin, coupled with low contrast, that they are difficult or impossible for many people with normal vision to read. We have obscure gestures that are beyond even the developer’s ability to remember. We have great features that most people don’t realize exist.
The products, especially those built on iOS, Apple’s operating system for mobile devices, no longer follow the well-known, well-established principles of design that Apple developed several decades ago. These principles, based on experimental science as well as common sense, opened up the power of computing to several generations, establishing Apple’s well-deserved reputation for understandability and ease of use. Alas, Apple has abandoned many of these principles. True, Apple’s design guidelines for developers for both iOS and the Mac OS X still pay token homage to the principles, but, inside Apple, many of the principles are no longer practiced at all. Apple has lost its way, driven by concern for style and appearance at the expense of understandability and usage."
When my girlfriend got an iMac I spent a minute or two trying to find the power button. I think it’s a good example of form over function. You can’t even turn the damn thing on.
The part about recovery and the lack of a universal undo/back function and instead offloading it to the developer, resulting in an inconsistent mess really drives the point home.
What's most telling is how even Apple often does an astonishingly bad job at it - it really sets the precedent regarding diminished importance.
As an exercise: Browse the App Store for a while and count the number of different ways it implements "go back one step".
It's funny how when the world ran at 640x480 we had scroll bars, separators and buttons with 3D effects, but nowadays with higher resolutions and bigger screens designers strive for the ultimate clean design. Hidden elements that appear when certain gestures are performed, flat buttons that are indistinguishable from labels/text with a colored background, borders and separators get removed or turned into barely visible thin lines.
One time, I thought I was going crazy because I could see a file with ls, but it wasn't in my Finder window. The problem was the hidden scrollbars, and the file was outside the window.
I changed the setting so macOS always shows scroll bars, but I have no idea how someone thought hiding them was a good idea.
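(If you prefer the terminal, the same setting is a one-liner; the key accepts Automatic, WhenScrolling, or Always:)

    # force macOS to always show scroll bars, same as the
    # System Preferences > General setting
    defaults write NSGlobalDomain AppleShowScrollBars -string "Always"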
The first thing I thought of while reading your comment were graphical adventure games, where you pretty much had to click on everything to see what did something.
The fundamental difference with modern user interfaces is that you now have to tap, drag your finger across the screen in different directions, drag both fingers across the screen in different directions, pinch, unpinch, and whatever other gestures seem to be relevant to the situation. For good measure, try gestures that don't seem to be relevant either because the software may surprise you!
Now I know that a lot of these user interface decisions do offer value, but they are only valuable if you know that they exist.
I use this every single day on my locked-down work laptop. It fixes so many sites! I recommend it as often as I can, and I'm surprised that its use is not more widespread.
Here's that code, written by someone who has used JavaScript in the last 10 years:
    (function () {
      // walk every element in the body and hide anything whose computed
      // position is fixed or sticky (cookie bars, floating navs, banners)
      for (const node of document.querySelectorAll('body *')) {
        if (['fixed', 'sticky'].includes(getComputedStyle(node).position)) {
          node.style.display = 'none'
          // node.remove() // harsher alternative: detach the node entirely
        }
      }
    })()
I found tree mutation too aggressive in the first site I tested it on (the site's JS code expected the navbar to exist, so it just blew up with runtime errors). Simply changing the display property is sufficient.
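For anyone wanting it on a bookmark, the same code collapses into a javascript: URL:

    javascript:(function(){for(const n of document.querySelectorAll('body *')){if(['fixed','sticky'].includes(getComputedStyle(n).position))n.style.display='none'}})()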
Ah, thanks, looks much better! Yeah, I just took the original and quick'n'dirty-added the second condition, all the while thinking "hmm..." - and quickly losing interest again. :)
Regarding tree mutation - I have to admit, it gives me some petty satisfaction looking at the errors thinking "Where is your sticky NOW!"
I use this probably a hundred times a day :( It makes me so sad.
I've considered writing an extension to kill them all by default, but I'm afraid I'll break some site in a way that really confuses me and end up wasting more time figuring out what happened than I save with the extension.
This is great, but to use it I need to sticky the bookmarks bar to the top of my screen when vertical space is already precious. Does anybody know something else that does the same thing without this issue?
Firefox has a bookmarks sidebar that pops up if you press Ctrl+B (or buried inside the hamburger menu, but Ctrl+B is the easiest way to access it). You can put the bookmark in there, and then only open the bookmarks menu when you want to access it.
At least in Safari, your favourite bookmarks are accessible by ⌘⌥#. I've got Kill Sticky Headers on ⌘⌥1, so it takes no screen space but is just a keypress (or three) away at any time.
> For anyone (cough, elderly parents) who isn't adept at discovering hidden features.
I'm in my mid-30s and I find Apple awful at discoverability.
Take using Apple Pay. I don't use it often, but covid has had me wanting to. I can't ever seem to remember what sort of hand waving I need to do to get it open. I seem to occasionally bring it up when I don't want to. (It's triple press the home button when the display is off, long press is Siri.)
30 here. On my iPad, if I bring up an app in the floating view, I often have to rediscover how to dismiss it. I just tried to do it now and this is how it went:
My first instinct was to flick it up from the bottom, but that brings up the recents switcher. Next instinct was to try flicking it off to the right, but it just moves a bit and springs back. While trying more times to fight the spring I noticed a short bar at the top and realized it was probably a drag handle. Grabbed that and tried to flick it downwards. No luck, but it moved in a way that made it seem like dragging down more would do it, but nope of course that wasn't it either. How about a flick to the right? Nope. Dragging it to the right? Agh! It's trying to dock into split screen mode! I dragged it around some more hoping for different behavior before dropping it back into place and again trying a flick to the right using the handle at the top. And that FINALLY did it. Apparently I didn't put enough enthusiasm into the first flick I tried... What a pain in the ass.
I don't like how sensitive to speed some of the actions are. I have similar issues with bringing up the dock. I usually end up going to the home screen instead.
I struggled with this too. You can use the mnemonic that it behaves like an iPhone (the new ones without the home button) on top of your iPad apps. The gestures are the same then: swiping from the bottom of the floating window behaves the same as it does on an iPhone screen. Not very intuitive, but I guess it's the same framework.
Good games are pretty good at discoverability like this. I remember playing GTA IV or V: if I hadn't been using some skill, it would surface tips. Apple has to be slightly more careful because it can't just spam you with "Double-tap to Apple Pay", but recognizing that you just used Apple Pay, yet unlocked your phone and did it from Wallet, might mean there's a new skill they can show you.
I was thinking about this as I recently started playing "Ultimate NES Remix". It's built from quick mini-games made out of old NES games, and these little mini-games really make good tutorials on how to actually play the real games they are taken from.
No, gotta put your credit card info into the wallet app before use as it needs to verify it's your account. Also, Apple Pay doesn't accept all cards. My main card isn't accepted, which is sad.
Way before I ever tried it, I had seen videos of RFID "contactless" demos where you tap the device on the machine, like at a gas pump. The icon usually gives you a clue to tap as well, but I suppose you have to imagine it's even a theoretical possibility first.
I don't think there is a way to make it discoverable without a big sign saying "tap here ↓", not from the phone anyway. It's just something children will see and in the future never think twice about.
Especially with the usual screen ratios nowadays, the huge screen width goes unused 99% of the time anyway. So there's only a 1% incentive from the usability side to remove the right vertical scrollbar...
As a matter of fact, in my Firefox I actually use vertical tabs stacked on the left, to spare some vertical space on the laptop screen.
That means that even if I run Safari full-screen, which is inconvenient, I have only 680 vertical pixels for content.
It's amazing how many web sites I run into that throw up fixed-position pop-ups that are bigger than that, and I can't close them or scroll to see their content because the designers or middle managers decided that every person on the planet is rocking a 27-inch 5K display.
Usually I just close the window and never visit the site again. If it's important, I break out the dev tools and start invoking display:none until I hit the right element.
If I might ask, what are you using for vertical tabs? I haven't found a good add-on that shows the tabs on the side and also hides the tabs on the top.
Giving the user clear feedback about where they are in a page and how to modify their location should not "increase cognitive noise". It should in fact offload cognitive load from the user onto the interface, leaving more room, so to speak, for the user to use for themselves.
It’s so strange to me to read these kind of comments, and to even see this article.
I detest scroll bars. When I work on my windows machine I cannot stand seeing them plastered all over the screen.
It’s actually quite fascinating to read all these comments - it’s so weird to see people complaining about how they don’t know when a scroll is available. Just scroll and see!
No negative feelings toward people who prefer them - it’s actually quite interesting to see the comments, I assumed everyone agreed they were better hidden until today! (Which, as a Mac user, is obviously part of the problem being expressed!)
> how they don’t know when a scroll is available. Just scroll and see!
Why should I try scrolling everywhere all the time instead of having the UI indicate to me that there is more content? Especially if it isn't clear by having a word or sentence cut of. Maybe the scroll area ends with a paragraph and it's not clear there is more. Why put aesthetics over usability?
Some other comments here claim iPhone users develop some kind of swipe tic where they instinctively try to scroll around on every screen that opens. Would be a pretty sad testimony to this UI concept.
Because some things are scrollable and it’s usually fairly obvious if there’s going to be more content.
If it’s not clear to you, I totally get why you think that.
But I don’t suffer the same issue.
Why should I have to have scrollbars plastered all over the show to tell me that something is scrollable when I can check it myself in less than a second?
I understand why people disagree - why is it so hard for you to understand that I disagree?
Generally it feels like the pro scroll bar opinion is actually quite aggressive, needlessly so.
I've been super annoyed trying to grab the thumb on the scroll bar in a super long PDF. It keeps disappearing, and on long PDFs where I want to scroll to, say, the middle, it vanishes before I can precisely grab it. It's maddening sometimes.
I resort to the down key to make the scrollbar appear without losing my position in the doc. In fact, in Preview, my default workflow is to click on the main content panel as soon as the doc opens; otherwise the down key might take me to the next page, which defeats the purpose.
Discoverability is also a fascinating challenge for those home voice-control devices. What exactly can an Alexa do? Did it get any new features recently? Who could say? You could maybe get an email telling you about its cool new ability to rename timers or something, but other than explicitly reading about the new features in a manual or newsletter, how would you possibly think to ask?
I feel the same way about other "unspecified functionality" services, like fancy hotels with "personal butlers." What does the butler do? "Whatever you need" is not an answer. I don't have familiarity with butlers, I don't know what is and is not a reasonable request.
There has always been a struggle in UX design: people want simplicity, but at the same time they want clarity and complexity of ideas. People want great knowledge, but they don't want to read about the nuances that make it great knowledge.
There are valid reasons for both sides of the equation; it is just difficult to find the balance to satisfy the myriad of different perspectives.
Please don't blame people for wanting these things when it is "the design" department that forces the changes on them.
How about people (designers) start thinking about their users instead of strictly adhering to someone else's opinion about Material Design. I hope we one day see Material Design the way we see touch screens in vehicles: an embarrassing mistake at best and potentially fatal at worst.
Design should always serve the user. If it doesn't, it needs to go away.
I think we may be seeing an organizational issue at work. The job of a design department is to produce design. If there isn't enough new stuff to design, they will redesign old stuff. This serves to advance the critical goal to the design department of justifying their continued existence.
The number of designers who will march up to their VP and boldly declare that all the company's apps are just fine and don't need any kind of redesigning is small.
You get a similar thing any time a development team runs out of feature work or actual stress points in an application they maintain. They wind up reworking the thing mainly to have something to do to justify the continued paychecks until the business thinks of an actual reason.
Combine that with designers' need to "make the experience unique" to a level where consistency goes out of the window. Every piece of commercial software these days has its own idea of design, and it gets irritatingly hard to learn a new set of usage patterns for each.
Granted, the developer side of the story might happen as well, but I'm guessing not that much (all the dev teams I've worked in saw an overloaded pipeline of work, so it is very hard for me to imagine otherwise).
> it is just difficult to find the balance to satisfy the myriad of different perspectives.
We used to achieve this balance with a manual.
A printed manual. Not a URL printed in 3 point Helvetica in light gray on a white background on a tiny leaflet in amongst a dozen similar-looking regulatory leaflets that will all go into the recycling bin when the new shiny shows up.
So just a couple of days ago I decided to revamp my VPS VM host. I ended up putting Ubuntu on a VM.
So I connect to it via RDP for the first time, and it is using XFCE. I'm not a big user of the desktop, I have no idea what the choices are or what features are offered in Gnome or KDE, or whatever. I just sometimes need a GUI to do some random things.
I open a window, and decide I need to resize it.
There's no cursor change when you move the mouse over the edge or the corner; it just doesn't let you resize the window. There's no three-line thing in the corner either, so you just can't grab the corner to resize it.
I ended up having to google that you need to Alt+right-drag to do this.
I don't know why that would be the default. It seems crazy.
This is how my WM (dwm) works (except I patched it to use Super instead of Alt), and it’s a better way of resizing (not requiring such finicky mouse placement). But that is really weird to hear XFCE doesn’t allow both methods like JWM (and I would have assumed, any WM not aimed at relatively extreme minimalists and power users), I wonder if it’s a bug.
> But that is really weird to hear XFCE doesn’t allow both methods
Don't blame Xfce for that but Ubuntu. For some reason they decided to make the resize area 1 pixel wide. It's specific to recent versions of Ubuntu. It worked fine on LTS 16 (or was it 14?) and on Debian.
There are several ways to fix this without needing to be aware of the window's off-screen position.
- From the keyboard, WIN+{Left|Right} resizes and pins the window to the side of the screen.
- From the keyboard, WIN+Up maximizes the window (or right-click on the task bar and select maximize). Then drag the title bar from the top, and the window will unpin from the screen and move with the mouse.
- Right-click on the task bar and choose any of the window arrangement options (stack, tile, cascade).
Scroll bars weren't really "hidden", though; they were obviated.
At least on macOS, all the mouse-like input peripherals that Apple will sell you (both the Magic Mouse and Magic Trackpad), and all the inputs for their laptops, have two-finger "natural" scrolling.
Apple seems to expect/assume that you've scrolled a touchscreen at some point in your life (in fact, many people have scrolled more touchscreens than desktop computers at this point—including many old people!) and has built the OS around the idea that, just like on a touchscreen, scrolling is a gesture that you can attempt to apply anywhere, whether there's an affordance for it built into the app or not—i.e. that scrollability is a universal, system-level operation, not something up to the application. Every application in macOS/iOS is inherently built in terms of having some kind of "viewport" with a "document" loaded into it; and scrolling moves the "document" around within the "viewport." Even if that's a pointless thing to do, it still attempts to perform the operation.
(Though in terms of feedback, iOS has a more coherent responsivity to "attempted" scrolling than macOS does. You can "tug" at the top and bottom edges of the screen in pretty much any iOS app and get some extra out-of-document blank space, with a snapback when you let go. In macOS apps, though, this only happens for "content" regions; for "chrome" regions—e.g. the top-level icon listing in System Preferences—the region will only have snapback if the window is small enough for the region to be scrollable. If the window is large enough that everything is presented, your scroll gestures just go unacknowledged, as if the window wasn't a "document" in a "viewport" at all. Interestingly, you also can't resize such all-chrome windows; you only ever get a scrollbar on them if running on a computer with a too-low display resolution.)
What is missing, though, is a visual indication of whether scrolling will do anything, before you try it. Scrollbars used to help with this, yes. But the fix isn't simply reintroducing them. The world scrollbars were built for no longer exists: both web apps and modern native apps now do progressive loading/"infinite scroll", where the view-controller isn't necessarily aware of whether more content will be discovered when it tries to demand another chunk of data from its backend. Even when you force-enable the scrollbar, the size and relative position of the "scroll-thumb" now communicates no information about your "actual" scroll position in many apps.
As well, there are now real infinite-canvas apps (like Maps) where a scroll "position" wouldn't even be a coherent concept, because the document under the viewport is a self-connected torus. Universal gesture scrolling adapts well to this concept; scrollbars don't.
IMHO, I'd like to see a slight fade-to-black or 3D "bend away from camera" applied to the inner edges of the scrollable viewport, whenever the document in the viewport is not sitting flush with that edge of the viewport. This would provide a visual affordance of "scrollability" without providing any often-misleading information of relative scroll-position. Infinite-scroll documents would just always be "faded out" on all edges. Seems obvious?
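As a rough sketch of the idea, in script form (the .faded-top/.faded-bottom classes that would render the fade are hypothetical, as is the selector):

    // Toggle the (hypothetical) fade classes on a viewport element whenever
    // the document inside it is not flush with that edge of the viewport.
    function updateScrollAffordance(viewport: HTMLElement): void {
      const { scrollTop, scrollHeight, clientHeight } = viewport;
      viewport.classList.toggle('faded-top', scrollTop > 0);
      viewport.classList.toggle('faded-bottom',
        scrollTop + clientHeight < scrollHeight);
    }

    const view = document.querySelector<HTMLElement>('.viewport'); // made-up selector
    if (view) {
      view.addEventListener('scroll', () => updateScrollAffordance(view));
      updateScrollAffordance(view); // set the initial state
    }

An infinite-scroll document keeps reporting more scrollHeight than fits, so its bottom edge simply stays faded, which is exactly the point.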
> I don't really need the small sliver of menu space in PDF view to be reclaimed -- and for what, a "clean" look?
Nah, it's for shitty low-end laptops that still to this day have 1366x768 displays. Stick a title/tab bar, tool bar, and maybe an always-on bookmarks bar on top of the window, and an always-on taskbar at the bottom, and you'll find that the PDF only gets about 400px of viewport real-estate. And now you want the PDF's own controls to steal more of that? There's a reason that Chrome first made the status bar into an overlay, and then merged the title bar with the tab bar — it's trying to reclaim vertical space for exactly these constrained scenarios.
> it's for shitty low-end laptops that still to this day have 1366x768 displays.
We had scrollbars on 640x480 both ways and liked it. Even SGIs with 1280x1024 monitors had scrollbars, and it wasn't a burden that some folks today seem to think it was.
The UIs of the OSes designed in the 640x480 era used 8pt system fonts, and were very careful about "spending" vertical space. The UIs of the OSes of today are not optimized for the same scenarios.
Here's what Windows XP (not even really from the 640x480 era!) looks like on a modern display: https://ibb.co/W5mVgHT
You could fit a lot of Windows XP, and XP-era apps, on a 1366x768 display. You can't fit much of a modern OS+apps on one.
(That's honestly for the best; text and icons in modern OSes are both a lot more legible due to the increased fidelity and breathing room they have. But display resolutions have to keep up; and, at least for low-end PCs, they aren't.)
What you're showing here is a lack of high-DPI scaling, and it's a completely different issue, irrelevant to the GP's point: it's caused by increased pixel density, not screen space (unless your example comes from some 40" monitor or so). To make an honest comparison you would have to set the output resolution to a lower value.
When done right, increasing screen density doesn't really affect available space; it just makes everything sharper (unless you manually reconfigure it to get more space, like I do on my hidpi laptop).
Okay, maybe running XP natively on a 4K display was a bit silly. But here's XP in 1080p: https://ibb.co/BB5rsHg
UI elements are still far smaller than comparable ones in Windows 10 or macOS, when those are running at 1080p unscaled. On XP, IE and Explorer both manage to fit four toolbars (rather, a title bar + three toolbars) in 105px of vertical space. On my own macOS Catalina install, Chrome manages to fit only two toolbars in 86 device-independent pixels of vertical space.
Likewise, just consider the fact that people weren't generally looking at PDFs back in the XP era, but rather plaintext, which—as rendered on this image—was exactly 8px tall. A lot of 8px-tall text fits on a 1080p display, no matter how many toolbars you have; and a lot of 8px-tall text will even fit on a 640x480 display, if you make efficient use of the vertical space. But because PDFs use vector fonts, not bitmap fonts, they will look unreadably muddy if you attempt to scale one down such that the text only uses 8px of vertical height. You need more vertical space to look at PDFs.
So, in combination, when you throw a Windows 10, running Google Chrome, displaying a PDF, on a 768p display, it's just an ugly constrained problem.
As mentioned, the DPI reasons you described are somewhat unrelated. But there is one factor, the move from 4:3 to 16:9 monitors, which is less efficient in terms of usable area.
Given that most scrollbars are vertical, however, it isn't a big deal. I've gained more vertical space from putting my taskbar on the left (when 16:9 became ubiquitous) than from any other change.
Speaking of title bars, the trend to move content into the title bar is terrible for usability. There is nowhere left to click to move a window. Not to mention each app choosing a different color/style for this region. Do designers assume everyone runs 1 full screen window at a time?
And because their designs make windowed use problematic (both because of the title bar issues, and because space is used poorly in the name of "clean design"), they encourage even full screen use.
Add to that the fact that more than a few applications default to full screen at first run, and ...
Let's talk about the new Slack. The crowded title bar (with the hamburger icon, the search box, and now even the profile icon) means you can only grab it for dragging in two specific zones. Even worse, the window cannot be dragged at all when search is active, as the title bar gets completely overshadowed by the search popup. Absolute trash of a UI.
IntelliJ IDEA also fell victim to this trend of abusing the title bar, but they are reverting the change and providing a setting after listening to the inevitable criticism: https://youtrack.jetbrains.com/issue/IDEA-219212
I think you're comparing apples and pears when you talk about Maps. There's a clear use case for scrollbars on vertical documents (i.e. most of them), and even when viewing something in an infinite scroll, so you know where you are when scrolling back up. (Although don't get me started on the UX of infinite scrolling!)
Edit: Thinking further about this, the affordance for maps is the change of the cursor to a grabbing hand. You have to learn that, but there's a clear visual difference in the cursor to show that something different happens there.
I got bit by infinite scrolling yesterday. I liked an article, clicked in the URL, copied and pasted it into a group chat with my team... and then noticed it was the wrong URL. The fine website had "helpfully" autoscrolled a smidge into the next article, so it updated the URL box to the next article. Lesson learned-- read before pasting...
You might resent the autoscroll on the website, but did you consider that the group chat program also has an infinite scroll, for chat history? Imagine what a hypothetical scrollbar for the group-chat program would look like if it had one, and if that scrollbar actually represented your relative position in a complete timeline going back to when the chat-group was created. :)
I have such a display. It is quite spacious once you remove all the decorations [1]: no window decorations (xmonad), hidden URL bar and browser tabs [2], no page scrollbars [3]. I encourage everyone to try at least the browser addons.
> shitty low-end laptops that still to this day have 1366x768 displays
So users should purchase new equipment to suit designers? Or maybe - crazy idea, I know - designers should design for the equipment people actually have and not whine or look down their noses because not everyone makes designer salaries.
I'm not talking about shitty old laptops. I'm talking about shitty new laptops. There are laptops you can go into an electronics store and buy right now that have 1366x768 displays. And that's ridiculous.
Yes, these laptops are cheaper. But putting a 1080p panel in these laptops wouldn't be that much more expensive. Maybe an extra $0.50 on BOM, compared to the current design. Or, in fact, it might have exactly the same upstream cost. Why, then, use the shittier panel?
The 1366x768 display is purely there for market segmentation. It's an artificial constraint hardware makers impose on their low-spec devices, in order to make them unattractive to people with higher budgets.
A lot of these low-spec devices would be just fine for the small amounts of work many people have to do on computers, and so these people would buy them if they only needed "a little bit of" computer, even if they could afford something more expensive. (Just like you buy a $50 blender, not a $500 blender, if you're only making piña coladas.)
But the people who would consider these low-spec devices (despite having the budget for higher-spec devices as well), are pushed away from the low end, by these artificially-imposed pain points.
And that means that the people who do only have the budget for a low-spec laptop, are getting an artificially-imposed screwing, getting a shittier laptop than their money would buy them in an efficient market, purely because the supplier went to extra design effort to make their low-end products actively repel middle-end customers.
And taken in that lens, the fact that modern OSes don't work well on 1366x768 displays is actually kind of the point. It's not something the OS manufacturer can fix on their end. Because this would be a vicious cycle: if the OS began to work just fine on a 1366x768 display, then the laptop mfgrs would design their next series of low-end laptops to have even smaller displays, in order to re-introduce the market-segmenting "cramped feeling."
> where the view-controller isn't necessarily aware
That's a really poor excuse. If that "awareness" is missing then the code is structured wrong, probably because somebody cargo-culted a design pattern without ever once thinking of the user.
No? Infinite scroll is usually just a direct translation of SQL cursor semantics into the frontend. And an SQL cursor (or its equivalent for a distributed store) is an optimal choice for a situation where:
1. there's a potentially-huge result set;
2. almost nobody ever wants to see the whole result set, but rather almost always just wants to see the first N chunks, and then drops off;
3. the results are sourced from a data lake, or from eventually-consistent geographic shards, or any other process where you need to actively gather results together with a map-reduce.
When these three factors apply, it becomes very expensive to know exactly how many results you will have, because that changes your partial streaming map-reduce workload into a complete map-reduce workload, over potentially billions of records, just to validate their inclusion and then count them.
Think "Twitter timeline." If every client displaying a Twitter timeline needed to know in advance how many tweets they could ever see, total, if they "scrolled all the way back" — then Twitter's servers would fall over from the load of calculating that number.
Another example is Google Search, which, while not "infinite scroll" on the client, is still a "map-reduced stream" on the backend. Google has put some extra effort into heuristic scheduling logic for its Search map-reduce: it grabs an initial 20-page-or-so chunk of the stream and caches it on a sticky-session node for you. This means that, if your search-result set is less than 20 pages, you get to know the actual result-set size. If it's more than 20 pages, though, Google Search reverts to exactly the same "you'll only know when you're at the end when you get there" semantics of SQL cursors. You see the first 20 pages, with a "next" arrow to go beyond them; and then it just starts counting up, and up, and up...
That excuse doesn't work at all, and you even provided the counterexample yourself. If you know the data fits on one screen, which you know very quickly, skip the scrollbar. If you don't know, add one. All you need from the API is an indication of whether you're already at the end, which you probably already have so you can know whether you can/should call again. You might not know the precise number of results to size the "thumb" on the scrollbar, but that's OK. Just guess and nobody will be bothered too much. But if you don't know whether you have even one more result, then your system is broken.
I'm glad I work in infra, not apps. Infra developers certainly make their own share of dumb decisions, but they're not that lazy and sloppy and user-blaming.
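To be concrete, the "one more result" indication I mean is the standard limit+1 trick; no counting involved. A sketch, where queryBackend and the row shape are stand-ins rather than any real API:

    // Keyset-pagination sketch: ask the cursor for one row more than we
    // intend to show; if we get it, at least one more page exists.
    interface Row { id: number; body: string; }

    // Hypothetical backend call, e.g. translating to:
    //   SELECT id, body FROM items
    //   WHERE $afterId IS NULL OR id < $afterId
    //   ORDER BY id DESC LIMIT $limit + 1
    declare function queryBackend(afterId: number | null, limit: number): Promise<Row[]>;

    async function fetchPage(afterId: number | null, limit: number):
        Promise<{ rows: Row[]; hasMore: boolean }> {
      const fetched = await queryBackend(afterId, limit + 1);
      return { rows: fetched.slice(0, limit), hasMore: fetched.length > limit };
    }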
I’m not in infra or apps; I’m a database engineer. I write ETL pipelines.
If you know you have one more result, that is necessarily because your data pipeline actually has that result available for it to count; i.e. the result record/tuple has been loaded into the database’s memory, and the database has determined that that record/tuple is valid and fresh. (And at that point, rather than counting, the DB may as well send you that record itself. Just counting it has already required almost all of the same work!)
Remember that MVCC exists. You can’t know how much of something you have as of a given instant without doing version deduplication/application of tombstone records. This is the reason that COUNT() in Postgres takes minutes/hours on large (>1bn records) partitioned tables: you have to actually visit records, to see whether they’re still part of the current MVCC transaction-version, and therefore whether they should be contributors to the current count.
That applies whether or not you’re “counting” or actually streaming results. Given the architecture of both traditional data-warehouses — and of the map-reduce systems like Hadoop that are used to do reporting on data-lake data — you can’t know whether anything that’s in the rest of the data set is going to actually exist when you get to it. Your data warehouse might have a 100GB heap of data in a table, but everything after the first 1GB of it is dead tuples, such that after you’ve streamed the first 1GB of results, the rest of the streaming consists of the data warehouse sitting there silently for a minute or two (as it checks the liveness of those tuples) before saying “okay, nothing more, we’re done.”
And because of this, it’s not about precision. You can’t even guess. You can’t know whether you have one more result, or a billion more. Until you actually check them.
Yes, OLAP systems are different. OLAP systems operate in terms of infrequent batch inserts, giving the system time to build indices, generate counts, etc. in-between, that will all stay valid up until the time of the next batch insert. An index is, in a sense, a pre-baked answer of the “set of live tuple-versions” that a data-warehouse is holding. Count the table? Just return the size of the index. If you’ve only ever built OLAP-oriented systems, maybe it feels like these are the “simple, obvious” solutions to this problem.
But none of the things we’re talking about — global-web search engines, social-network timelines, marketplace listings — are OLAP systems. They’re OLTP. They constantly get new results in, and people expect to be able to see freshly-inserted data in the results as soon as they insert it. Data comes in at too high a rate to generate “dataset snapshots” à la ElasticSearch. The pipeline has to deal with data as it comes, doing as little to it as possible so that it can ingest it all at the ridiculous rates required, by pushing all the work of validating tuple liveness/freshness off to query-time.
And given that, OLAP properties don’t obtain in such systems. It’s basically the CAP theorem at work: Consistency (and cross-shard index-building) requires time for the system to investigate itself; and you can’t get Availability (and/or cross-shard freshness) unless you run the system too fast to allow for that time.
But srsly, that's not a snide comment but rather a hint that your personal sensibility may not reflect broad sensibilities. And of course, there may not even be a single broad sensibility -- look at the political parties in the US, both the major divisions and the subdivisions within each. Sometimes, the maker of something has to make a design choice consistent with their [market success proven] direction.
> I don't really need the small sliver of menu space in PDF view to be reclaimed.
When I'm on my laptop, I do. Every inch of vertical space is precious. On my display monitor, not so much, but having the buttons reveal rather than permanent doesn't detract in the slightest.
> For anyone (cough, elderly parents)
For a long time, I believed these were problems the elderly somehow could not figure out, and I blamed them for it.
Then I opened Adobe After Effects for the first time and it suddenly made sense to me, the UI I found intuitive was just years of practice. How is an x in a corner a close button? What's so intuitive about swiping up to see a menu?
Another such thing: please try scrolling your site with a mouse wheel from time to time, even if you normally use a touchpad (on your laptop or an external one)!
Lots of fancy homepages with scroll animations are awful on anything that doesn't scroll as smoothly as a MacBook Pro touchpad.
Oh wow, so that’s what’s making people create those utterly horrible "fancy experience" sites. I always wondered if they never use their own site. Turns out they do, on one specific device that has a bit of a special snowflake thing going on.
This highlights the bigger issue of web developers and designers only working on the latest and greatest hardware with the fastest performance and the best user experience. Their users may not have access to the same hardware, while developers and designers are totally blind to how their software might perform on anything but a recent model MacBook Pro or XPS.
"to be fair" - on Apple's site they arent controlling how the user's scroll. They aren't scroll hijacking, they're just progressing video playback based on the native scroll position.
It's still janky (and buggy?) as hell on Chrome on Windows with a mouse with a scroll wheel.
Those kinds of pages are janky on macOS with a low-sensitivity scroll wheel too. They're really satisfying on touchscreens and touchpads; I wonder if it's possible to make them smooth on devices with junky mice without implementing the equally-annoying smooth scroll hijacking that is popular on some sites.
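For anyone curious, these pages generally just map scroll position onto video.currentTime, and I suspect the wheel jank could at least be softened by easing toward the target instead of snapping. A sketch; the selector, easing factor, and the whole approach are my guesses, and a real page would need to throttle seeks more carefully:

    // Scroll-scrubbed video with per-frame easing to smooth "clicky" wheels.
    const video = document.querySelector<HTMLVideoElement>('#scroll-video')!;
    let target = 0;

    window.addEventListener('scroll', () => {
      const max = document.documentElement.scrollHeight - window.innerHeight;
      target = (window.scrollY / max) * (video.duration || 0);
    });

    function tick(): void {
      // Move 15% of the remaining distance each frame instead of snapping.
      video.currentTime += (target - video.currentTime) * 0.15;
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);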
I'm definitely in the mouse club. Using a touchpad all day long makes my fingers cramp up. It's fine for couch laptopping, but when I'm sitting at a desk I use a display, mouse, and mechanical keyboard for an overall better (and more ergonomic) experience.
I would expect devs who use macbooks to generally use them the same way as the general population generally uses them. I can't think of a plausible reason to suspect that developers specifically might prefer traditional mice.
One demographic I would expect traditional mouse usage from would be gamers. When I play a first-person game I always use my trusty old IntelliMouse. Otherwise, I use the trackpad unless I'm simultaneously using my mechanical keyboard, but if my keyboard had an attached trackpad I'd probably be using that. But mousing in general just isn't very important to my workflows, and trackpads work just as well as mice for nearly all tasks. Arguably they work better for some, since you don't have to move your hand far from the laptop's keyboard to use the trackpad, while using a dedicated mouse means taking your hand completely off the keyboard.
Using mouse buttons (both RMB and LMB) at the same time while also independently moving the cursor.
It's rare that you want to hold both mouse buttons, or hold RMB while moving the cursor, but when you do, a mouse wins.
Source: I can play first-person games quite decently on my MacBook using the trackpad, but holding the buttons while aiming is the hardest part.
Why would I subject myself to such misery? Playing games on the train.
> I can't think of a plausible reason to suspect that developers specifically might prefer traditional mice.
Neither can I, actually. I work with my MacBook plugged into a large monitor with an external keyboard and mouse 99% of the time, and just assumed everyone else did as well without really thinking about it.
> I can't think of a plausible reason to suspect that developers specifically might prefer traditional mice.
I imagine that developers would prefer mice if people who use computers intensively and knowledgeably prefer mice, and I suspect people who use computers intensively and knowledgeably do prefer mice, the same way that I believe we prefer good keyboards and good monitors.
I prefer good mice and good trackpads to poor mice and poor trackpads. But trackpads vs mice? Your comment insinuates that a clear good/bad dichotomy exists here, but I don't think that's generally the case. My comment above notes one context where I do believe that's true, but in the general case? For standard desktop interactions? I don't think mice are objectively better. If anything the proximity of a trackpad makes it better, but that's certainly debatable.
I feel like a good touchpad (meaning Apple) is superior to most good mice for general desktop usage. What pushes touchpads clearly ahead is when you make good use of multi-touch gestures. Using third-party tools under macOS, about half of my most commonly used keyboard shortcuts can be replaced by gestures that are quicker and easier than reaching for the keyboard. The handful of apps with native multitouch and support for haptic feedback through the touchpad are also really nice.
I still use a mouse for gaming, but I have to replace it every ~3 years because the buttons start to wear out. That has yet to happen to one of my Apple touchpads.
I do like and use multi-touch gestures a lot, though I think 'hot corners' and additional mouse buttons could probably fill the same role for me. My IntelliMouse (P/N X08-70385) has five buttons, and in the past I've mapped the two extras to forward/backward browser commands and it worked pretty well. If you can get your hands on an IntelliMouse, I highly recommend it; mine is somewhere around 15-20 years old with yellowed plastic but it's still going strong. Best product ever sold by Microsoft imho.
Forward and back is just the tip of the iceberg. I also have gestures for switching between tabs (sending Ctrl-Tab and Ctrl-Shift-Tab to applications), closing and un-closing tabs, opening a new tab, middle clicking, refreshing a page, scrolling to the top or bottom of the page, maximizing a window. And that's on top of the gestures provided natively by macOS.
For my day to day noodling around I use a track-pad. But for dev work I use a mouse. The click drag bits are wildly better with a mouse. For example text highlighting and moving things around in the GUI.
A thing I've seen is a MacBook that spends most of its time hooked up to a mouse, keyboard, and two external monitors, but occasionally gets unplugged and brought to meetings or around the office in general.
It's definitely personal preference. I love touchpads and actually use a drawing tablet (with finger support) as a large touchpad when I am at my desk. I have a mouse nearby because it works better for some things but most of the time use the touchpad.
Interesting. Despite my efforts to consider that people have different preferences for almost everything in life, I am constantly surprised by my assumptions about how other people do things that turn out to be very wrong.
It didn't even occur to me that someone who uses a computer all day long would use a touchpad, even though millions of people own one, Apple continues making them bigger, and the only thing "wrong" with them is that _I_ can't figure out how to click things on the first (or 5th) try.
My main problems with touchpads in general are that they are not smooth enough (rubbing your fingers on them for a few hours gets unpleasant), their clicky buttons need too much force to activate (which strains the finger you click with most), they are too small (so things like drag and drop often require intermediate steps), and they are not sensitive enough (so you often need to repeat gestures, particularly 2-finger scroll; tapping almost never works the first time).
Apple’s touch pads (at least the recent ones) have none of these problems and are as smooth, responsive and reliable as good smartphones. They are actually useable as a main input device, not only in a pinch.
You are not alone. I don't even consider buying any laptop other than a ThinkPad, because other laptops lack a TrackPoint and I'd be forced to use a touchpad if I don't have a mouse for some reason. I always make a mess with touchpads, Mac and non-Mac ones alike.
(Not your comment's parent) Apple making touchpads bigger is why I stick to their 13" 2015 MacBook air! When I got a recent Pro for work, it was nearly unusable from the oversized, oversensitive trackpad. Yes, even after setting all the options that (I was assured) would make it stop picking up unintended presses. When typing, I'd unavoidably brush the trackpad and cause the cursor to move somewhere and click, redirecting my input.
Like your comment's parent, I used the trackpad because I prefer doing everything from the keyboard, and so prefer the trackpad for things that require a mouse-like interface.
Not saying I can make this better for your use case, but I'm curious if you use tap to click? i.e. click with a simple touch. I love the big track pads and also hate the tap to click setting (I want to press to click) and don't have this problem.
Modern touchpads are significantly improved compared to the old generations (even from 5 years ago). It's not that touchpads are bad input devices (like they used to be), it's that I'm bad at touchpads.
I could probably get used to them if I had any reason to, but I have no idea where I would even put my laptop to use it comfortably with an external monitor. I like my mouse and I have no problems with it, so no sense in buying an external touchpad.
If I did a lot of work like scrubbing through audio/video or something that benefited from multitouch gestures like pinch-to-zoom, then maybe I'd consider it.
But even on a phone/tablet selecting text is a massive pain. Selecting stuff, dragging stuff, quickly flicking the cursor from one screen to the next: those are all things I assume every developer does at least a couple of times an hour. They are also things that, for me, are very clumsy when using a trackpad.
Yes, that's pretty terrible. I always scroll with the mouse wheel, since I'm too old to be rubbing a plate of glass and not feel like a moron. But this one made my finger hurt a bit and I didn't even make it halfway down.
Often you can middle-click-drag to scroll. That is, click the scroll wheel down like a button, hold, and you should see a scroll icon appear; move the mouse up or down (sometimes left-right as well) and it will scroll.
Some mice (Microsoft ones IME) have horribly clunky middle-click, at least the Logitech mice I normally use are softer with less travel, which I prefer.
Not all apps seem to allow middle-click on Win10. I can't recall any that didn't on KDE though.
This is broken with floating headers and footers. The scrolling goes over the whole viewport; after each pg dn-equivalent, you need to scroll back up two or three lines to see what was covered by the header/footer.
Middle click "open in new tab" breaks on lots of thing using fancy web frameworks because someone forgot to code-up support and thought "why bother with A tags. I can use divs and spans!"
Works fine on my ThinkPad in Firefox as well. But if I try to scroll with my normal mouse, it doesn't scroll smoothly, just a set amount per "click" of the scroll wheel. So it looks really laggy.
I guess it depends on your scrolling device and browser? Using my mouse wheel the animation feels as if it was jumping 20 frames every time I go up or down, which makes it really frustrating to read text content or even see what the animation is trying to demonstrate.
Wow, it's truly awful, even on an Apple trackpad or scroll mouse! Like a video except instead of pressing play once, you have to keep scrolling down to play. Why not just embed a video? I know I've seen this terrible pattern on other sites, too.
When I scroll down, I expect one thing to happen: content on the page moves from the bottom to the top, revealing previously un-seen content on the bottom. That's it. Please stop trying to make scrolling do something else!
That website froze my browser (Chrome) for about 5 seconds and then continued to scroll poorly. 2018 MBP with 32GB memory. I never really had a problem with apple's marketing pages previously. This one's really bad.
Additionally, if your mouse is over the web page when you scroll, and your scrolling hovers over some item that responds to scrolling (say a text box), the page scrolling comes to a stop or stutters until the text box finishes scrolling.
Annoying. The only fix is to hover over the scroll bar itself (and I have macOS -> System Preferences -> General -> Show scroll bars: Always).
I wonder if the designers of these systems, after becoming bored with the status quo, begin to fetishize the sleek hidden thing that the scroll bar is right now. It's like when coders have been writing algorithms for, say, 10-15 years: I bet their code gets a lot more compact and sleek looking, almost akin to tight 3-character-variable leetcode solutions. Maybe this is the same phenomenon rearing its head in the design world, except instead of compact code blocks it's compact UIs. By a certain metric the designer has pushed further (you see more of the application window because the scroll bar is hidden), but they lose sight of what is lost.
> like when coders are writing algorithms for say, 10-15 years I bet their code gets a lot more compact and sleek looking, almost akin to tight 3 character variable leet code solutions
I have never seen that to be the case unless in an environment with very heavy constraints like embedded. If anything, a competent programmer 10-15 years in would skew towards more descriptive and clear variable naming as they've been burned in the past by having to maintain some of that "leet" code.
Right on. Readability is a virtue in its own right, compactness is not.
'Cute' code that leverages language features in creative and surprising ways, generally belongs on code golf competitions, not production systems. If your code is so 'clever' that only you can understand it, that means you're a bad programmer, not a good one. (That's not to say you should avoid making appropriate use of advanced language features for fear of ignorant readers, though. That's another matter.)
Ada was far ahead of the game here, explicitly prioritising readability over writeability, in its language design.
With all that said, an experienced programmer may feel less need to write comments, as their own familiarity with the problem-domain and with the language will be well developed.
> when coders are writing algorithms for say, 10-15 years I bet their code gets a lot more compact and sleek looking
If those coders are like me their code becomes more verbose, less clever, more obvious, less dependent on the quirks of the programming language I use. The reason is that it's easier for me and the other developers in the team to read and understand what it does.
Along the way I have really hated many geeky, clever, brilliant pieces of code I found in projects I inherited. They made me and my customers lose days decoding all that brilliance.
> coders are writing algorithms for say, 10-15 years I bet their code gets a lot more compact and sleek looking, almost akin to tight 3 character variable leet code solutions etc.
What are you talking about? I've been programming a long time, and clean, readable code with clear, descriptive variable names gives me a stiffy. It suggests a more meticulous mind than mine wrote it.
You know who writes the worst Gordian-knot code? Engineers from other disciplines who learned programming as a means to do other engineering calculations. Smart guys, but it's a pain keeping track of what i1, i2, i3, and i4 mean, or their 1970s FORTRAN program flow.
Infinite scroll often makes it nearly impossible to click on the site's privacy policy or terms of service in the page footer without going into dev tools. Whenever I come across that issue I wonder whether it's a feature or a bug.
Yes, they also have the problem that they (are usually implemented in such a way that they) break the back button[1] and make it so you can't easily jump to a well-defined block of the results you know you haven't seen.
I also see them frequently have a bug where they'll just keep cycling through the same first page-equivalent of results.
[1] If you click a result and then hit back, it usually just reloads the page and starts you from the beginning. I think the function of a back button changed at some point. It used to be that "back" was always instant and took you to the exact state of the page before you clicked a link. Now, there's a huge lag on all but the most minimal sites (like HN) and some reload of something gets triggered.
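The cycling bug, by the way, is almost always a cursor that never advances, and it's cheap to guard against on the client. A sketch, with loadChunk and its response shape made up:

    // Stop paging when the cursor fails to advance, instead of looping
    // over the same chunk forever.
    declare function loadChunk(cursor: string | null):
      Promise<{ items: string[]; nextCursor: string | null }>;

    async function loadUntilDone(): Promise<string[]> {
      const items: string[] = [];
      let cursor: string | null = null;
      for (;;) {
        const page = await loadChunk(cursor);
        items.push(...page.items);
        if (page.nextCursor === null || page.nextCursor === cursor) break;
        cursor = page.nextCursor;
      }
      return items;
    }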
Not to mention the total lack of affordances for "I want to see this something from 2 years ago" in most implementations. Usually even the back end works on a "last ID" cursor system, so even savvy users can't jump back. Holding the end key for 10 minutes is not fun.
Yeah, I (perhaps because of masochism) click on links to Quora answers I see in my email digest, and it will take me to the answer within an infiniscroll page. If I follow any link and click back... the answer is gone. I have to go back to my email again and get it from there if I want to see it again.
It almost seems like, beyond a very low threshold, UX gets worse as you throw more engineers at it.
> Yeah, I (perhaps because of masochism) click on links to Quora answers I see in my email digest
Hey, it's better than clicking through the Medium digest and getting your 10th consecutive "you can use a switch statement instead of a bunch of if statements" article written by someone in week 2 of their bootcamp. :(
I've tried to get to a contact link in the footer and been hindered by infinite scrolling on more than one occasion. I wouldn't say it happens a lot, but when it does it's frustrating.
> I don't even understand why they would remove them? Who ever complained of a scroll bar?
IMO it's all part of this push to combine mobile and desktop UI. On mobile there is a legitimate case to be made for removing them because screen real estate is so precious. But desktop gets clobbered as an unwanted side effect.
I actually think it has more to do with the increasing size and sophistication of the touchpad in Macs. There is no need for a scroll bar when you have a touchpad you can flick (likewise for magic mice).
Yea on macOS, the scroll bar is less of a scroll bar, and more of a scroll indicator. There just isn’t any reason to click on the scroll bar if you have a trackpad.
Notably, when you connect a traditional mouse, the scroll bar automatically becomes thicker (easier to click) and always visible, as mouse users do have a reason to click/drag the scroll bar. Seems like Apple had this in mind.
> Yea on macOS, the scroll bar is less of a scroll bar, and more of a scroll indicator. There just isn’t any reason to click on the scroll bar if you have a trackpad.
I've seen so many web pages whose "fold line" just aligned so neatly with the end of my viewport that without the scrollbar being visible I would've just closed the page, being disappointed that it's just a meaningless hero page with no content.
Similarly, if stuff like selection boxes align just right, you simply can't see there are more options without a scrollbar. It just looks like you only have those options that are on the screen right now.
I love pages that have CSS to make the first section precisely the size of the viewport…and then have some sort of arrow indicating that you should scroll down to see more content.
I don't know, not everybody has a MacBook or a Magic Mouse. Plus even old, small, imprecise PC touchpads were enough to scroll a website.
To me, it looks more related to the recent fashion of "minimal" interfaces. Like material design and ultra-skinny fonts. Could also be related to the "infinite scroll" some sites have. A scroll bar doesn't make much sense for those.
I use the larger touchpad rather than a mouse when working at my desk, and I have to say that's been a non-issue—for me anyway. A slight touch/movement on the pad and I know exactly where I am (and the device really is just great...).
That said, the issues outlined in this article are glaring, and I do notice them when I switch to PC/Linux and a mouse and back. It's definitely something easy to forget, though.
The UX is designed to make most of the obviousness fade away and make the process an extension of your natural movements. Kind of like how a good band can take cues from each other without explicitly speaking or reading off of a sheet during a performance. But that doesn't help people who aren't adept professionals, or people operating under completely different circumstances.
As a programmer, I've had this kind of discussion many times, about many features. And the lesson I learned goes like this: if somebody has an issue with something, it means they have an issue with something. Telling them "I don't have the issue" does not make their issue go away. In this particular case maybe they really want to see where they are just by looking, like the commenters above just said. That's pretty natural for them, we must agree.
On a long page, it's really annoying and for some users not possible to flick scroll a long way. On desktop at least I can hit home or end. Removal of the scrollbar is arguably more user hostile on mobile than it is on desktop :/
The touchpad is related to mobile too though; trying to close the gap between computer interface and mobile touch interfaces. I don't think we'd have a touchpad if we didn't have mobile touchscreens first. And recall how Apple switched the "scroll direction" on the touchpad so you move your fingers in the same manner as you would directly on a screen to produce a given scroll direction.
The idea is to focus on the content and have the interface elements “fade away”. The problem is that you often need those interface elements to properly interact with the content.
I tried that setting before and it doesn't seem to do a thing when you have both types of devices connected.
I keep a touchpad on the left and a mouse on the right (had the extra hardware, then got used to the config), plus the one on the laptop itself, and the scrollbars stay hidden with the "Auto" setting.
> Alternatively, you can set the scrollbars to be visible at all times by setting System Preferences -> General -> Show scroll bars to Always.
Hidden scrollbars recently caused me an extra day of work. I was documenting all options in hundreds of html select boxes (dropdowns) on a legacy product being rewritten.
Many of these dropdowns were vertically scrollable, but Chrome on macOS did not display a scrollbar. I had no idea they were scrollable, and missed many options in my documentation.
I could not look at the HTML for it, as it was minified and not easily searchable.
Here's a tip you can try next time you have a problem like this. Maybe your task was somehow resistant to this, in which case I really do understand your pain and am not trying to paper it over, but just in case this could benefit you or folks like you:
In Chrome:
1) Right click on element you want to grab all of the contents of, and select Inspect Element. NOTE: If this element is the sort that isn't properly placed in the DOM and/or disappears when focus/mouse leaves, you can freeze the current DOM...but the right way to do that will depend on your context.
2) In Elements tab in Dev tools, the element should now be highlighted. You may need a parent or child element. But hopefully you can find it nearby and verify you have the right one with the Inspect tooling.
3) Right click on the element that has all your data in the DOM. Select Copy. Sometimes it will be enough to just Copy Element and paste it somewhere where you can start organizing the data. But for your task, it sounds like you might want to write a script for this so you can do it across multiple Elements. So, try Copy > Copy JS Path.
4) You now have access to the DOM element, even if it was created dynamically by JS and exists in the Shadow DOM. Try "paste" in the Console tab of Chrome DevTools.
5) The thing you grabbed might not be exactly the right one. Navigate down to the level you want to iterate across with the "children" attribute. For example (the selector here is made up; the point is the iteration pattern):
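    // Hypothetical: the copied JS Path resolved to the scrollable <select>.
    const sel = document.querySelector<HTMLSelectElement>('#legacy-form select')!;
    const labels = Array.from(sel.children)
      .map(opt => (opt.textContent ?? '').trim());
    console.log(labels.join('\n'));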
> 1) Right click on element you want to grab all of the contents of, and select Inspect Element. NOTE: If this element is the sort that isn't properly placed in the DOM and/or disappears when focus/mouse leaves, you can freeze the current DOM...but the right way to do that will depend on your context.
Just to extend a touch—I use CMD+Shift+C or CTRL+Shift+C countless times in a day while debugging, or sometimes for this very reason. The shortcut is the same across browsers—at least Chrome/Edge/FF/Safari.
Instead of right clicking, it'll give you a "target" for selecting an element and highlight it before clicking.
In the console the selected element will be immediately accessible by `$0`.
Strongly related: viewport units (vw, vh, vmin, vmax) are fundamentally and irreconcilably broken if your document has scrollbars, because they include the size of document scrollbars, so that 100vw is equal to 100% + 17px if you have a 17px wide scrollbar there (the most likely value on Windows), and now all of a sudden you have a horizontal scrollbar too. Or your nicely calculated layout that thought that 33vw + 33vw + 33vw < 100% is now wrong and wraps the third block on screens less than 1700px wide. There used to be a weird way of opting out of scrollbar inclusion (it involved `overflow: scroll` and something else on the root element, I think, but I don’t remember the exact incantation), but Firefox was the only browser that implemented it, and no one else wanted to (they said “no one wants this”—untrue, I say—“and it’s weird and inconsistent”—which was true).
We need a new set of units that excludes document scrollbars. Or a constant like env(scrollbar-width) that represents the scrollbar width so that you can subtract it yourself, which would be useful in a few other places as well (instead I’ve done the likes of `var(--scrollbar-width, 20px)` and calculate and define --scrollbar-width on the root element in JavaScript).
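The JavaScript workaround I mentioned is only a few lines; a minimal sketch:

    // Measure the document scrollbar and expose it to CSS, so layout code
    // can write calc(100vw - var(--scrollbar-width)) instead of bare 100vw.
    function setScrollbarWidthVar(): void {
      const width = window.innerWidth - document.documentElement.clientWidth;
      document.documentElement.style.setProperty('--scrollbar-width', `${width}px`);
    }
    setScrollbarWidthVar();
    window.addEventListener('resize', setScrollbarWidthVar);

This works because window.innerWidth includes the document scrollbar while clientWidth excludes it.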
I dealt with this just a few days ago. A client's design was mocked up in Webflow, and it decided to use 'static' for the header at 100vw. Which meant that on browsers that hide the scroll bar, the page jiggles left and right every time you scroll.
That was fun to fix because I had to rebuild the entire header since a ton of other webflow-generated CSS depended on that thing being static.
I encountered this last year when I took over a project whose dev staff used Chrome on macOS exclusively.
The thing was: we catered to gamers, which means Chrome or Firefox on Windows was the norm. We got a lot of bug reports like “hideous scrollbars in shop item description” or “hideous scrollbars in menu” that left the devs puzzled.
Yes! Far too many developers now only check that the page renders nicely in Chrome on their expensive MacBook, while most of their users may be looking at a less-than-stellar screen that doesn't show the contrast, running old hardware and a different OS+browser combo.
A couple years ago I actually had to buy a new monitor because everyone decided their login UI was ugly and needed to be ultra low contrast. I had to ctrl+a the page to find the text fields. My eyesight isn't particularly bad, just a minor astigmatism.
Kind of off-topic, but I recently noticed that capturing gameplay video of an HDR-enabled game with any kind of software (OBS, Twitch Studio, GeForce Experience, FRAPS, Discord) produces terrible results. This happens whether I capture the entire monitor or just the app window. Both OpenGL and DirectX games suffer from this.
How can one tell whether what you capture/develop is what you will see, without testing it on a second monitor? What subject must I research to learn more about this?
I'm not just talking about color spaces, but also how these multiple pieces of software communicate image data in a lossless way, how window modes (fullscreen, borderless, windowed) affect these flows, and where HDR stands in all this.
That's color management. A surprisingly fucky area from an application developer's perspective, because pretty much only DirectX 11/12 and Vulkan even have that concept -- and both DXGI and Vulkan are pretty limited [1]. Obviously, HDR and non-sRGB color spaces only work windowed in Windows, because DWM does scRGB, everywhere else your app needs to be in exclusive fullscreen. Note that Wayland on Linux doesn't support exclusive fullscreen, and since Wayland only does 8-bit sRGB, nothing on Wayland can use anything other than 8-bit sRGB. This might be fixed eventually (guess it'll take about as long as it took for GIMP).
As a general rule of thumb, if it has anything to do with colors and comes from either computer or photography people, there's a solid 99 % chance it's broken or doesn't even know what color is. If you want to know how it's done right, you gotta look at how the "moving photos" people do it.
(Note: "everything" for me means "everything that's not Apple", because I cba to care about those snowflakes)
[1] Yes, yes, OpenGL has sRGB types for framebuffers, which doesn't work on half the devices in the wild (on the other half it applies the sRGB gamma function to linear sRGB data) and isn't meaningful anyway, because we _don't_ want to use sRGB. DXGI only does Rec.709 and Rec.2020, no DCI P3, and it also treats everything that's not HDR as sRGB.
I've had to use a MacBook as my work machine for a little over three years now, and I generally have a strong dislike of the platform. It's just similar enough to Linux to lull me into believing that I kinda know what I'm doing, but just different enough to make me feel incompetent whenever I try to actually do anything nontrivial.
System Preferences -> General -> Show scroll bars to Always.
I stumbled across this little gem about a year ago, and it has been one of the bigger quality-of-life improvements I've found on this machine.
Or Control, depending on which you're used to. Every time I try to use Windows I'm surprised that there's no easy-to-access setting for remapping Caps Lock like there is on every Linux DE and macOS.
Increasingly many sites are opting for overlay scrollbars that are exceedingly difficult to actually use. It's almost as if they don't want you to use the scrollbar yet provide one for compliance. They are often very thin (just a few pixels) and coloured dark gray upon black background and won't expand to a bigger size until you successfully squint and hunt it down and precisely position the pointer over it.
That's because the "designers" don't give a shit about the users' actual needs. They just want the thing to be pretty. They'll also fuck with it every once in a while, making random pointless changes, just to keep the look "fresh". Way too many artistes can easily sneak onto your UX staff.
Also watch out for people from marketing and the like who put pressure on the UX people to do that sort of thing. People who don't actually know how to make anything work, and don't have to actually use it to get work done, tend to be obsessed with how it looks.
A few casual users who don't actually need to get anything done will usually support this kind of time-wasting idiocy, and the offenders will point to their feedback for validation. Unfortunately people who write reviews are often in that category, since they rarely make practical use of what they're reviewing.
...and thanks to some apps doing away with window borders completely, as well as everything becoming some drab grey colour, many times I've mistakenly clicked in and activated a different window while trying to manipulate a nearly-invisible "automatically expanding" scrollbar.
Contrast is an issue I've found annoying, as though UI designers don't really want you to spot things easily or see the information they convey. On the Windows side, I think it was after WinXP that they moved away from color to greys, and then later to solid blocks.
While folks are rightfully bashing macOS’s hidden scroll bars, I would like to advertise an iOS feature that is actually pro-scroll bar. As of iOS 13, you can grab the scroll bar to scroll at hyper speed rather than repeatedly flicking the screen. Not sure if there’s a proper name for this feature but it’s explained here https://www.idownloadblog.com/2019/08/05/scroll-faster-iphon...
I'm being serious. The first iPhones were years behind comparable desktop hardware of the time. It's only recently that iPhones have caught up and surpassed them.
I've taken to using a fling motion on my Android devices since grabbing the position indicator in a scrollbar is comparatively inconvenient... but performance is fine.
> If you are a front-end developer that uses macOS, I kindly ask you to dedicate a little more attention to how the websites you create behave on platforms other than your own.
I'd take that one step farther - no matter what platform you code on, test your UX on all the others.
The worst are sites which get broken every other year (each time they are redesigned/rewritten) while providing nothing more, feature-wise, than what they provided 15 years ago. And yet they force you to update your browser to one that supports the latest Chrome "feature".
> Alternatively, you can set the scrollbars to be visible at all times by setting System Preferences -> General -> Show scroll bars to Always.
One of the best things I think I've done for quality purposes is require all developers, QA team members, and project managers have scrollbars turned on at all times. It's sent the number of sites with hidden overflow in production down to near-zero for us. Someone, somewhere down the line will end up seeing that your page is 300px wider than the viewport before it hits production. And because that usually indicates other problems (often related to accessibility), it usually has a domino effect of discovering other issues that need to be fixed.
It has definitely made a marked improvement to the quality of our work, and I recommend that anyone whose team uses Macs require scrollbars to be set to "Always".
If you read between the lines, you should understand that the root cause of the issue is the macOS developer monoculture.
All you need is one Windows or Linux-based developer on your team to catch those kinds of issues. But so many dev teams are macOS-only these days, whereas, apart from the US and 2-3 other rich countries, 80%+ of actual desktop users are Windows-based.
(There are many other problems with the macOS monoculture, for example the spread of ultrathin fonts that look nice and crisp on a Retina MacBook but render with very low contrast on Windows.)
US-based devs: outside of your country (that is, >95% of humanity), most people do not have a Mac, an iPhone, nor use MM/DD/YYYY dates and 12-hour clock.
People in this bubble overestimate the popularity of anything Apple.
I once had a discussion with a coworker at a major company known for its app, where I was complaining about an Android bug and he genuinely asked "does anybody even use Android?" And I responded, "only the majority of users in every country around the world including the US. It's even 50/50 on our app!" Afterward, I found the internal numbers for employee use of the app were 96/4 in favor of iOS, and this included employees outside of tech and outside the US!
He, like so many others, was totally oblivious living in the Bay Area tech bubble.
If you design your site/app in the Applethink-bubble, then maybe you're missing out on quite a lot of market share if the majority of people with devices find your Applethink site/app difficult to use.
Is it a surprise that iOS users are more engaged with your product when it's tailor-made for them while Android users are getting a sub-par experience?
iOS has much higher penetration when it comes to app installs. I've seen this both at a fintech/challenger bank and at multiple e-commerce companies. At the e-commerce company I'm currently at, 80% of our traffic is iOS. iOS users are also higher value.
This is a single aspect of something I call the bay-area-bubble.
Other aspects include assuming everyone has free wifi everywhere or unlimited latest-G data on their phone, the latest Macs and phones, disposable income for 57 $9.99 monthly subscriptions, and lives on food delivery.
Soon to include: everyone has a room fully set up for VR, with the latest headset du jour.
I think the macOS monoculture is going to go away in the next few years because on one side macOS is becoming more and more developer hostile with each release and on the other side WSL (and WSL2) is improving the Windows developer experience a lot.
I use Windows as my daily driver OS now for both native/web/mobile/devops work and I expect to see more and more developers making that same switch in the coming years.
For folks considering the switch, I highly recommend considering linux as well. I've used both macOS and ubuntu as a development environment, and bug wise they are on par, but linux really excels at having a true "native" development environment. Docker without a weird VM, true bash, etc.
As I remember it, this is the whole reason OS X became the de facto standard for developers in the first place, consumer-grade windowing and device drivers coupled with a familiar development environment.
macOS ships a 12+ year old bash. Also, even with Homebrew patching up Apple's neglect, coreutils behaves differently under a Darwin kernel than under Linux. A repeat source of aggravation for me is the behavior of 'ps'.
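The usual patch-up, for what it's worth, is Homebrew's GNU userland, which installs with a `g` prefix so it doesn't shadow the system tools:

```sh
brew install coreutils gnu-sed bash   # modern bash 5.x plus GNU ls/sed/etc.
gls --color=auto    # GNU ls
gsed --version      # GNU sed; /usr/bin/sed stays BSD
```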
that's not due to the kernel, that's because it's BSD ps instead of GNU ps. sed is the one that gives me the most irritation; there's basically no way to write useful cross-OS sed scripts.
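The classic incompatibility, for anyone who hasn't been bitten yet (a small illustration of my own):

```sh
sed -i 's/foo/bar/' file.txt      # GNU sed: -i takes an *optional* backup suffix
sed -i '' 's/foo/bar/' file.txt   # BSD/macOS sed: -i *requires* a suffix
                                  # argument; '' means "no backup"
# Swap them and GNU sed treats '' as an empty script, while BSD sed
# consumes your script as the backup suffix.
```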
To suggest that "bug wise" it is the same between Mac and Linux is a bit of a stretch. I work in a team where the developers are mostly split between Mac and Ubuntu. Guess which ones often have problems joining Zoom meetings or attaching their machines to a projector for a presentation? There are a lot of things that you generally don't have to worry about much if you're using a Mac. I'm not gonna say that one of these is better than the other, but to suggest they have equivalency wrt common scenarios is a bit strange.
Also, macOS is a certified UNIX environment; it is native.
I think the corollary is "consider Ubuntu, but still don't skimp on hardware". Buy a proper laptop that ships with Ubuntu, buy a proper connector cable if you need one, etc.
I think what he is saying is: don't buy a cheap-ass Dell that runs Windows 10S and then expect to be a development powerhouse where all the hardware works properly. He is suggesting buying a developer-oriented laptop that is designed and supported to run Linux. You'll have fewer weird hardware quirks that way.
You can buy a $1000 laptop and still run into issues where your computer won't connect to a projector or a major application refuses to support your platform.
Yep, that's exactly what I meant - and peripherals too. I'm sure a big reason Apple products don't have problems video calling or connecting to projectors is high-end hardware, and I don't think the difference between MacOS and Ubuntu on comparable hardware is anywhere near as striking as their reputations would have you believe.
I really, highly doubt that. Autodetecting and using proper settings for a newly-discovered display device (projector) is fundamentally a software consideration, not hardware: the hardware into which you connect the display is capable of talking to it (maybe not at ideal resolution depending on specs, but whatever), but the OS needs to discover, detect, and configure the new connection properly.
The same is true for Zoom: things like accelerated video streaming are universally supported on laptop GPUs, but software support is spotty for some apps/OSes. Things like screen sharing are 100% software-side.
While I'd love to use a Linux workstation (and often do), it's simply not there yet in those areas. That has nothing to do with "high-end hardware" and everything to do with less robust software support than many alternatives--including MacOS.
I guess you could make the case that because MacOS has to support fewer kinds of hardware, they can spend more time on making software support robust, but I'm neither convinced of that argument nor convinced that's what you meant by your post.
You'd think that, but once I replaced my €5 AliExpress connector cable by a proper one, suddenly I haven't had any issues with projectors any more.
Likewise, if you use an underpowered laptop, or one for which Ubuntu doesn't have proper drivers, you're going to have a hard time using Zoom.
I'm not saying that you'll never have issues (which I'm sure holds for MacOS as well), or even that you might not have slightly more issues than on MacOS (see your last paragraph), but things are definitely not as bad as comments online would have you believe, because many of those can be ascribed to issues like I mentioned above.
> Guess which ones often have problems joining zoom meetings
As a linux user, I'd like to say that I've seen that happen just as often with OSX users on my team. I think that's just Zoom's shitty app. But I do agree about peripheral hardware issues plaguing Linux. Bluetooth headphones are still such a hassle on Ubuntu 20.04.
Bluetooth has never not been a hassle in my experience - and that goes for Android, Windows, Linux, OS X, iOS. There’s always something with the device or driver.
This goes especially for audio.
I’m going to guess that Apple peripherals + Apple OS on Apple hardware is actually hassle-free but then what’s the point of a standard.
This is definitely the largest drawback of Linux in general - support for peripherals and common communication/presentation software is really lacking. If I had a dollar for every time I had to muck around to get my headphones to work on Ubuntu, I wouldn't be posting here.
> support for peripherals and common communication/presentation software is really lacking
Well, if peripheral vendors and makers of "common communication/presentation software" don't test on Linux, whose drawback is it now?
That being said, I have used the following without any trouble on Linux: Slack, Discord, Teamviewer, Zoom, BBB and BlueJeans. Vendors don't put enough investment into testing for Linux due to comparatively low usage numbers - but that doesn't mean that it is a "drawback" of the OS.
As for hardware, as Vinnl explains above the way to go is buy certified hardware. Ubuntu and Red Hat folks certify a bunch of Dell and Lenovo laptops to work with Linux. I use Ubuntu on a Dell XPS 13 and I don't need to do ANY extra setup. Things just work (TM) - including my Sennheiser and Jabra headphones.
Seems like a trade-off based on my experiences. On Macs you'll get fewer bugs in proprietary, user-oriented apps (like Zoom). On Linux you'll get fewer bugs in free software apps, developer tooling, and server-side apps. I think it mostly comes down to where you spend most of your time and what you are working on.
I feel like GIMP and Inkscape have finally caught up to the Adobe offerings. It was my reason for not switching for so long. But over the past year I did and I just-- wow. GIMP in particular is just really good now.
I think WSL2 is wonderful; I use it on my primary development machine. Docker doesn't have as many pain points as it used to, I generally find that I can get around the OS without too much issue, and the new Windows Terminal app helps a ton (it's no iTerm2, but it's better than anything else I can find).
I still miss macOS, so much, because
- Mac software I was accustomed to using (and it was a lot) is no longer available to me (don't discount this problem when suggesting switching)
- I had to customize Windows 10 a lot (many hours of work) to emulate native features of macOS I personally can't live without (including remapping keys, which turns out not to be trivial on Windows)
- I miss the general UI polish of macOS. Windows is...ugly in a lot of places.
- I miss Finder, and I've tried so many file explorer alternatives and nothing comes close
- Windows 10 can't seem to ever remember my screen layout for my applications. I had to resort to an application to do this for me. Drives me nuts
- macOS has way better HiDPI support (I have 3 4K monitors)
It ain't trivial, is what I'm saying.
Don't get me started on Linux. I loved Pop OS conceptually, but the support for basic things, like natural scrolling, was completely lacking, and I had to resort to all kinds of hacks to get a desktop semi-functional. Not to mention, a lot of Linux 'add on' packages are abandonware and would break constantly when updating to new versions, and for basic things, like dealing with scroll direction, I had to edit files in a terminal, which I just found annoying. It just wasn't a polished out-of-the-box experience. Don't get me started on it not recognizing adapters properly, or the issues with GPUs. macOS is a lot of things, but they take 'just works' more seriously than any other operating system I've ever used.
Honestly, I find it incredibly odd that companies that have more manpower than Apple (you'd be shocked how little manpower they devote to some things) can't manage to pull off the polish of macOS UI
If you didn't buy into Finder and the other built-in apps on macOS, it can be easier to see less value, that is true, but I feel like that dismisses perfectly reasonable expectations a macOS user has about using the platform. Frankly, Apple hasn't given macOS as much love over the last 5-6 years as they give their other platforms (in some ways justifiable, otherwise frustrating). Yet even with their basically trickle of updates to the desktop platform in that time, no other desktop platform can match them on baseline functionality and user experience. Even though I'm certain Microsoft has more people working on Windows 10 than Apple has working on macOS, and Ubuntu with its open source contributors likely dwarfs Apple's resources here too, nobody comes close to matching the user experience for me.
As for software, I miss Finder because I felt the interface was intuitive and I really liked its features, like tags, smart folders, and the way you could customize it with extensions and settings. I also like the built-in software, Preview in particular.
I have yet to have anyone show me an OS that comes close to matching macOS out of the box. Even conceding customizations, the alternatives are just so lacking in comparison, at least to someone who 'clicked' with macOS.
Granted, not everyone likes macOS, and that's fine, but to come at it from a place where it's so easy to migrate to another platform ignores so many things about macOS that make it great to a macOS user.
My job requires Windows 10; that's how I ended up on the platform. I learned to get around and replicate as many of the features as I could, but so many are just fundamentally missing, and even if I wanted to pay for the functionality (and for so much of it I would), the software doesn't even exist.
As always, it's all about the tooling. For better or worse, more is moving to web/electron, but for teams using Sketch, good luck trying to open Sketch file on Windows.
After getting a Windows gaming PC, I've been spending more and more of my time, developing and otherwise, on the PC. I prefer it because it's where I have my large (low DPI :( ) monitor, but Windows is still full of so many papercuts in general usage (I miss being able to drag a file into an Open File dialog so much) that I still greatly prefer macOS.
Windows has some niceties to try to ease the transition - the Windows Terminal app is phenomenal, and PowerShell maps rm to its own equivalent command, though rm -rf doesn't work as expected. I honestly don't find myself using WSL all that much - I only use it when I can't figure out how to make ping go forever in PowerShell.
That's WSL. It's still there, and it's super easy to use (new Terminal tab > Ubuntu), but I just find I don't have much of a need for it. I do web development with Node, which runs fine under windows. https://imgur.com/a/n19qb8M
HiDPI hasn't been an issue for me lately on Ubuntu; I rarely use a touchpad, so I can't comment on that. GPU support is absolute garbage on macOS in comparison, from both a compatibility and a performance perspective.
Anyone who has used any Linux knows HiDPI or mixed DPI monitors are a massive issue on Linux. Even when you get your xrandr configured perfectly fine, disconnecting is a hassle. And that's completely ignoring apps, QT, GTK, etc all handle DPI differently. So you need a different config with DPI setting on each.
One of the best I've used is PopOS with the HiDPI daemon enabled. But even that has issues.
Even Windows can't deal properly with having a 4K laptop screen and a 2K monitor attached. You have to deal with weird text scaling and other odd scaling issues.
Agreed. I grew up in the 90s using Windows. Bought my first MBP in 2009 and within a few years was using Macs almost entirely. Started using a Windows machine as my primary earlier this year for a new job and now I rarely touch my MBP. Took a little time to get used to Windows again, but I'm quite content. Windows 10 is excellent, and so is the ThinkPad T480s I'm using.
The most obnoxious example of US-centrism I've seen shining through was a registration page - I forget where - where you could select a country (optional) and a US state (required, with no "none" option).
All you need is a round of testing for cross-browser compatibility. It's a pretty much solved problem by now, doesn't take that much time and can be automated. There is nothing stopping every developer from doing it - besides the usual pressure to deliver features in place of proper engineering.
> All you need is a round of testing for cross-browser compatibility. It's a pretty much solved problem by now, doesn't take that much time and can be automated.
Please show me a solved automated test for the issue in the post.
There are several online cross-platform browser testing services. You don't even need a non-Mac yourself. If the company is large enough to always test locally, most modern testers will use VMs or some sort of containerization for repeatable determinism. This is absolutely a solved problem.
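As a concrete sketch of the kind of automated check this enables -- my own example with Puppeteer, though Selenium and friends can do the equivalent -- flagging pages whose content overflows the viewport:

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // A common low-end resolution; overflow here is invisible on a Mac
  // with hidden scrollbars but obvious on Windows/Linux.
  await page.setViewport({ width: 1366, height: 768 });
  await page.goto('https://example.com', { waitUntil: 'networkidle2' });
  const overflowsX = await page.evaluate(() =>
    document.documentElement.scrollWidth >
    document.documentElement.clientWidth);
  console.log(overflowsX ? 'FAIL: horizontal overflow' : 'OK');
  await browser.close();
})();
```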
Online isn't real-life. You can test in a VM, but it will never be the same as testing on actual hardware.
Even my craptastic company gives me a bunch of devices to test on. And I very deliberately specify very low-end versions of the machines so that I can test against worst-case-scenarios. It's a bonus that sub-optimal hardware is cheap as chips.
My favorite testing device is a $20 burner phone I picked up in the supermarket checkout aisle. If the web site works on that piece of poo, it'll work on anything.
Right now I'm waiting for UPS to deliver an old iPad from Alaska that the IT department bought for me off of fleaBay, just so that I can test on sub-optimal actual hardware.
> Online isn't real-life. You can test in a VM, but it will never be the same as testing on actual hardware.
Testing platforms like SauceLabs, Browserstack etc run on real hardware, even for mobile.
Testing with your own devices is of course better, but a lot more work. Which one you choose doesn't really matter, the point is just that you don't _need_ to have all those resources to do basic compatibility testing, so no excuses.
> Online isn't real-life. You can test in a VM, but it will never be the same as testing on actual hardware.
The context is that testing is a solved problem. You are taking specific factual examples I wrote and attempting to debunk them with a general statement that is in agreement with what I wrote. Scrollbars can absolutely be tested inside a VM. Pretending that native hardware is needed to cover 99% of the use-cases for scrollbars is kind of silly.
> Even my craptastic company gives me a bunch of devices to test on. And I very deliberately specify very low-end versions of the machines so that I can test against worst-case-scenarios. It's a bonus that sub-optimal hardware is cheap as chips.
> Ths context is that testing is a solved problem.
The only problem that is solved is running multiple browsers in VMs in a cloud. Everything else is anyone's guess.
To come up with an automated test for the problem in the OP you have to be a) aware of the problem and b) have a way to test that scrollbars do/don't appear. Good luck with that.
> The only problem that is solved is running multiple browsers in VMs in a cloud. Everything else is anyone's guess.
That's false hyperbole. Just because you do not know something doesn't mean that nobody else does.
> To come up with an automated test for the problem in the OP you have to be a) aware of the problem and b) have a way to test that scrollbars do/don't appear. Good luck with that.
No luck is needed, just tests. You may not realize, but test frameworks already exist for testing the screen for the presence of UI elements. That is how dialog boxes are clicked in tests. This is literally solved.
Honestly, it seems like you are playing a game. Did you perform a search of tools that can test ui? There are lots of them. I find it hard to believe that you think there aren't any. That's like saying that compilers don't compile C code anymore, because you only compile C++.
I see our testers doing this all the time - but I don't know what libraries they use. I know the testers at our shop do it with a house-built framework as well as with commercial ones. On our company's security team, various people have scripted UI exploits using selenium or appium. I just asked one of the people familiar with those two tools, and she said it would take her a few minutes.
These are commodity tools. There is nothing special with any of them.
> Honestly, it seems like you are playing a game. Did you perform a search of tools that can test ui? There are lots of them.
It would've been so easy to just answer the question I asked. Instead, it's now a thread of non-answers and thinly veiled ad-hominem attacks.
The rest of your long answer is once again working hard on avoiding the answer.
> I find it hard to believe that you think there aren't any.
I didn't say there weren't any. I asked, "which ones let you test the problem in OP".
> I see our testers doing this all the time - but I don't know what libraries they use
Ah, my assumption that you don't know what you're talking about is proven correct.
> On our company's security team, various people have scripted UI exploits using selenium or appium
Question: scrollbars.
"Answers": tools click on buttons, scripting security exploits, I don't know what libraries testers are using.
Ignorance is bliss, isn't it?
> and she said it would take her a few minutes.
Given your answers in this thread, I seriously doubt your ability to ask a proper question to "the people familiar with the tools". Especially given the fact that you don't know what libraries testers use and that you, apparently, don't do any testing yourself.
See, frontend testing is very far from being "a solved problem". Especially for quirks as described in OP. But you wouldn't know because you, well, don't know.
> These are commodity tools. There is nothing special with any of them.
Indeed they are. Indeed there is nothing special. And this still doesn't answer the question.
> Except that I did answer as to specific toolkits - with two answers for two separate kits
You didn't, really. On the third attempt you said "I see our testers doing this all the time - but I don't know what libraries they use" and "various people have scripted UI exploits using selenium or appium. I just asked one of the people familiar with those two tools"
This shows that you started this entire argument with very bad faith. You berated me for not knowing something while you yourself:
- don't do frontend testing using frontend testing tools
- you don't know what tools your testers use
- you assume some capability of some tools only because "you talked to people familiar with them" which further shows that you yourself are not familiar with them.
On the other hand, unlike you, I know what I'm talking about. Granted, I haven't used these tools extensively, but I did use them, and I'm well aware of their limitations.
> You didn't, really. On the third attempt you said "I see our testers doing this all the time - but I don't know what libraries they use" and "various people have scripted UI exploits using selenium or appium. I just asked one of the people familiar with those two tools"
Because I see this sort of test being done almost daily... I work in security and not specifically in testing. I mentioned what frameworks are used in a security context. You are beating a dead horse, focusing on what doesn't matter. I suggest you get off hacker news, go educate yourself, and stop pontificating.
When I said that I "just asked" I meant that I just now asked, not that just asking was all I have ever done. You're being foolish, projecting in every comment that you know more than everyone else.
PS. Please quote properly. I wrote that the security team uses selenium and appium not whatever it is that you are pretending I wrote.
A person who doesn't do testing himself, doesn't know which libraries testers use, and has to ask other people how to solve the problem in OP (because he himself doesn't know) is telling me to stop pontificating and go educate myself.
Good luck with your holier-than-thou attitude, and god help your security.
What is wrong with you? I never said any of that. You keep cherry picking half-statements and then taking them out of context.
I did not need to ask other people how to solve a problem. That is a silly interpretation of what I wrote. You are willfully pretending ignorance of how to read and have a conversation.
Pontificate: "express one's opinions in a way considered annoyingly pompous and dogmatic." Yes, you really do need to stop pontificating.
If you knew the answer, you would've already answered this. Instead, you keep giving non-answers until you admit that you don't test yourself, and that other people use some libraries, you don't know which.
I did - with two specific answers. Then I went and asked a security tester and told you the results of that question.
You can keep using all the words and comments that you want, but the simple truth is that this is a solved problem, I gave you two solutions, and yet you keep using ad hominem attacks.
As you yourself wrote, "Ignorance is bliss". Those are your words, not mine.
Another thing I keep bumping into is designers assuming a nice 16:10 display, and wondering why when the final product is opened up on a 1366x768 shitbox barely any actual content is visible.
Funny. I wonder when this (perception?) shift occurred. 15 years ago, it was distinctly the perception that Mac users were the tech-illiterate ones. Have all the tech-illiterate users abandoned Apple, or have they just been drowned out by the bay area macOS dev culture?
In my experience in a graphics lab, the perception was that Macs (though they often wrote it in all caps) "couldn't do anything." Specifically, having a one-button mouse made them unusable. I never had trouble being productive and greatly enjoyed the old Macs, but I think it was just a different way of looking at computers.
Now that macOS is every bit as complicated as windows, it’s often Microsoft’s OS that can’t do something. WSL helps some of that, but there are usability shortcuts that Windows doesn’t have. Windows is also insultingly condescending with its verbiage (“Getting Windows Ready” or “Working on Updates”). Maybe aiming low in their explanations is their one-button-mouse moment.
I don’t think the users were ever all tech-illiterate on either platform. Only perceptions have changed.
Scroll bars have always been a compromise. Today, I have a 4K monitor connected to a 2K laptop with a USB-C port for yet another big monitor or more. I can span a window across 3 big monitors, and still the horizontal scroll bar appears because of the platform agnostic attitude of "good enough."
Up until Android and iOS (mostly on phones and tablets), we had an Intel and Windows monoculture. Windows is still almost 90% of actual laptop/desktop users. [1]
I am an ex-Mac developer. I do not have a modern Mac, I do not have an iPhone, and I set the date/time format to pretty much whatever I want on Linux, Windows, and 15 year old Mac OS X. And, I just have to deal with broken date/time interfaces on the web. For far too long, I would go along with the Intel / Windows developer monoculture of "good enough" software, and comments like Mac? Linux? What's that?
Our company offered me a choice between a Mac and a PC, but also said that the former was heavily recommended, so I picked that option. I didn't want to be the guy who constantly cries for help because stuff is just not designed for, and never tested on, Windows.
Don't forget the constant peer pressure and lame jokes from your coworkers! Oh and if you're a Mac newbie, be prepared to figure it out for yourself. Not that people are unhelpful but that they genuinely don't know any other workflows beyond their cloistered little OS with a barely-usable kernel.
See also the recent thread about Zappos, where there was lots of talk about customer service.
I think a solution would be that designers and engineers should have as part of their job to sit in customer service to support their products. Preferably for weeks on end to learn from their customers. Otherwise they will continue to produce bad solutions.
> US-based devs: outside of your country (that is, >95% of humanity), most people do not have a Mac, an iPhone, nor use MM/DD/YYYY dates and 12-hour clock.
This is an unsubstantiated claim, why bring up and bundle the entire United States to support your claim? I was with you until this.
It should be trivial for you to confirm that Apple is not selling enough iPhones and Macs to account for more than a small portion of smartphones and desktop computers. Are you claiming that this is impossible to know?
In my current company, when I complained that my remote PC machine was a bit buggy with Docker, my manager ordered me a top-of-the-line MacBook, delivered in 2 hours.
I don't even live in the US. But this culture certainly exists here too.
This source is not very trustworthy. In a few places not noted here, such as Holland, a decent-sized European country that many people have heard of, no one usually uses the 24-hour format, yet they are not shaded in this chart. So I wonder how many other countries are missing too.
In many European countries, the 12-hour format is used in speech, but not in writing. You'd say "Let's meet at seven", but you'd write "Let's meet at 19:00". The context being software, I'd say it's a pretty solid claim to say most countries use a 24-hour clock.
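The practical upshot for code is to not bake either convention in. A small sketch with the standard JS `Intl` API (exact output can vary slightly by runtime):

```js
// Let the locale decide instead of hardcoding MM/DD/YYYY or a 12-hour clock.
const d = new Date(2020, 6, 18, 19, 0); // 18 July 2020, 19:00 local time
const opts = { dateStyle: 'medium', timeStyle: 'short' };
console.log(new Intl.DateTimeFormat('en-US', opts).format(d));
// -> "Jul 18, 2020, 7:00 PM"
console.log(new Intl.DateTimeFormat('de-DE', opts).format(d));
// -> "18.07.2020, 19:00"
```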
From a UX standpoint there's a big advantage to monoculture: the user learns a thing once and applies the idiom everywhere. For most users, diversity of options means having to learn every option. They'd rather have one good thing -- or even one good-enough thing.
The world does have a variety of options, so users need to live with the fact that they're going to have to discover things, and designers are going to have to live with the hassle of designing for users with a variety of experiences. That makes the best possible UX worse than it would otherwise be. It's a necessary evil, but it's not surprising that a lot of developers try to wish it away, especially when they can get away with that, at least initially.
My understanding is that the monoculture being discussed is macOS rather than Safari. I would be surprised if most developers tested UI primarily on Safari considering, like you said, most people are Chrome users.
Anecdotally, most engineers I know use Chrome on a MacBook. The only people I know who use Safari are my parents.
That monoculture means you can't change anything because it breaks the learned idioms.
And there is no best possible UX.
UX is mostly subjectively measured.
I've been guilty of this. Normally I like to blame CSS for all of my personal failures, but I think the definition of "overflow" is intuitive enough. I was hesitant to turn on visible scrollbars system-wide because I thought it might be ugly. I did and it's not, so I second the author's suggestion to do that.
I think part of the problem is that this "feature" masks the need to properly understand scroll behavior (at least if you only use platforms that auto-hide scrollbars).
Recently, we were doing a bug safari for a new product feature that was about to launch. I noticed that an element had a scrollbar when it shouldn't, and filed a bug. One of the developers picked it up, and first said "not a bug. I don't see it on my computer." I showed him the bug on my machine, and he looked at it for a few minutes before saying "I don't know how to fix it." I pointed out that the overflow-x was set to "scroll" in the CSS for that component, and he could just set it to "auto" to fix the behavior (since the scrollbar should never show up in normal usage of that feature). It turns out that he didn't even know "auto" existed.
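For reference, the fix being described, with a hypothetical class name standing in for the actual component:

```css
/* `overflow-x: scroll` always reserves/paints a horizontal scrollbar,
   even when nothing overflows; `auto` shows one only when needed. */
.component {
  overflow-x: auto; /* was: overflow-x: scroll */
}
```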
On NTFS you can hide an arbitrarily large file in an "alternate data stream" of any file. Explorer doesn't show you alternate data streams, and certainly doesn't show you that they are possible.
You can use VLC to play a bluray from what appears to be a 1kb text file in Explorer.
If you have that on a USB stick, the properties of the USB stick will reflect the capacity loss of the bluray image. But Explorer won't show you why or which file has it...
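A quick sketch from a cmd prompt (hypothetical file names; `dir /R` has listed streams since Vista):

```bat
rem Hide a large file in an alternate data stream of a tiny text file:
type bluray.m2ts > notes.txt:video.m2ts
rem Plain "dir" and Explorer report only the tiny notes.txt;
rem "dir /R" reveals notes.txt:video.m2ts:$DATA
dir /R notes.txt
```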
Hidden scrollbars are just terrible UX all round for so many reasons:
- Discoverability: You don't even know if a container is scrollable at all until you try.
- Orientation: You don't know where you are in a document until you scroll to make the bar appear, and then you have to notice it before it disappears
- Usability: It's much harder to target the scroll thumb to drag it. To find it you need to again scroll first, then notice it, then target it with the mouse, all under time pressure before it disappears.
If you're on MacOS, do yourself (and your users) a favor and set them to always show.
But that's silly when the preferred method of scrolling is a mouse wheel or a swipe, neither of which actually requires the scrollbar as a touch/click target.
The actual useful UI left is “does this thing scroll”, “where on the page am I”, and “the grab and flick” gesture on touch to scroll faster.
In hindsight I think the mistake was having scrollbars be part of the document flow instead of hovering invisibly over the content until they’re needed.
The scrollbar does still provide valuable context: what scrolls, how big the scroll area is, and how far you have scrolled. The latter two you can solve by scrolling a little to see the popup scroll indicator, but discoverability is huge, and I'm often confused by this when using macOS.
A scrollbar gives visual information and lets me scroll to wherever I want at the speed I want; a scroll wheel only goes at a fixed speed.
I'm talking about desktop computers here, by the way, although grabbing a scrollbar and going wherever I want at any speed would be awesome on a phone too.
I mean programs such as Konsole, Firefox, and others on the Linux desktop. Scrollbars have gotten thinner and super low contrast there too.
I use an old IBM laptop with only a pointing stick (the red nipple in the middle of the keyboard).
I don't mind having to move my mouse to the scrollbar and drag it to scroll documents or webpages (when I can't use the keyboard for that already).
This was fine until a few years ago, when scrollbars started to hide or become super thin. I should not have to concentrate to be able to target scrollbars with my cursor.
If you're working on UX/UI design, please remember that not everyone has or uses a scrollwheel on their mice.
While on the subject of people creating user-hostile experiences, try disabling "Allow pages to choose their own fonts, instead of your selections above" in Firefox. Now instead of a different set of fonts for every single website because of some misguided form-over-content design wankery I can read things with one less distraction.
I actually tried that, and it has its own downsides. Sometimes pages get icons by drawing from characters in higher codepoints, and so the icons will get randomly replaced with Greek letters and random symbols.
A lot of developers I've worked with only use their MacBook & iPhone to test. Many sites are not fully responsive either. For example, github.com only supports 1280px width and above; below that, padding & buttons render incorrectly.
And I think there's a trend in open source of developers rejecting Windows patches, or never really trying to fix the bugs.
A lot of tools are now designed for macOS first, with Linux as an extra because migrating is easy. For example, Facebook's Watchman: it's about 5 years old but still unstable on Windows, I think.
And with many web tools, even for Android development, I found there are no errors on macOS, but you have to fix the config or do extra work yourself to get things working properly.
> In 2011, Apple released Mac OS X Lion which introduced an enhanced scrollbar behavior that made scrollbars hidden by default.
I don't think hiding scrollbars can really be termed an enhancement; 'mistake' at best or 'user-hostile behavior' at worst seem more appropriate.
The original Macintosh GUI succeeded so well because it made it so easy to see what was possible. It laid an awful lot of stuff out right there on the screen in front of the user, and what was hidden was easy to get at. They spent a ton of time testing and improving usability. Modern macOS, OTOH, just doesn't feel right anymore; I suspect it is designed to look good, not to be usable.
I noticed this a few years ago on a project I was working on, and ever since I've always enabled scroll bars on my work laptops.
More recently, a few weeks ago we had a bug in our app that was caused by an India time zone. Our company is fully U.S.-based so nobody noticed. Since then I've kept my computer in the India time zone so that time zone issues are readily apparent. I've already caught another one since then.
There's a more general principle here of "user empathy": configuring your environment so that you're in the same boat as your users.
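On Linux/macOS you don't even have to move the whole machine; pinning a single process to IST works for spot checks (assuming a typical npm test runner):

```sh
# IST is UTC+05:30 -- a half-hour offset flushes out naive
# "whole hours from UTC" assumptions that US zones never would.
TZ='Asia/Kolkata' npm test
TZ='Asia/Kolkata' date    # quick sanity check
```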
I have noticed this too. This page itself immediately distressed me with its deliberate overdone use of `overflow: scroll`, so much so that I honestly almost left it without reading, which seems a little weird when I reflect on it, but… ugh, multiple useless scrollbars. I think this must be what people mean when they describe something “triggering” them.
I think there are two parts to the problem: ① a popular developer platform using overlay scrollbars; and ② the fact that `overflow: scroll` sounds like what people want, when it’s actually not (as you say, they wanted `overflow: auto`). If I could rewrite the history of just this one property, I’d rename `scroll` to `always-show-scrollbar` or `show-scrollbar-even-if-insufficient-content-to-scroll` or similar. Or maybe split `overflow` in two and use `scrollbar-show: always;`.
Hmm. I wonder if we could convince browser makers to kill off `overflow: scroll`, making it equivalent to `overflow: auto` due to rampant abuse (there’s precedent for this sort of thing), and replace it with a new, more clearly-named property `scrollbar-show: always`. (And `scrollbar-{x,y,inline,block}-show` to go with it.) Maybe `always` wouldn’t be quite the right keyword, given that it wouldn’t be affecting the behaviour of platforms with overlay scrollbars. But this actually sounds both reasonable and feasible to me, given that `overflow: scroll` is subject to rampant abuse due to misunderstanding and was basically only a tiny quality of life thing for certain corner cases in layouts anyway.
Wow, that's pretty bad. I'm guessing the front end developers of those sites just do everything on a Mac and never check how the site looks on Windows or Linux.
The reason `overflow: scroll` is a better default is the need to account for every screen size or possible dynamic content. If overflow is set to `hidden` or worse `visible` your UI is going to break on some screen sizes. So basically you need to decide if you want extra scroll bars for some screens, or broken UI when a user shrinks the browser. It's very time consuming to get everything right on every screen size on every browser on every OS.
`overflow: scroll` shows the scrollbars always, and should just about never be used. >99.99% of the time, you want `overflow: auto` instead, which only shows scrollbars if they will do anything.
Designers don’t get it. What do they teach in design schools these days? Genuine question. I hope stuff like this is brought up? How do you even become a UI designer?
I took a "designing the user experience" class as part of my computer science curriculum (not in the design or art school).
It was much more rigorous than other design content I watch (e.g. Skillshare, YouTube), which is usually based on how something looks (emotion) and not the actual efficiency or usefulness of the design.
In the industry we really should distinguish between UI Architects and Aesthetic Designers.
On many mobile apps I've also found that the scroll bar is just a small dot that only appears once I scroll a bit, so it's very difficult to see how much is left to read. This is particularly annoying because I have to scroll down to estimate how much is left and then scroll back up to where I was.
I have no idea what school of thought has brought forth the no-UI UI, but I absolutely think they are idiots.
Ooo those hidden/slim scrollbars are annoying. It's as bad as an auto-hiding taskbar.
When I first saw it the first thing I did was to find a way to get rid of it. In Windows it's: Settings > Ease of Access > "Automatically hide scroll bars in Windows" > Off
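If you want to script that across machines, the Settings switch reportedly maps to a registry value (named `DynamicScrollbars`, per common Windows 10 write-ups -- verify on your build before rolling it out):

```bat
reg add "HKCU\Control Panel\Accessibility" /v DynamicScrollbars /t REG_DWORD /d 0 /f
```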
Great read, and nice examples to show how important the issue is (at least, relative to web design).
This just reminds me of how lovely it would be to have complete interoperability between every browser, every OS, mobile or desktop... And how unlikely it is to happen in the near future.
Great article! Short read and proves his point excellently by calling out big sites that are making the mistake he points out. The Snapchat example is the best, looks terrible on my Windows machine on Firefox.
Help Scout have this discoverability problem with dropdown menus - there's no scroll bar when the content overflows, so I had no idea there were more options until a colleague pointed it out.
I felt absolutely ridiculous and ashamed after that. So, I make a point of designing all my software with visible scroll bars and other visible features that a user could reasonably expect to find without having to hunt or guess.
Sure, it doesn't look as "pretty" but it's so much more useable.
Whenever I looked for website templates to purchase, I always hated those that had a jQuery library to "restore" the scrollbar.
I misunderstood the intent: I thought they were just replacing the scrollbar to style it, which as a side effect broke the behavior with a mouse on Windows/Linux. It took me a long time to understand that it was because macOS removed scrollbars by default, and not just designers being obsessed with styling them...
I recall a point by Jobs et al. wherein the original case for GUI drop-down menus was motivated by a clear desire for discoverability, versus the arcane keyboard shortcuts of the IBM applications of the time.
That stuck with me for a long time, and I remembered it when MacOS and iOS first began departing from that philosophy with their modern looks. I would say Jony Ive contributed a lot to this transition from user-friendliness to design-led, form-over-function.
I would very much want a way to toggle scroll bars on and off within the browser, e.g. a toolbar button I could add through an extension.
The fact that I can only toggle it at the OS level is pretty bad. I love the invisible scroll bars and would prefer to have them in all the apps I use myself, and I do respect the platforms and users who have visible ones, but I shouldn't have to choose between my own user experience and testing for theirs.
This trend is sickening. Windows 10 has a nearly 1 second delay when mousing over the Start Menu scroll bar. Why wouldn't I want a scroll bar there? It is the primary thing I seek when I click Start.
In the Gmail Admin interface for users, the scrollbar vanishes unless I have my browser nearly maximized. Do the designers live in a world where their browser is always nearly full screen?
You'd think huge companies would notice and care about these things.
On smaller screens I’m kinda ok with scroll bars appearing only when scrolled, but what’s with no scroll bars at all on some websites? That irks me like nothing else. I can’t even figure out how long the page is and whether I can finish reading it quickly or need to save it for a later time. What do the designers gain from such user hostile designs? And how are people paying for such designs without any oversight or reviews?!
I also have this issue with the disappearing window borders.
I remember on IRIX they were so bold, a real tool to be used. Same on Windows up to and including Windows 7. Now it's like you have a 2-pixel-wide invisible border, hard to click, which may be the shadow of the window or the actual edge of the window; it's hard, especially when working over VNC. Nothing is gained by freeing up those 10 horizontal and vertical pixels.
> If you are a front-end developer that uses macOS, I kindly ask you to dedicate a little more attention to how the websites you create behave on platforms other than your own.
I think this is a lost cause. A lot of macOS people are "zealots" and they feel that anyone who uses anything else is simply "wrong." I've long dealt with designers like this and realized I can't win.
And discoverability is the number one problem of the command line interface; that's why a lot of people don't like the command line so much.
It would be a lot nicer if, while typing a command, a list of options came up, along with a short explanation and example, and by clicking the option the full documentation for it would come up.
But that cannot happen; console apps have no connection to a UI.
All valid criticisms. The hidden scroll bar is also really hard to use. It removes the ability to jump to a position by clicking. It's hard to see sometimes too.
You'll notice that Apple loves to champion usability (HIG etc) ... except when it flies in the face of their minimalist aesthetic.
This is my most hated Apple "UI modernization." It's not just a problem with websites; apps, even Apple's own apps, don't deal with it well. I always turn scroll bars on all the time (I do a lot of work with very long documents or files, and just scrolling with the scroll wheel doesn't cut it for these), and every time I make a VNC connection using the built-in screen sharing app, the scroll bars are placed inside the remote screen area rather than outside it. So every time I connect to a remote screen, the first thing I have to do is enlarge the window so I can see everything.
Note that in some instances `scroll` instead of `auto` is warranted. When a scrollbar appears because of `auto`, the contents are laid out differently than with `scroll` (which always reserves the space), and the appearing bar can cut off some content.
I fell victim to this while implementing my personal website. An easy fix to be sure, and one that I think is worth it for the hidden scroll bars (which I prefer), but kind of annoying.
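Newer CSS has since added a middle ground worth knowing about: keep `auto` but reserve the gutter, so nothing shifts or gets clipped when the bar appears (support is relatively recent, so check it before relying on it):

```css
.content {
  overflow-y: auto;
  scrollbar-gutter: stable; /* reserves scrollbar space even while hidden */
}
```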
This problem has been endemic for so many years now, I wonder why browser devtools have an option to turn off caching but no option to force scrollbars on during development.
I really like the dichotomy presented in the argument that I should change my settings so scrollbars are always visible because they're ugly and not needed.
And then there is the new Firefox for Android, which decided to just remove the scroll bar entirely from its new tab list. So if you have, let's say, the now-usual number of open tabs, you have no idea where you are or how far along the list you are. I wonder what the reasoning behind that was?
I have dealt with this problem in an Electron app I designed and support. The different platform scrollbar behaviors drove me mad because of my insistence on a common experience.
I really like the hidden scrollbars that Apple settled on. They are easy to identify by hovering the cursor over scroll-able content and they just act as an overlay over the content. In windows, the content is actually shifted the width of the scrollbar which is terrible for UI consistency in some cases. There is the problem that hidden scrollbars remove the "discovery" aspect of traditional scrollbars, but I find this to be a very minor loss in practice. I don't miss having a sliver of a scrollbar for large content blocks.
I ultimately just settled on styling scrollbars in CSS, making them a bit slimmer and forcing that behavior on the Mac for consistency. Scrollbars look nice, match my UI look and feel, minimally shift content and look consistent across all platforms.
I threw away all of the custom JS approaches that try to mimic Apple's solution because none were perfect and, in every case, introduced new problems that disqualified them entirely.
"I threw away all of the custom JS approaches that try to mimic Apple's solution because none were perfect and, in every case, introduced new problems that disqualified them entirely."
I mean that a Mac with scrollbars enabled is still a different layout than Windows or Linux with scrollbars enabled. They all have different appearances, widths, etc.
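For the curious, a minimal sketch of that kind of cross-platform scrollbar styling (my own illustration, not the actual rules from the app; Firefox needs its own two properties since it ignores the `::-webkit-scrollbar` pseudo-elements):

```css
.scrollable {
  overflow-y: auto;
  scrollbar-width: thin;              /* Firefox */
  scrollbar-color: #888 transparent;  /* Firefox: thumb, then track */
}
.scrollable::-webkit-scrollbar { width: 8px; }  /* Chrome/Safari/Edge */
.scrollable::-webkit-scrollbar-thumb {
  background: #888;
  border-radius: 4px;
}
```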
Really, am I the only person here who can't understand the complaints about hiding scrollbars? I agree with a lot of things that resonate on HN, but this is one thing I have never understood.
Stop complaining about discoverability, everyone that knows how to browse the web knows how to scroll. It's like worrying whether the user will know how to use a mouse when you're designing a web page. Scrolling is not something like shortcuts or tabs - you need it to move around, which means you get to know it as soon as you start your computer and after the first knowing phase it's visual clutter.
Also, I would like to ask whether users usually even want to click on the scrollbar to move around. Apart from the small number of cases where the page is long and you want to move around quickly (and if you have ever been in that case, you know that even there the scrollbar is pretty useless because it's too sensitive - you are usually less precise with it, not more), there's virtually no reason to click it. Screen real estate is precious because not everyone is using a 27-inch display (13-inch display here), so that's a pretty big reason to hide the scrollbar.
Maybe I'm missing something obvious, but I can't think of a reason for scrollbars to exist in the first place. You always have a physical way to scroll. If a user doesn't know how to scroll, it's a problem with the device, not your software. The only purpose of the scroll bar is to show your current position in the page, which doesn't require persistence.
It's sometimes also the only indication that an element is scrollable. Usually there are other cues, an unfinished sentence or partially clipped text/graphics, but sometimes you're left with a perfectly plausible looking clip and would never guess there's more underneath.