Speed Is the Killer Feature (bdickason.com)
569 points by bdickason on March 2, 2021 | 528 comments



He's kind of totally wrong about the phones and thus about speed being THE killer feature. First of all, Symbian phones, which were the market-leading smartphones when the iPhone was released, were pretty fast. So were feature phones (i.e. dumb phones).

What the iPhone was a LOT better at than everyone else was UX, of which speed is one component, of course. It's funny how people never get it, even though it happened in front of us, it happened to us. At the time I was working at Nokia Research, and I remember my girlfriend telling me how her boss got this wonderful phone that you can take photos with and view them on, etc. The funny thing is that I had had such a phone since 2001. I had been working with smartphones for six years by then, she knew it, she listened to me when I told her or others what I was doing (and then listened to others responding "yeah, but phones are for making phone calls"). She saw me browsing the net on my phones (a 9210 Communicator and then a 9500), sending emails from the beach, etc.

Still, it somehow didn't register, because it looked like something that she'd never use. And then the iPhone, which did a lot less, made her and basically everyone else understand what a smartphone is. (Even though by then Symbian smartphones were pretty common, most people didn't use them as smartphones.)

So no, it's not simply the speed. It's the UX. And even if we talk about speed, it's still not raw speed but the perception of speed, about which a lot has been written: delay (lag) matters a lot even if average speed is OK.


As long as we're offering opinions on the iPhone's killer feature, mine is that it was access to desktop web sites.

Remember WAP[1] and WML[2], the HTTP and HTML substitutes for mobile phones too anemic/limited to support the real thing? Back then, many web sites simply didn't support access from a mobile device. (It's the polar opposite of "mobile first" or "mobile only".) A few did, but many just tossed up an error page.

With the iPhone, Apple put together all the key ingredients to be able to say, if you're on the go and suddenly need to access your bank's web site to check your balance or whatever, you will be able to, even if your bank doesn't support mobile devices. The experience may not be great, but it will at least be possible.

Those key ingredients included a big screen, a fast enough processor and large enough RAM to handle pages that were somewhat bloated, a browser that supported enough (JS, etc.) to make most pages work, and special features for making the most of desktop-oriented pages by zooming in on text. To some extent, Apple brought these key ingredients together by designing it that way, but they also did it by not entering the market until powerful enough hardware was available.

The iPhone flipped mobile web access on its head. Instead of implementing whatever was convenient and punting on 50+% of the web, leaving users at the mercy of web sites to decide if mobile access was worth it to them, Apple created a device and browser that took responsibility for doing anything and everything it could to make sites work.

The web is a killer feature for the internet, and getting meaningful access to the web was a killer feature for internet-connected mobile devices. Paradoxically, it worked so well that the platform was enormously successful and it became essential to offer mobile web support.

---

[1] https://en.wikipedia.org/wiki/Wireless_Application_Protocol [2] https://en.wikipedia.org/wiki/Wireless_Markup_Language


Windows Mobile came with a regular browser for years before the iPhone. There was nothing crazy about the first iPhone's specs either, pretty much in line with the rest of the market. While it wasn't big with regular consumers, WM had a decent smartphone market share at the time (Windows Phone would never come close). Amongst executives, managers and the like, it was absolutely dominating. When it first came out, many WM users (including Microsoft) considered the iPhone to be a joke. At best, another player in a crowded market. Sure, people liked some features. But they were considered easy-to-replicate gimmicks (and many were indeed quickly copied by enthusiasts). WM was way ahead as a platform. iOS didn't even let you install apps! Apple PCs/laptops were fairly niche at the time too. It wasn't obvious how Apple had a defensible advantage.

Quite frankly, I still tend to think it's as much about Apple knocking it out of the park with the UX (and marketing), as it is about Microsoft doing literally everything wrong in response. WM could have become what Android is today.


Nokia smartphones had HTTP+HTML browsers back in 2001. I don't know how many mobile-optimized sites there were, but the browser on my 9210 was pretty usable (though not everything worked, of course). I used to save full pages on my desktop and copy them onto the 9210 to read while on the go. (Well, we didn't have Pocket back then, and while I definitely wanted to develop something similar, I just couldn't make myself do it in Symbian's C++ dialect.)


I disagree in that internet on the original iPhone was terrible, and everyone I knew who bought one didn't really do much browsing on it. It was the UI/UX. Being able to touch-drag was so different from the indirectness of the buttons on previous phones, or using a pen on a squishy screen (I was the nerd that was previously into Palm and Windows CE(?) devices). I remember that what my friends who had the original iPhone showed off wasn't primarily the browser, but rather the pinch-zoom feature on the photos they took, the drag-bounce effect in apps, and of course just the whole thing of having a big screen with no keyboard or somewhat gaudy keypads.

But I also agree with you in that my own first iPhone was the iPhone 3G, and the 3G part and later the App Store became a major thing for the device, whether it was browsing the web or using internet-centered apps (chat, SMS, reddit, etc.).


Agreed. When I didn't need to boot up BREW or whatever it was called to access google at what felt like 1Bps anymore, it was a game changer. This alone was a huge improvement - I had real access to the web all of a sudden, from anywhere.


This is a great insight, I’m reading this from my iPhone now!


I think both you and Brad are right in some way.

The KILLER feature is the total time to do something that the user intends to do.

If you have a very fast OS, but bad UX then the rate-limiting step is the UX, not the OS. And the converse is also true.


Your app's UX is the language your users must learn and speak to convey their intent to the software.

Having an expressive vocabulary and complex grammar is great for saying a lot quickly if they’re fluent but painfully slow for anyone who isn’t.


IMHO it was more a problem of common functions being buried 5 menus deep in a sluggish UI.


Came here to say exactly the same. I would add the capacitive touch screen as another crucial factor that made the iPhone UX so popular.


The capacitive touch and the accelerometer allowed them to make a web browser that could display 'normal' web pages. Up until then everyone had been dicking around with mobile web sites, with their lack of ubiquity and the cost of building them... as well as the often ham-fisted attempts to guess why you were on the website from mobile... all of these hamstrung mobile browsing adoption.

With this in place, commerce could begin on the phone. Once everyone added mobile pay options, it could end there as well. And now everyone has one, if they can.


My mom had a touch-capable phone with a resistive touchscreen and hated it. Her fingernails were not huge or anything but they were long enough she had to press with the pad of her finger, not the tip, and it crippled her accuracy.


I think you're right. Speed is a component of the User Experience. My point in writing this is that when you abstract to a higher level, the beauty of the UX was that you can instantly do whatever you want. Your thought -> your touch -> action.

However, I think you make a great point that the two are interrelated.

My straw man starting point would be: A poor user experience that is lightning fast can still be a great experience.

But a great user experience that lags or is slow will typically not be successful.

The iPhone succeeded because its great user experience was so fast that it felt like interacting with objects in the real world.


AOL was very successful, even though it was slow and laggy, and calling the UX “meh” is being generous.

The iPhone won because it looked amazing and had the App Store. Looks and features. How did you reach the conclusion that it was speed?


The App Store for native apps came around a year after the iPhone.


So? That was an early adopter year. For the vast majority of people, the iPhone has always had an App Store.

Actually, that supports my point. The App Store came by because people wanted it. No one said: “oh no bloat my phone will slow down”.


I think you’re talking on a different level of granularity. What was really different? “The UX” sounds generic. Whereas Speed is very specific. Speed is part of the UX.

It would help me at least if you could specify/list what you think were the things in the UX that made it so much better than Symbian phones.


One word: gestures.

We can think of things like slide-to-unlock, but much more importantly scrolling. If anything shows the mastery of the iPhone UX of that time, it is definitely scrolling; who even categorizes it as a gesture now? It's completely normalized. On every other platform of that time, you had to play with arrows and the scroll bar. Now every platform has it.


Ah, very good. Yes, I agree. Simple things like a scroll bar in a browser window, right? I remember how difficult it was to tap that small little icon in the lower right corner of the screen. They simply copied mouse behavior onto the phones. And Apple thought this out from a zero-level perspective. Thanks for the insight!


Using a touchscreen pre-iPhone usually felt like executing commands on a computer application.

Using an iPhone felt like directly manipulating the underlying content. It was a qualitatively different experience that only superficially resembled previous touchscreen devices insofar as it used similar input hardware.

Note that Apple didn't invent the concept of a responsive UI thread with physics-based UI metaphors, for example Jeff Han demonstrated a fairly sophisticated example at TED2006[1] the year before. But to my knowledge the iPhone was the first mass-produced device with a direct manipulation interface.

[1] https://www.ted.com/talks/jeff_han_the_radical_promise_of_th...


Maybe. But the author says speed IS the killer feature (for products) so it has to be true at the higher level too. UX is the perception of the user of the product and their, well, experience using the product. If it's generic it's because it really is that generic, because users won't know why exactly they like a product.

But in the case of iOS vs Symbian:

- as others said: the capacitive touch screen (this is not an OS issue, but the iPhone was among the first to use it, definitely earlier than Nokia). This is huge. The thing that everyone (around me) was talking about when the original iPhone came out was how you could swipe to see the pictures. And it wasn't just for paging, it defined how you could interact with the phone (think pinch zooming and rotation - not sure when these were added).

- the touch screen UI itself. Nokia had played around with touch UIs before, but never really liked them. It was expressed several times internally that touch was just a no-go. But no wonder: the resistive touch screen is pretty bad, but also Symbian itself was built on the assumption that all you have is keys, while iOS was built with a touch UI in mind from the very beginning. (Of course touch was later added to Symbian, but that's just not the same. Or they didn't put in the effort. Nokia even had an experimental touch phone released to the market in 2003, the 7700[1], but it was mostly ridiculous.)

- the UI just was a lot more polished, looked better, classier, and the graphics were better. They had OpenGL and probably a graphics accelerator - nothing like that in Symbian, of course. (It even took the Android guys by surprise; I remember reading/hearing in an interview that when they saw a demo or the release, they realized they had to redo their UI from scratch. Before that they had this Blackberry-ish/Symbian-ish idea; they thought they were competing with that.)

- I'm pretty sure it had a better browser.

And this pretty much defines the experience, the feel a user gets from the phone. It couldn't send or receive MMSes (some people may have used that then, but most, I guess, just wanted to have the feature), it couldn't receive 'push' email, i.e. you had to manually refresh your inbox, emails didn't just arrive. It didn't even have apps. Symbian had all these. It had had them for years by then. It even had an app-store-like thing (at least you had to send in your app for verification, and it would then be signed by Nokia or it couldn't be installed - that was a new thing around 2004-2006, something I think nobody really did before).

[1] https://www.gsmarena.com/nokia_7700-570.php


My follow up question would be: Would a capacitive touch screen with 0.5s latency have made this the killer feature? Or did the capacitive touch screen enable high speed input?

I'd argue the latter, but it could be a question of framing?


I don't know. Again: I think the killer feature was the whole UX. Slow response times are definitely disturbing. (All my Android phones got into this state sometimes.) You just didn't get that feeling with the iPhone; it couldn't really happen because the UX was the center of the whole product. I'm not an Apple fan (never had an iPhone, and the early ones kept pissing me off when friends asked for help) but it's obvious that they are obsessed with UX and polishing the UI.

But you are right that the capacitive display itself makes the interaction faster, because a touch is enough, while a resistive screen has to be pressed. So resistive is probably slower and feels like you have to put in more effort.


I'd say based on the number of friends I have who use old or cheap Android phones with terrible latency, it really was the touch screen and UI/UX that played the main role.

Sure, for many, iPhones are still preferred because of the low latency, so it matters, but I suspect the iPhone would've done just as well if the latency was bad. The competition was about finger vs keys/pencil, not latency.


Thanks for the details. I remember the capacitive screens. They were awful! :)


Well, capacitive is what we have today. The old ones, that you actually had to push (and not just touch) were resistive :)


I meant those!


We see this time and time again: technology needs to be introduced multiple times before it gets adopted. The killer app is always the use case.

It helps that the iPhone was an iPod with a phone attached, instead of a phone with a multi-use compute device attached.

OG iPods were single-purpose music players, and features that made sense were slowly introduced over time (and were optional). Adding support for photo viewing made sense because album art is universal and, well, album art is no different from a photo. Adding video made sense because you have this nice color screen for showing the photos/album art, and music videos are a thing people enjoy. Then adding a camera made sense, because you can already view photos/videos. Once you have all that in one package, adding phone capabilities makes a lot of sense when you realize that people are carrying around iPods along with a cellphone.


I think it's partly that but it is also a matter of timing.

If we can consider the iPhone to be innovative, we cannot overstate how much timing was important.

The iPhone was a phone with a big screen, optimized for the internet. As for people who are not into technology, from what I can see, their main reason to go for a smartphone has been WhatsApp and free phoning in general. And as time went on, more and more services of all kinds, including administrative ones, became more practical to use on the internet than in real life.


The iPhone “did a lot less” than competing phones but the vast majority of users with an iPhone could do a lot more with the iPhone than they could with a competing phone.


I had a Sony Clié PDA which ran .swf flash games and the home screen had a grid of icons much like the iPhone 1 home screen. It was a gorgeous full screen display with touch and stylus. This predated iPhone by a few years.

Anyone else have a PDA and see the glaring opportunity to add cellular functionality to them?


I never saw (or see) the need to combine them. Early 2000s I figured in-ear phones (headset minus the phone) would be a thing Real Soon(tm), and then a PDA would be all I need to be productive. I still miss Palm and the apps available on it. Naturally, there is benefit to a connected PDA, but cellular? I wouldn't miss it, if my phone was just my headset.


It was actually kind of slow at the time, especially its network connection. I agree the UX was a game changer. The big screen and responsive touchscreen made it a joy to use. For me, Google Maps in my pocket was the killer feature, and it worked well even without GPS.


I vividly remember using a kiosk to order a sandwich at a gas station 3 years ago... Not because the sandwich was great, not because it had a great logo, or a great name...

The INSTANT I hit the button to complete the order, the built in printer almost spat the ticket at me. I ordered a second sandwich just so I could get a video of that happening again.

Edit: Just found and uploaded the video :) https://youtu.be/TX_-dXIpPvA

Edit2: looks like it was a soda, not a second sandwich.


I wish I had that at work. We have self serve kiosks in the cafeteria and my muscle memory has made me faster than the display. I pretty much operate it in a constant loading-icon state now. The part I actually wait for is the 2 seconds for the printer.


And then over here we have a fast food chain whose kiosks are just laughably slow. Scrolling stutters, animations are laggy, and taps take what feels like a second to register. Burgers are okay tho.


Holy shit! That's like the Rolls-Royce of kiosks. I wish car infotainment systems had this level of responsiveness...


This is amazing!! In a world full of 'please wait for your receipt' and dot matrix noises... this feels totally magical.

Thanks for sharing this, it's crazy how much super fast experiences still surprise us.


Interestingly... too fast can be a problem too. A prime example is this site, Hacker News, on a really fast browser. I'll often sit there waiting for a navigation to load, until I realize it had already loaded (to a similar looking page), just so fast I didn't notice the transition.


That’s why animations are important contrary to what many say here in the comments (and I agree that putting them here and there just for the sake of it is bad). They give “life” to a virtual object. For example I really like tiling window managers, but they could use a really fast animation for window switches because the changing number at the top of the screen doesn’t say as much as movement to my primate brain.


When I saw NJ in the video description I knew it was a Wawa before I even hit play. Those kiosks are their bread and butter (no pun intended).


Wow, even the printer is fast in itself.


That's a very normal Seiko (or similar) receipt printer. I spent a lot of time programming them. Humorously, the programming manual is marked "confidential" (I guess to make it hard for anyone to make compatible printers), but there are copies of it all over the web, and there are plenty of compatible printers ;).

The POS app that I worked on (not related to the one shown in the video) also went to pretty serious lengths to get rid of the pause between the user pressing "enter" and the receipt coming out. The store operators rightfully insisted on this, because they wanted to keep the checkout lines moving as fast as possible.

I liked those printers and remember wanting one for myself even though I had no use for it. They start at around $200 and take up space, so I managed to resist.
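For anyone curious what driving one of these looks like, here's a minimal sketch, assuming an ESC/POS-compatible model on a serial port and the pyserial package (Seiko's own command set differs in places, so the exact bytes are illustrative):

    # Minimal sketch: print a ticket on an ESC/POS-compatible receipt printer.
    # Assumes the printer is at /dev/ttyUSB0 and pyserial is installed; the
    # exact command set varies by vendor, so treat the bytes as illustrative.
    import serial

    ESC_INIT = b"\x1b\x40"      # ESC @  - reset the printer
    GS_CUT   = b"\x1d\x56\x00"  # GS V 0 - full paper cut

    with serial.Serial("/dev/ttyUSB0", baudrate=19200, timeout=1) as printer:
        printer.write(ESC_INIT)
        printer.write(b"ORDER #42\n")
        printer.write(b"1x Italian hoagie\n")
        printer.write(b"\n\n\n")  # feed a few lines so the text clears the cutter
        printer.write(GS_CUT)

Part of why these can feel instant is that there's typically no print spooler in the way; the POS app just writes bytes straight to the port.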


That's amazing, thanks for sharing.


as a frequent patron, I read your comment and KNEW it had to be Wawa!


this is to be expected, wawa exudes excellence in all it does.


Probably because there is a financial incentive for speed. :)


And yet so many are slow as mud


Wow, someone in HR at work should find that dev and get him/her to work for us :-)


I would also say the Costco self-serve food kiosk order system is fast as well.


I don't know how Costco does it, but if you use tap to pay or insert your card into the chip reader while the cashier is scanning your items, at the end when the cashier presses the button to indicate they are finished scanning, it instantly says "approved" and prints a receipt.

Costco is the only place where I've seen this. I don't understand how it gets an authorization for any amount that fast, since it can't know the total while the cashier is still scanning items, and it's Costco, so it could be anywhere from $50 to $5,000 so surely it's getting the authorization after the transaction finishes? The flow is almost perfect. I have them scan my Costco membership, I use tap to pay on the card reader, then I or 2nd cashier move to organizing the items into the cart, and then the cashier hands me a receipt with basically zero wasted time.


Costco has an advantage over everyone else -- they already know who you are before your purchase is complete. By scanning your membership card, they already have your average purchase profile.

They actually ask the credit card processor to approve you for $avg + X%, so as long as your purchase comes in lower than that, you've already been approved. If you make a really big purchase it will take a little longer, because they go back for a second auth for the bigger amount.

It's also why you'll see some people making $700+ purchases without having to sign anything -- because Costco already knows they do that every week and pay the bill on time so they assume part of the risk.
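Purely as an illustration of that flow (the real Costco/processor integration obviously isn't public, so the numbers and names here are made up):

    # Illustrative sketch of the pre-authorization flow described above; the
    # real Costco/processor integration isn't public, so numbers and names
    # here are invented for illustration.
    from dataclasses import dataclass

    HEADROOM = 0.25  # pre-authorize for 25% above the member's average basket


    @dataclass
    class PreAuth:
        card_token: str
        amount: float


    def authorize(card_token: str, amount: float) -> PreAuth:
        # Stand-in for the card-network round trip (the slow part).
        return PreAuth(card_token, amount)


    def start_checkout(card_token: str, average_basket: float) -> PreAuth:
        # Fired as soon as the membership card is scanned, while items are
        # still being rung up.
        return authorize(card_token, average_basket * (1 + HEADROOM))


    def finish_checkout(preauth: PreAuth, total: float) -> str:
        if total <= preauth.amount:
            return "approved"  # no network wait left at the end of the lane
        # Unusually large basket: go back for a second, bigger authorization.
        authorize(preauth.card_token, total)
        return "approved (second auth)"


    preauth = start_checkout("tok_member_123", average_basket=180.0)
    print(finish_checkout(preauth, total=212.50))  # within headroom -> instant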


Interesting, I didn’t think Citibank and Costco would have made that kind of arrangement.


Citibank bent over backwards to get Costco. A few years ago Amex was the only card you could use at Costco. Citibank agreed to not only make a new card with better rewards to compete with Amex, but they agreed to honor all the reward points too. They just rolled over from Amex.

I'm sure Costco got a deal that gives them nearly at cost processing and a bunch of other stuff.

It was worth it for Citi too. The moment I got my Costco Amex replaced with a Costco Citi, the Citi became my primary card, because everyone takes Visa.


Walmart sometimes gives me the "approved remove card" notification BEFORE the cashier has finished ringing up the purchase; I assume they've made a deal that lets them do that.


They do have a very small menu though.


Damn I miss Wawa


This I think is a key reason Netflix is a default ‘channel’ in my mind, whereas Apple TV, amazon prime and Disney plus are all just apps.

Netflix is faster in every way. There’s a button on my TV specifically to launch it, the videos start faster, fast forwarding is faster, there’s less buffering in general. Every single touch point is fast. And it’s because they put the effort in where the others didn’t.


Definitely. I don't know if developers for e.g. TV apps get much choice in the matter, but it's like native vs webapps. The Amazon app feels like a webapp, while Netflix like a native app (this is on LG's WebOS).

And I know Apple is a weird one there. On the Apple TV, they offer pretty much a version of iOS. There are multiple options to build your UI, but iirc you can build it native if you want to.

And this has been Apple's differentiator; they were FAST. The code for apps compiled down to native, as opposed to a lot of Java based phones at the time (and later with Android).

I've always maintained that Apple had a 5 year head start on Android when it comes to performance (as well as UX, even in their skeuomorphic designs), and after 5 years it was mainly Android smartphone companies focusing on more performance than the Android OS or apps becoming faster. It was Android phones that went for quadcore (and beyond) processors first, while Apple was just fine with a single core, and later, almost reluctantly, a dualcore. Simply because their earlier technology choices made their stuff so much faster and more efficient.

I'm so glad Apple didn't go ahead and make web technology the main development path, as they initially planned (or so I gathered).


Yeah it definitely feels that way. I reckon it has a lot to do with the servers too though. Even netflix.com is far superior to prime video / apple tv+ browser versions. In fact it feels virtually identical to the app version.


For Netflix it has a lot to do with how they integrate with the TVs. They tend to integrate directly with the chipset vendor, and then ship their own SDK that the TV vendors integrate. Everyone else is relegated to terrible, shitty webapp-like development with no debugging capability. So for smart TVs at least, Netflix is on a whole different level than everyone else.


Netflix is so much superior to Prime. Prime has a hard time maintaining 1080p, but Netflix handles varying bitrates from as low as 1 Mbps to as high as 20 Mbps while watching The Crown. And the best part is how snappy the app itself is and how it instantly starts playing anything. Apple and Prime have a lot of work to do. Prime is possibly the worst streaming platform currently. Though their own Fire Stick is superior in every way compared to the iOS apps or the web app.


I’m in Japan, with an English-language Amazon account, yet Prime insists on displaying Japanese subtitles on absolutely every thing I watch. Doesn’t matter what the original language of the content is, doesn’t provide an option to turn it off. Huge, bright white subtitles - much bigger than what Netflix uses. Been this way for years.

Sometimes I have fantasies about sending an email direct to Jeff Bezos just to say: dude, did you know about this?

Suffice to say, I don’t watch much Prime.


It's a lot of stuff like this that adds up. Prime and Disney+ also seem to completely forget what I'm currently watching all the time - and even if they remember, it will restart a minute or so earlier than where I left off. Netflix is always bang on the second. Netflix also always has a 'skip recap' and 'skip intro' button. These things don't happen by chance. Someone worked hard for that!


Seems like all streaming providers have issues with global licensing. Why can’t I pick from all the languages that the provider has available? Why lock it per country Netflix?


In Belgium we have a 60/40 language split between Dutch and French. Amazon Prime insists on promoting Dutch things to me (with a special section on the home screen), while I live in the French-speaking part. No such issue with Netflix, of course.


They are not only displayed, they are hard-coded, and there are usually two separate videos (dubbed and subbed) for non-Japanese content. That being said, new content is starting to have multiple audio and subtitle tracks (like Netflix).


May be dumb to say, but have you checked to confirm you don't have subtitles enabled? If you bring up the player controls you notice a little "cc" in the lower right hand corner (at least in the English version as shown on my TV). If you click on the cc you can configure the closed captions, turning them off or on, changing language, or changing color.


Prime has burnt in subtitles for approximately 90 % of the content outside the US. Which actually might have something to do with their garbage-tier video quality: unnecessary recodes and burnt in subtitles = multiple copies, so more incentive to use bit rates straight from the shitter.


The terrible quality and slowness I think is a remnant from them buying LoveFilm and rebranding it as Prime video all those years back, which was based on that awful Microsoft DRM that I can't remember the name of, was it Silverlight?


It was silverlight. Might still be installed on my desktop.


They are hard coded on Japan prime video (mostly).


Netflix also did a bunch of deals with ISPs and essentially have mirrors of their catalogue in all the right places, and use machine learning to guess when different shows will be watched and shuffle around what they have in those caches.

Meanwhile Apple TV+ will just go ahead and try to use a super heavy 4k stream on my iPhone over 4G - won't even let me download it (at least this was the case a year ago, the last time I was out of wifi range).


There is one very niche area that Prime is quite good and it's their VR app. I have Prime on my Oculus Quest and they've really nailed the player itself, it's like being in an actual cinema. Netflix also have a very good VR app, but none of the other streaming platforms I use have put in any effort in this regard.


Prime has 4k UHD for free, while Netflix doesn't. For me, that's quite an advantage.

Also, given the choice, I'll rent a movie on Amazon because they even give refunds if they detect the quality was low.


Personally, I prefer Disney+ over Netflix/Prime as my experience is that Disney+ comes with more subtitles. Most of the time I can get Dutch subtitles while with Netflix UK that's not the case.


For me in germany prime always has instant, rock solid 4k.

Netflix also never stutters, but it starts with like 480p and gets better over time.

Nevertheless the Netflix UI is far superior.


The HBO app on my (admittedly low end and not brand new) Roku is laughable. You have to select a program to watch, put down the remote and go grab a snack or something because it takes about a full minute to load the screen with the show details. All that to load some thumbnails. And if you dare accidentally hit a button before it’s fully loaded, it will crash the app half the time, and the whole OS about 1/10 times. Instead of replacing it with something more performant I use it as motivation to not watch TV and go exercise or read a book instead.


Agree 100%, one of the main reasons I'm resubscribing to Netflix. I just wish everyone wasn't pulling their content to their own streaming services. I have only tried Netflix, Apple TV+ and Disney+, and have to say that Disney+ is the worst of them all, with a sluggish UI and a webpage that constantly crashes the Safari tab it's running in with the "using too much memory" error. And to add to it, even if you pick English as your language, it still serves animated movies with the actual signs/text in them in the localized language.


You never used Disney Life, the decrepit predecessor to Disney+. Now that was a shoddy app. It used to forget your password every time it did an update (which was very often), it crashed all the time and didn't even have very much content. But I still subscribed because... kids. This is how they knew Disney+ would be a goer, they already had a lot of poor parents sending them nearly as much money for a TERRIBLE app with relatively little content.


Netflix is faster, but Netflix also "always works". If I click on a button, I never get a timeout, and it never just does nothing.

The amount of engineering work going into that must be amazing.


My experience with Netflix is the opposite. When you press play after pausing a show, there is an eternity during which it has a dark overlay blocking your view of what you're trying to watch.

Why do content streaming platforms assume that you want to watch everything other than the content?!

YouTube does this too on my TV, and it's infuriating. Not only does it hide half the screen for a long time after the content starts playing, it then helpfully hides most of the screen before the end of the content also!

This is the computer equivalent of someone shoving their hand in your face to block your vision.

It's rude when a human does it. It's rude when a computer does it.


This surprised me, because on my iPad it takes over 15 seconds from launching the app to choosing who's watching.


Apple TV and Apple Music is particularly bad for this, I wouldn't be surprised if the apps are just showing a web view rather than native controls.


I find this argument completely ridiculous. As if content is less important than UX. Give me a break.


Don't get me wrong, I still subscribe to all these services, because of the content.

But I'll always check Netflix for something to watch first because it's faster and easier (unless there's something specific I know i want).

Being the default first choice is very valuable, and speed is the reason they're it.


Content is more important than UX, but bad UX is a deterrent from enjoying the experience of finding or discovering what you want to watch.


This might be because I am a former semi-pro Quake3 player but these days I grind my teeth with 95% of all software. Everything feels like it has at least 200ms delay injected, on every transition.

I'd honestly pay extra for an iPhone where I can disable ALL motion, too. But that's the least of the problems.

I don't want to become the grumpy old grandpa yelling "back in my day!..." but we have waaaaaaaaaaay too many young JS devs who know nothing else.

We need to get back to native UIs. Is it awfully hard? Yes, it is. Do the users care how hard it is? No, they don't. Many people just want fast UIs.

But to be fair to all sides -- there are also a lot of people who don't notice certain slowdowns that I scoff at. So there is a middle ground that won't cost billions to achieve and IMO that should be chased after.


It’s awful. Everything web-based is slower on my 4.5 GHz MacBook Pro than things were on my 300 MHz PII running Windows 98. Every web page causes macOS to complain “Safari is using a lot of power.” I was hunting for a project management web app, and one ate up 10% of the CPU just sitting there doing nothing. This has gotten particularly bad with Microsoft, with Word and Outlook on the Mac, which just kill battery life. (I think they’re using more and more JS under the hood, and I hear Outlook is slated to be replaced with a web app.) Teams is a bloated pig.

The crazy thing is that all these web apps also do a fraction of the things that the native apps used to do. They’ve somehow managed to strip down all the features while making the apps slow and bloated. Watching Microsoft’s To Do blog is tragicomic. Elon Musk will be living on Mars before the Microsoft tools allow you to schedule todos by dragging them to the calendar like Outlook has done since, what, 98? (You can drag a todo from the web sidebar to the calendar now, but it somehow doesn’t actually schedule the due or start date in the todo itself or even have any link back to the todo.) And I feel like that’s one thing that’s different now. I also complained that Word 97 was a slow, bloated pig compared to WordPerfect, etc. But back in the day there was feature bloat. Now, everything is both slow and non-functional.

I have to assume that it’s a structural thing with the industry. Machine learning, big data, security, etc., has become the hot areas, so all the “A” teams have migrated over there. I hear Apple is having trouble even getting people to do kernel work on MacOS.


I'm convinced my retirement gig will be writing nice, native apps for my platform of choice.

They won't bring in a ton of cash, but I can continue to make beautiful apps that are fast, focused, and respect the user's time and computing resources.


I just made one of these! I learned Swift to build it. Fast, focused, uses as little memory and CPU as I can manage for a (lightweight) video editor.

It's been fun to work a bit closer to the metal than I've been with JS for the last few years. Made about 50 sales so far. Can't imagine it'll make me rich but maaan it makes my video editing way faster :D


Your app seems great from what you have on its webpage. But the webpage made my AMD Threadripper-based tower spin up the fan like hell broke loose. Closing the tab in Firefox immediately stopped the noise.


Great work on the product and marketing copy there!


Thanks!


That's why I designed a Haiku-native video editor with over 30 effects that does 4K UHD video, 3D extruded fonts, and GLSL plugins, and the package is 1.2 MB in size (Medo for Haiku OS).


Things is a great example here. Lightning fast, lets me quickly add or re-order todo items, and does nothing else.


Which GUI framework will you use?


If I had to pick right now, I'd choose macOS for a platform.

For tech, I'd consider both Cocoa + Swift and SwiftUI as candidates for UI components, on a case-by-case basis. Swift is not my favorite language (feels like I have to use Xcode; have yet to try out the JetBrains IDE), but it gets the results I want. Perhaps in the future, we can use Rust in a more ergonomic fashion to talk with native UIs.

Honestly, I'd love an ObjC-like language that interops with ObjC and has strong static typing with a dynamic typing escape hatch for metaprogramming.


The JetBrains IDE for it (AppCode) is pretty nice, but you have to use Xcode for storyboards and UI design; other than that, light years ahead of the Xcode experience.


IDK, AppCode always seemed so resource hungry.. but yeah it's worth a try I suppose. I believe the Xcode experience isn't too bad however.


Using a bloated non-native app to develop your elegant, fully native app. Uh huh.


Java is fast, unlike JS. Perhaps one day JS will be fast, too.


Good to know, I'll give it a shot!


My uninvited suggestion: take a look at the FOX Toolkit. A truly lightweight non-themeable GUI toolkit written in C++, for Windows and Unix/X11. It's actively updated, but it's essentially a one man operation these days.

http://fox-toolkit.org/


The first screenshot they show you (on the screenshots page) is a Windows XP program. I can't say that inspires much confidence. Am I wrong?


I can confirm it compiles with the latest Visual Studio and runs fine on Windows 10 in both 32-bit and 64-bit. (Well it did last time I checked, haven't tried the very latest release.) You're right the screenshots are ancient, but the code itself is still being updated by the project's maintainer Jeroen.

The FOX codebase isn't terribly modern, as it's older than the standard C++ concurrency machinery, but it works.


It does look dated, but I use it daily (I use the xfe file manager) and it is bloody quick - every action is almost instantaneous compared to the KDE, GNOME, MATE or Cinnamon file managers.

It depends on the target market for your application I suppose - if your target won't be happy unless they have html/CSS or similar animations, then using something with low latency isn't going to make them happy.


> It does look dated

Personally I don't mind the Windows 98 look, it strikes me as clean and no-nonsense. Everything is clear and high-contrast. Unlike with many 'flat' themes, it's generally clear what's clickable. I realise not everyone likes the Windows 98 look though.

If someone is serious about developing fast GUI apps, trading off on themeability is the kind of thing they should consider. As you say, FOX really is fast. I presume this is because of its uncompromising hard-coded native-code-only approach - it's just a C++ codebase. All the drawing operations are implemented directly in C++. Unlike Qt, there's no JavaScript. Unlike JavaFX, there's no CSS. It's all just C++.

Perhaps a GUI toolkit could add themeability without any performance impact by implementing it as a compile-time abstraction.

> depends on the target market for your application I suppose - if your target won't be happy unless they have html/CSS or similar animations, then using something with low latency isn't going to make them happy

Right, but mattgreenrocks said fast, focused, and respect the user's time and computing resources, presumably in contrast to current norms.


OUTLOOK! Jeez has it gotten slow on my mac. I am not a particularly fast typist, but I can routinely out-type outlook by a whole sentence. Moreover in the latest version, if I hit command-R and start typing it will routinely take so long to just start replying to a message that it will drop the first 90 characters I type. I've seen rumors that microsoft will replace it, and I cannot wait until that happens.


Outlook as a native application on Windows 10 on a recent Dell laptop is so slow that I have deleted the wrong e-mail in my inbox because I'll hit the trashcan icon and by the time Outlook notices, it's added new messages, moved things around, and then think that I clicked the icon on the message that now appears where the original did.


This is a major problem with Outlook now. I’ve done it several times, where Outlook is thinking and moving stuff around between when I target the thing I want to hit and when I move the mouse.


I remember Outlook on windows 10 actually adding animation to my typing to smooth out the flow of words. I disabled that immediately and I’m usually pro eye candy, but that was a step too far.


same. it was the dumbest feature i've ever seen.


Glad I'm not the only one experiencing this. I have a brand new i7 Mac and outlook is laggy just switching between emails or inboxes.

Also, if I click the "Switch to New Outlook" button, it says that it can't copy over my custom IMAP accounts for work. I would think that supporting things besides exchange or gmail accounts would be something they would do before releasing a new version.


Weirdly enough it seems like Outlook on the web is somehow faster than the Windows version. It might be because lots of email uses HTML and Outlook is using an ancient version of HTML. I am very impressed with developers who can make things consistent in Outlook as well as actual browsers.


Outlook web is slow as molasses. On the desktop it is literally unusable for me (it never opens my account). Both things were superior experiences in 1999.


Outlook on the web seems to be getting most of the development effort, in part because supposedly its parts are increasingly shared with Windows Mail/Calendar (aka "Mobile Outlook"), supposedly through React Native, but also in part because apparently that's just where most users use Outlook in 2021 (even in many MS365 shops, supposedly, there are a bunch of companies that prefer the web app).

There have been a bunch of interesting rumors that Microsoft is planning to hollow out the insides of Outlook Desktop (anything that isn't nailed down to big corporate contracts and their extensions), and directly replace those guts with Web Outlook via React Native or something like it.


I think at this point they could hollow out Outlook and replace it with a guy who draws the interface on a whiteboard and then sends me a photo of it. That might have similar round-trip latency. /s

Really, a web app wrapped in a desktop app would be fine if it could perform better. I don't even need good, just better.


It's really quite funny, as Outlook used to be the 'killer feature' for an operating system; now it just makes people want to be a killer.


This is a specific nickpick but you won’t make me miss Outlook desktop. It’s crazy old and big, and for basic email stuff, its web app counterpart is much faster.

But anyway in the enterprise sector, it doesn’t matter whether an app is web or native, it will be slow regardless lol.


And just to confirm the forces in play here: enterprises care primarily about business outcomes of software, license cost, and support risk, with end-user experience being very far down the priority list except for a very few productivity applications where UI responsiveness actually matters for increasing employee output (fewer than you’d think). In short, the users aren’t the customers.


Yup. That's exactly why enterprise software almost universally sucks.

This could really be applied to any good or service where the purchaser is not the end user. For example, in the U.S., dealing with your health insurance company is a nightmare, and a lot of that has to do with the fact that it's your employer who's the customer. If the health insurance company treats you badly, you can't go with another provider, so they're free to offer terrible service so long as they don't piss off your company's HR department, which decides which health plans to go with.


> ...except for a very few productivity applications where UI responsiveness actually matters for increasing employee output (fewer than you’d think).

If there is UI, UI responsiveness matters for employee output.

Research that has been done on this topic suggests that an increase in UI latency non-linearly decreases user productivity, with the ultimate effect on the cost of doing business.

And that has been known for decades - take a look at the "The Economic Value of Rapid Response Time" from 1982:

https://jlelliotton.blogspot.com/p/the-economic-value-of-rap...

It's puzzling to me why businesses still don't prioritize UI latency, but it's not a rational decision.

Perhaps it's just human nature, as hinted in the linked article:

"...few executives are aware that such a balance is economically and technically feasible."


Can someone explain why from the mobile version of Outlook (OWA) I can't send an email marked with Urgent/High priority/importance?


That's only for managers.


nitpick. Yes, this is me nitpicking XD


I have no good theory about why that is except that maybe more and more business people are under the illusion that "software is being increasingly commoditized" which is of course not true.


> ..."software is being increasingly commoditized"...

I only wish this were true; then value propositions for software could climb a value ladder. The challenge is that businesses are not standardized beyond some very basic functions, and new standardization comes at a brutally high cost (time and expense). So I see where office productivity has settled on Microsoft Office (though even there I see huge fragmentation between versions, how people don't use styles, how most people have no idea of pivot tables in Excel, etc.), and we've pretty much just crawled along at a snail's pace since then.

If anything, judging by how little I can transplant of business processes that emerge around software from one company to another, and how much those processes mutate over time, I would assert software standardization is getting worse, because getting businesses to standardize even when moving to the cloud has been a bigger challenge than I anticipated.


Their perception sort of perpetuates it.

I've seen devs arguing this, though IMO that is more the devs speaking out of resignation and learning to say the right things rather than the truth.


Is it possible to use the web only as a platform to deliver the newest version of your native application?

- User visits website

- Downloads binary (preferably small size; use an appropriate language and cross-platform graphics library)

- Launches it (preferably without installation)

- Perhaps creation of a local storage directory on the file system is needed the first time

- And voilà!

What would be the main obstacles to such a workflow? Are there projects who try work like this?


Zoom?


It is awful, but there are some positive tradeoffs like security and flexibility. For example, there have been a zillion vulnerabilities with native Office over the years. Visual Studio is a terrible pain to skin or customize its look and feel compared to VS Code.


I feel the same way. We have way too many people working on tooling who don't know how to properly make things fast.

On some days, I manage to type faster than Xcode can display the letters on screen. There is no excuse for that with a 3 GHz CPU.

And yes, 200ms seems plausible to me:

Bluetooth adds delay over PS/2 (about 28 ms). DisplayPort adds delay over VGA. LCD screens need to buffer internally. Most even buffer 2-3 frames for motion smoothing (= 50 ms). And suddenly you have 78 ms of hardware delay.

If the app you're using is Electron or the like, then the click will be buffered for 1 frame, then there's the click handler, then 1 frame of delay until the DOM is updated and another frame of delay for redraw. Maybe add 1 more frame for the Windows compositor. So that's 83ms in software-caused delay.

So I'd estimate a minimum of 161ms of latency if you use an Electron-based app with a wireless mouse on a DisplayPort-connected LCD screen, i.e. VSCode on my Mac.
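To make that arithmetic explicit, here's a tiny sketch that just adds up the same ballpark figures (the parent's estimates, not measurements; the 83 ms software figure is split into five frame-length steps):

    # Sums the rough latency estimates from the comment above. These are
    # ballpark guesses, not measurements of any particular setup.
    FRAME_MS = 1000 / 60  # one frame at 60 Hz, ~16.7 ms

    latency_ms = {
        "Bluetooth vs PS/2": 28,
        "LCD motion-smoothing buffer (3 frames)": 3 * FRAME_MS,
        "input event buffered for one frame": FRAME_MS,
        "click handler runs the next frame": FRAME_MS,
        "frame until the DOM is updated": FRAME_MS,
        "frame for redraw": FRAME_MS,
        "frame for the OS compositor": FRAME_MS,
    }

    for name, ms in latency_ms.items():
        print(f"{ms:5.1f} ms  {name}")
    print(f"{sum(latency_ms.values()):5.1f} ms  total")  # ~161 ms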


The IDE is an extreme case of user interface.

You type in a letter and that starts off a cascade of computations, incremental compilation, table lookups, and such to support syntax highlighting, completion, etc. and then it updates whatever parts of a dynamic UI (the user decides which widgets are on the screen and where) need to be updated.

It almost has to be done in a "managed language", whether that is Emacs Lisp, Java, etc., and is likely to have an extension facility that might let the user add updating operations that could use unbounded time and space. (I am wary of adding any plug-ins to Eclipse.)

I usually use a powerful Windows laptop and notice that IDE responsiveness is very much affected by the power state: if I turn down the power use because it is getting too warm for my lap, the keypress lag increases greatly.


If kicking off incremental compilation is causing the IDE's UI to behave sluggishly, then the IDE is wrong. The incremental compilation or other value-adds (relative to a text editor) should not create perceptible regressions.

Table lookups for syntax highlighting can't be backgrounded, but they should be trivial in comparison to stuff like compilation, intellisense, etc.
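A minimal sketch of that principle (names are illustrative, not any IDE's actual API): the keystroke is handled synchronously, and only the expensive lookup is pushed to a worker that reports back when it's done.

    # Sketch only: the UI path appends the keystroke immediately, while the
    # expensive completion lookup runs on a worker thread. Names are made up.
    import threading
    import time

    def slow_completion_lookup(prefix):
        time.sleep(0.2)  # stand-in for indexing / type inference
        return [w for w in ("speed", "spectrum", "special") if w.startswith(prefix)]

    def on_completions_ready(items):
        print("completions:", items)  # a real IDE would update a popup here

    buffer = []

    def on_keypress(ch):
        buffer.append(ch)  # cheap and synchronous: no typing lag
        prefix = "".join(buffer)
        threading.Thread(
            target=lambda: on_completions_ready(slow_completion_lookup(prefix)),
            daemon=True,
        ).start()

    for ch in "spe":
        on_keypress(ch)
    time.sleep(0.5)  # give the workers time to finish in this demo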


I'm a bit of a language geek but I've always been confused by IDE lag, so I figure there's something I don't know.

From a UX perspective, I can see doing simple syntax highlighting on the UI thread...so long as it is something with small, bounded execution time. I don't quite get why completions and other stuff lags the UI thread, as it seems obvious that looking that information up is expensive. I can't tell if that is what's happening, or there's something more going on, such as coordinating the communication between UI/worker threads becomes costly.

I've seen it in a bunch of IDEs though, especially those in managed languages. You're typing, it goes to show a completion, and then....you wait.


I’m amazed at how much faster Rider seems to be than Visual Studio at its own game. Intellisense is way slower than the C# IDE made by the people who make Resharper. Resharper in visual studio is always really slow though.


> DisplayPort adds delay over VGA

Surely VGA would have more latency than DP for an LCD? It's gotta convert from digital to analogue and then back to digital again at the other end.

Is the overhead of the protocol really greater than that? (genuine question)


I meant to compare DP+LCD vs. VGA+CRT.

But to answer your question, digital to analogue and analogue to digital conversions tend to be so fast that you don't notice. It is more of a convention thing that most VGA devices will display the image as the signal arrives, which means they have almost no latency. DP devices, on the other hand, tend to cache the image, do processing on the entire frame, and only then start the presentation.

As a result, for VGA the latency can be less than the time that it takes to send the entire picture through the wire. For DP, it always is at least one full transmission time of latency.


DP does not require buffering the entire frame. Data is sent as "micro packets". Each micro packet may include a maximum of 64 link symbols, and each link symbol is made up of 8 bits encoded as 8b/10b. The slowest supported link symbol clock is 1.62Gb/s, so even considering protocol overhead there are always millions of micro packets per second.

If the required video data rate is lower than the link symbol rate the micro packets are stuffed with dummy data to make up the difference, and up to four micro packets may be sent in parallel over separate lanes, so some buffering is required, but this need only add a few microseconds of latency, which is not perceptible. Of course it's possible for bad implementations to add more, but the protocol was designed to support low latency.
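Plugging those numbers in (slowest link rate, a single lane), a quick back-of-the-envelope check:

    # Back-of-the-envelope check of the micro-packet figures above, at the
    # slowest DisplayPort link rate on a single lane.
    LINK_RATE_BPS = 1.62e9     # 1.62 Gb/s per lane
    BITS_PER_SYMBOL = 10       # 8b/10b: 8 data bits travel as 10 line bits
    SYMBOLS_PER_PACKET = 64    # maximum micro-packet size

    packet_ns = SYMBOLS_PER_PACKET * BITS_PER_SYMBOL / LINK_RATE_BPS * 1e9
    packets_per_sec = LINK_RATE_BPS / (SYMBOLS_PER_PACKET * BITS_PER_SYMBOL)

    print(f"{packet_ns:.0f} ns per micro packet")            # ~395 ns
    print(f"{packets_per_sec / 1e6:.1f} million packets/s")  # ~2.5 million
    # Buffering a handful of packets therefore costs single-digit microseconds,
    # far below anything a user could perceive.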


Thank you for teaching me something new :) I didn't know about micro-packets before.

In that case, I'm guessing the latency is coming from the fact that most LCD screens are caching one full image so that they can re-scale it in case the incoming video resolution isn't identical with the display's native resolution.

I vaguely remember there being an experimental NVIDIA feature to force scaling onto the GPU in hopes of reducing lag, but not sure that ever got released.


To be fair, it's only "almost no latency" if you just care about the pixels at the top of the screen. Since CRTs (and LCDs) draw the image over the course of a full frame, it's more fair to say 8.3ms, since that's when the middle of the screen will be drawn (at 60Hz). This is pretty comparable to modern gaming monitors, which have around 8.5-10ms of input delay @60Hz.

Where CRTs do have an advantage over LCDs is response time, which is generally a few ms even on the best monitors but basically nonexistent on CRTs.

But overall, a good monitor is only about half a frame worse than a CRT in terms of latency if you account for response time. At higher refresh rates it's even less of an issue; I'm not aware of any CRTs that can do high refresh rates at useful resolutions.

Got my numbers by glancing at a few RTINGS.com reviews: https://www.rtings.com/monitor/reviews/best/by-usage/gaming


Conversions between analog and digital happen in nanoseconds. They happen as the signal is sent.


MacOS' compositor is waaay worse than Windows'. On MacOS everything feels like it's lagging for 200ms.


161ms is longer than it takes to ping half way around the world. Amazing.


That's why most people don't notice any performance issues with Google Stadia / Geforce Now. They are conditioned to endure 100+ ms of latency for everything, so an additional 9ms of internet transmission delay from the datacenter into your house is barely noticeable.


161 ms is 1/6th of a second which I would have thought would be noticeable and yet I haven't noticed it. I assume that is mouse clicks?

I'm sure I'd notice if typing had that much lag in VS Code. I am using Manjaro Linux, but I can't imagine that it would be much faster than macOS.


Fighting game players are generally able to block overhead attacks (so they see the attack and successfully react by going from blocking low to blocking high, after the delay caused by software, the LCD monitor and their own input device) that take 20 frames or more. That's 333 ms. So I think if you were really paying attention to the input delay, instead of trying to write software, you would end up noticing delays around the 160 ms level, idk.


333ms is ages! I can react way faster than that on a touchscreen. I bet you can too:

https://humanbenchmark.com/tests/reactiontime


Yes. The players are trying to react to a bunch of other things, not just 1 possible move. It's in this context that 20 frames is the cutoff where moves start to be considered "fake" (i.e. getting hit is an unforced error)


Just tried in VS Code again, and there does seem to be a lag for mouse clicks. Not sure if it's as much as 1/6 s, but probably 1/10. Typing, though, looks as snappy as any terminal.

I get that Electron or MS have optimised the typing path. I don't click that much in VS Code, so I don't think it's ever bothered me.


Typing in VSCode is high latency as well, I find it viscerally unpleasant to use solely due to this. There's already a ticket: https://github.com/Microsoft/vscode/issues/27378


And some video games with good hardware manage less than 20-30 ms button-to-pixel response.


> Maybe add 1 more frame for the Windows compositor.

Months ago I noticed picom causing issues with keynav I was too lazy to find a (proper, pretty-window-shadow retaining) fix for, so I just killed it and — while I can’t confidently say I remember noticing a significant lag decrease — I can say I don’t really miss it (and my CPU, RAM, and electricity use almost certainly decreased by some small fractions).


Being a Go/C/Scheme coder means I'm not tied to an IDE, and it runs fast. Zero latency.


I just used IDEs as an example. You'll have the same latency issues with WhatsApp, Signal, Slack, Deezer, for example.


Being an anti social GNU/Xorg*/SystemD/Archlinux nerd means I don't have to use any of those.

* - actually it could be Wayland but doesn't work with my old window manager config.


> Everything feels like it has at least 200ms delay injected, on every transition. I'd honestly pay extra for an iPhone

If you are using Android, you are in luck.

1. Open Settings > About Phone, Tap the build number 7 times (Or google other methods to open Developer menu for your phone model)

2. Go to Developer options -> Drawing

3. Set all animation scale to 0.5x

You'd be amazed to find how fast the phone appears


You pretty much nailed it here. It's not speed proper. It's the perception of speed. What the iPhone mastered was the transition starting right away. If you have no transition, the time to start, say, the mail app will appear long; but since the icon starts blowing up to cover the screen right after your finger press is detected, the delay feels shorter (to your brain) because you see something is happening. It's merely cosmetic - the app is still starting during the animation - but, to the user, the animation is part of the process.
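A rough sketch of that trick in web terms (the helper names are made up): respond to the tap with the animation immediately, and let the real startup run behind it.

    // Start the zoom animation the instant the tap lands; the real work
    // overlaps the animation, so the perceived delay shrinks.
    function onIconTap(icon: HTMLElement, startApp: () => Promise<void>) {
      icon.classList.add('zoom-to-fullscreen'); // visible response right away
      void startApp();                          // actual startup happens during the animation
    }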


Err, seems like you got it the wrong way around. That was initially the reason, but these days the animation ends up taking much longer than the actual processing. GP's workaround changes the animation durations to be somewhat closer to the actual time required.

But even that's overkill for modern phones. I just tried turning off animations entirely, and things still feel pretty much instant, despite the phone being a few years old at this point.


I guess my phone doesn't have enough speed to make lack of animations feel instantaneous ;-)

In any case, the animation shouldn't take longer than it takes to start the program.


I actually went back to normal speed. Sure, fast animations, but it makes stuttering more noticeable because there isn't a slow animation to cover them up. My phone is a bit old, maybe that's worth it if you have one of the latest flagships with plenty of computing power.


You can also disable animations in the same settings, but I found it broke some applications.


TY! I used to put my phone into low battery mode sometimes just to get the speed up from disabled animations.


I have Philips Hue and Sengled lights at home and I usually disable the "easing" animation on them to reduce the perception of time delay when I push the button... It is maybe 100 ms of perceived latency I can subtract.

It helps a lot with that "computer user bill of rights" issue where you start to worry at some point that the button press wasn't registered, and might then mash the button with unpredictable effects.

(e.g. you might get more customer satisfaction from a crosswalk button that doesn't do anything at all except 'click' instantaneously)
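For the Hue side, the easing is the bridge's transition time; roughly what turning it off looks like over the local REST API (bridge address, API username and light id are placeholders):

    // "transitiontime" is in 100 ms steps; 0 asks for an instant change
    // instead of the default fade.
    const BRIDGE = "192.168.1.2";         // placeholder bridge IP
    const USERNAME = "your-api-username"; // placeholder API user

    async function setLightInstant(lightId: number, on: boolean) {
      await fetch(`http://${BRIDGE}/api/${USERNAME}/lights/${lightId}/state`, {
        method: "PUT",
        body: JSON.stringify({ on, transitiontime: 0 }),
      });
    }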


Funny, because I purposefully bought dimmer switches for bathrooms in my house that added a bit of ramp up time when turning lights on! (Makes it less jarring to turn on the bathroom lights at 2am with just that fraction of a second)


How do you disable the easing?


This is the first thing I do when I get a new phone. How the default is as sluggish as it is is beyond me.


This is just a PSA to warn people that this can fail: I just tried this in my lunch break. I have LOS 17.1 on surnia (old, I know).

These settings completely disabled my on-screen home button and other UI elements, and setting the anim scale back to 1.0 and rebooting did not fix that, no more home button for now.

I probably have to reset the phone, did not find any further info so far on how to fix it (pointers, anyone?). But the UI seemed snappy indeed at 0.5 ...

Edit: "other UI elements" including e.g. the Tab switcher in the Lightning browser. The widgets are all displayed, but totally unresponsive.


Solved (?) - I booted into TWRP and rebooted again from there, and the UI elements work again. (No clue what the exact problem was.)


IT Crowd - Have You Tried Turning It Off And On Again?

https://www.youtube.com/watch?v=nn2FB1P_Mn8


Oh yeah, I am aware of that, but I haven't been using Android for 4 years now. But I think I'll buy a cheap Xiaomi device and play with Android again. Xiaomi optimize their phones quite a bit (even if you have to fight with their ROM to make it less spyware-ridden).


I'd rather wait a little bit every time than get a full-blown spyphone, but scaling animation times down does improve the feel quite a lot


Fair, but it won't be my main device. Still, you have a point.


Is OnePlus also in this category?


No. Not perfect, not bad.


Oooh, thanks for this. I just applied the animation scale on my Pixel 4A and it feels so much peppier.


Thank you. Feels like a new phone. Disabled all animations.


I definitely hear you. As a heavy gamer myself, and a person who likes to do things fast to avoid slowing down my train of thought, our current tools are insanely slow.

The researchers telling me I don't notice 100ms delays are smoking something. Yes, human reaction time is 200ms on average, but we process information much faster than that. Moreover, the delays make it impossible to do "learned" chains of actions because of the constant interruptions.

Hackers typing insanely fast and windows popping up everywhere in movies? The reason why that looks very unrealistic is just that our tools do not behave like that at all.


Those researchers never played Quake2 / Quake3 / Unreal Tournament.

You can absolutely detect when your ping gets above 25ms even. It can't be missed.

> Hackers typing insanely fast and windows popping up everywhere in movies? The reason why that looks very unrealistic is just that our tools do not behave like that at all.

Right on. That's why, even though I have an insanely pretty Apple display (on the iMac Pro), I move more and more of my daily work to the terminal. Those movie UIs are achievable.

Related: I invest a lot of time and energy into learning my every tool's keyboard shortcuts. This increases productivity.


I would argue that it's more noticeable in those older games where they weren't using lag compensation and you had to lead your shots in order to hit other players. If you're testing on a game which has rollback netcode then lag matters less because the game is literally hiding it from you.

What task is actually being measured here matters, too. For example, while it is true that humans cannot generally react faster than 100ms or so; most actual skills being tested by competitive gameplay are not pure reaction tests. They are usually some amount of telegraphed stimulus (notice an approaching player, an oncoming platform, etc) followed by an anticipated response. Humans are extremely sensitive to latency specifically because they need to time responses to those stimuli - not because they score really well in snap reaction tests.

Concrete example: the window to L-cancel in Melee is really small - far smaller than humanly possible to hit if this was purely a matter of reaction times. Of course, no player actually hits that window, because it's humanly impossible. They don't see their character hit the ground and then press L. They instead press L several frames in advance so that by the time their finger presses the trigger, their character has just hit the ground and made the window. Now, if I go ahead and add two frames of total lag to the display chain, all of their anticipated reactions will be too late and they'll have to retrain for that particular display.


All true. IMO the point is that people actually made effort for things to both be fast and seem fast. Unlike today.


And input lag (eg. local, mouse-to-screen lag) gets you before that.


>> Moreover, the delays make it impossible to do "learned" chains of actions

Yeah this resonates for sure. Multiple times per day I tell Citrix ctrl+alt+break, down arrow, return (minimise full-screen Citrix, go to my personal desktop) and about 50% of the time an app inside the Citrix session will be delivered the down arrow, return keystrokes :-/


This. Any application that doesn't properly queue the user inputs gets my eternal hatred. Either your application needs to work at the speed of thought, or it needs to properly queue things so when it catches up it executes my commands in order.

Surprisingly, I find MS Windows native stuff to be head-and-shoulders the best at this queuing.
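A minimal sketch of the queueing idea: every input becomes an action appended to a chain, so even if the app falls behind, the actions replay in exactly the order they arrived.

    class InputQueue {
      private tail: Promise<void> = Promise.resolve();

      // Run actions strictly in arrival order, even when individual ones are slow.
      enqueue(action: () => void | Promise<void>): Promise<void> {
        this.tail = this.tail.then(() => action()).catch(console.error);
        return this.tail;
      }
    }

    // e.g. keyEvents.forEach(e => queue.enqueue(() => handleKey(e)));  // queue/handleKey are hypothetical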


The start menu itself seems to fail at this. And PIN entry on a locked Windows machine seems random as to whether it accepts the first keystroke as part of the PIN or not.


Game developers know how to make smooth and performant UI, to say nothing of the rest of what goes into writing a game engine, particularly a fast GPU-accelerated engine. I’m starting to think it’s primarily a cultural thing, where it’s just become acceptable in the web dev and Electron app world to ship sluggish, resource-intensive apps. I also feel like more corners are cut and performance issues swept under the rug when devs are not staring down the barrel of the hardware on a daily basis.


I used to write 4K demos and the like in assembly, and I wrote a 3D engine in the era where you still thought hard about making something a function call or not because... you know... those fractions of a microsecond add up, and next thing you know you've blown your 16.6ms frame time budget!

These days I see people casually adding network hops to web applications like it's nothing. These actually take multiple milliseconds in common scenarios such as cloud hosting on a PaaS. (I measured. Have you?)

At that point it's not even relevant how fast your CPUs are, you're blowing your "time budget" in just a handful of remote function calls.

If you stop and think about it, the "modern" default protocol stack for a simple function consists of:

    - Creating an object graph scattered randomly on the heap
    - Serialising it with dynamic reflection 
      ...to a *text* format!
      ...written into a dynamically resizing buffer
    - Gzip compressing it to another resizing buffer
    - Encrypting it to stop the spies in the data centre
    - Buffering
    - Kernel transition
    - Buffering again in the NIC
    - Router(s)
    - Firewall(s)
    - Load balancer
and then the reverse of the above for the data to be received!

then the forward -- and -- backwards stack -- again -- for the response

If this isn't insanity, I don't know what is...
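A minimal Node sketch of "I measured. Have you?", timing just two of the cheaper legs of that list (serialise to text, then compress); the kernel, NIC, routers and the return trip all come on top of this:

    import { gzipSync } from "node:zlib";

    // an arbitrary object graph standing in for a typical request payload
    const payload = { items: Array.from({ length: 10_000 }, (_, i) => ({ id: i, name: `row ${i}` })) };

    const t0 = performance.now();
    const json = JSON.stringify(payload); // walk the heap-scattered graph, emit text
    const gz = gzipSync(json);            // compress into yet another buffer
    const t1 = performance.now();

    console.log(`serialise+gzip: ${(t1 - t0).toFixed(2)} ms, ${json.length} -> ${gz.length} bytes`);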


You're missing the point. You're talking about the fast part which in any well optimized application is never going to be slow enough to matter. The problems start when you sprinkle 0.5MB libraries all over your code base and you start doing an excessive amount of HTTP calls.

What you are doing is like a machinist complaining about a carpenter not measuring everything in thousandths of an inch or micrometers. The reality is that wood is soft and can shrink or grow. It's maybe not the best material but it's good enough for the job and it's cheap enough that you can actually afford it.


The problem with this analogy is that it makes sense to work with lower quality materials in real life, because the cost savings scale with the number of units you produce.

With web content it’s the exact opposite. Every time you are a bit lazy, and add another mushy, poorly optimized dependency, the cost is paid by every one of your users.

The better analogy is that the web is like an assembly line that serves content. Do you want wooden equipment with poor tolerances making up that assembly line which takes twice as long and occasionally dumps parts on the ground, or do you want a well-optimized system working at peak efficiency?


You actually want what you can afford. A shitty product in the market beats a great product on localhost.


A lot of the problems with web development have nothing to do with time to market. There's no technical reason you could not have a toolset which is just as easy to use, but far more performant.


So if it isn't easier to use, and less performant, why are these poor toolsets being chosen?


History and inertia


That would explain why they continue to be used after initial adoption. It doesn't explain why they were initially chosen if there were better options using something that already existed.

History and inertia also are nearly synonymous with "easier to use" in this context.


Because its the new hotness.


You're pointing the blame at a source of EVEN WORSE performance issues, but it doesn't remove the slowdown described.

Plain HTML renders several orders of magnitude faster than post-load JS rendering, and yes, it is noticeable, especially if you account for variable connection speeds.

Most web devs develop on localhost and test on some of the best connections you can get today, leaving network performance testing as an afterthought at best... and it shows.


> Plain HTML renders several orders of magnitude faster than post-load JS rendering

Well, "several orders of magnitude" is a bit much, but the point stands.

However, that's only during the initial load. After that, JS can just keep modifying the DOM based on the data retrieved from API, and never download HTML and construct new DOM again. If done properly (and that's a big if!), and where appropriate, this can be much faster.

> Most web devs develop on localhost and test on some of the best connections you can get today, leaving network performance testing as an afterthought at best... and it shows.

Very true! And on beefier CPUs/GPUs, more RAM, faster storage etc.

For the last couple of years, I've been careful to develop on "midrange" hardware, exactly so I can spot performance problems earlier.


> However, that's only during the initial load.

Primary and by far most frequent use case.

> After that, JS can just keep modifying the DOM based on the data retrieved from API, and never download HTML and construct new DOM again.

And then you can never return to the same page again, it's gone into the ether, and the Back button doesn't work properly.

Anyone who doesn't support JS to the level you want? Well, fuck those people, let them make their own wheelchair ramps.

> If done properly (and that's a big if!), and where appropriate, this can be much faster.

A big IF, indeed.


I think you have "document paradigm" in mind.

For "application paradigm" my points stand. That's where JS is appropriate. I did say "where appropriate", after all.

> Primary and by far most frequent use case.

In document paradigm.

> And then you can never return to the same page again, it's gone into the ether, and the Back button doesn't work properly.

Not if the client-side routing is done properly. I did say "if done properly".

> Anyone who doesn't support JS to the level you want?

With modern transpilers, you can produce lowest-common-denominator JS. Essentially you are treating JS as a build target / ISA.
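Roughly what "JS as a build target" looks like in practice: a babel.config.js that asks preset-env to emit lowest-common-denominator output for a browserslist query (the query itself is just an example):

    module.exports = {
      presets: [
        ["@babel/preset-env", {
          targets: "> 0.5%, last 2 versions, not dead", // browserslist query describing the floor you support
          useBuiltIns: "usage",                          // pull in only the polyfills the code actually needs
          corejs: 3,
        }],
      ],
    };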

> Well, fuck those people, let them make their own wheelchair ramps.

What's the alternative? They can download a native app, but that doesn't work for everyone either (both from the developer and the user perspective).


The alternative is HTML, which is accessible to most.


Hear, hear!

And not only is the stack you describe full of delays, several of the layers are outside of the control of the software in question and can just… fail! Sure, there are cases where I need my software to communicate with the outside world, but I get furious when some page with text on it dies because somewhere in a datacenter some NIC failed and thus the shitty webapp I was viewing fell over.


Developers use what is available off the shelf. If there is no easy and straightforward way to send data with a client code over the wire, they will send “function onload() { unjson(await xhr(endpoint, tojson(data))) }”. Blame should go to stupid runtimes, not developers.

You were motivated by submitting a cool demo, they are motivated by not being fired after deadlines. An additional network hop is nothing compared to not shipping.


Or there's nobody to blame and we're stuck in a very shitty local maximum. Developers want to deploy to every device on the globe instantaneously, users want to get their software without having to fight with the IT department, and while everybody was looking at the JVM as the runtime to beat the browser was picking up features like some demented katamari.

When I look at the massive backlog of requests from my users, not a single one is "speed."


I was referring to API calls between server components of what is essentially a monolithic application.

I've recently come across several such applications that were "split up" for no good reason. Just because it's the current fad to do the microservices thing. Someone liked that fad and decided that over-architecting everything is going to keep them employed.

To clarify: This was strictly worse in every possible way. No shortcuts were taken. No time was saved. Significant time and effort was invested into making the final product much worse.


Hello,

Can you tell me what your occupation is? Are you dealing with assembler-level programming regularly?


Not any more, these days I do various kinds of systems integration work and I still dabble in development, but mostly with high-level languages like C#.

It just grinds my gears that we have all these wonderfully fast computers and we're just throwing the performance away.

My analogy to customers where I consult is this: What you're doing is like buying a dozen sticks of RAM, and then throwing ten of them into the trash. It's like pouring superglue into all but a couple of the switch ports. It's like buying a 64-core CPU and disabling 63 of those cores. It's like putting some of the servers on the Moon instead of next to each other in the same rack.

Said like that, modern development practices and infrastructure architectures suddenly sound as insane as they truly are.


I totally agree. I think about it like, you spend $3000 on a computer. $100 goes into actually doing your computing. The rest is thrown away by lazy programmers who can’t be bothered to learn how a profiler works. Most software is written the same way a lazy college student treats their dorm room - all available resources (surfaces) are filled before anything gets cleaned up. Getting a bigger room provides temporary relief before they just make more mess to fill the space.


Wirth's law is a reality, an awful, horribly annoying one


"can't be bothered to learn how a profiler works"

To be fair, profiling is way more difficult than it was in the days of single-core local applications. A single-threaded single-machine application means you can get a very clear and simple tree-chart of where your program's time is spent, and the places to optimize are dead obvious.

Even if you're using async/await but are basically mostly releasing the thread and awaiting the response, the end-user experience of that time is the same - they don't give a crap that you're being thoughtful to the processor if it's still 0.5s of file IO before they can do anything, but now the profiler is lying to you and saying "nope, the processor isn't spending any time in that wait, your program is fast!".


> To be fair, profiling is way more difficult than it was in the days of single-core local applications.

Not if you graduated from the printf school of profiling[1].

Measure the time when you start something, measure the time when you finish, and print it. Anything that takes too long gets a closer look.

[1] unaffiliated with the printf school of debugging, but coincidentally located at the same campus.
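Anyway, a one-helper version of it, which also sidesteps the async blindspot from the parent comment because it measures wall-clock time rather than CPU time:

    async function timed<T>(label: string, work: () => Promise<T>): Promise<T> {
      const start = performance.now();
      try {
        return await work();
      } finally {
        // wall time as the user experiences it, awaits included
        console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
      }
    }

    // e.g. const cfg = await timed("load config", () => loadConfigSomehow());  // loadConfigSomehow is hypothetical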


From MCU programmers, I know you can make even a microcontroller run circles around a Xeon if you know how to squeeze out every cycle of performance, and how to craft tasks that are particularly hard to optimise.

Write a riddle for the CPU with a 100% cache miss rate, confusing the prefetcher enough to clog the memory bus and forcing synchronous memory access. Such a thing is very likely to run at literally MCU speed on an x86 PC CPU.


Well, yes and no: ideally you are not "throwing" that RAM away, you are paying for more flexible software that can be more easily changed in the future, or to be able to pay much less for your developers, often both.

Nobody wants slow software, it's just cheaper, in upfront and maintenance costs. Going with analogies, it's like a race car mechanic complaining that a car is using like 3 cylinders where it could have 8. Sure, but some people have other priorities I guess.


> you are paying for more flexible software that can be more easily changed in the future

In theory yes, in practice this almost never happens. 95% of the teams just quickly mash the product together and peace out before anyone notices what a mess they made. And then you have some poor Indian / African / Eastern European team trying to untangle and improve it.

Seen it literally tens of times over the course of a 19-year career.

> Nobody wants slow software, it's just cheaper, in upfront and maintenance costs

That is true. But nowadays it's more like taking a loan from the bank and running away to an uninhabited island to avoid paying it off.


> In theory yes, in practice this almost never happens. 95% of the teams just quickly mash the product together and peace out before anyone notices what a mess they made.

Much of my work is in highly parallelized computing (think Spark across thousands of nodes) processing 10s or 100s of TiB at a time with declarative syntax. It's super cool. Until someone decides they're going to use this one line expression to process data because it's just so easy to write. But it turns out doing that absolutely destroys your performance because the query optimizer now has a black box in the middle of your job graph that it can't reason about.

Bad practices like that occur over and over again, and everyone just figures, "Well, we have a lot of hardware. If the job takes an extra half hour, NBD." Soon, you have scores of jobs that take eight hours to run and everyone starts to become a little uneasy because the infrastructure is starting to fail jobs on account of bad data skew and vertexes exceeding the predefined limits.

How did we get here? We severely over-optimized for engineer time to the detriment of CPU time. Certainly, there is a balance to strike, no doubt. But when writing one line of code versus six (and I'm not being hyperbolic here) becomes preferable to really understanding what your system is doing, you reap what you sow.

On the plus side, I get to come in and make things run 5x, 10x, maybe even 20x faster with very little work. It sometimes feels magical, but it would be preferable if we had some appreciation for not letting our code slowly descend into gross inefficiency.


Death by a thousand paper cuts. Classic.


Maybe it didn’t really come across I am totally in the performance camp and love to be able to craft a beautiful, lean and responsive UI if nothing else than for seeing the joy on users’ faces when they are delighted (amazed!) that what they wanted done happened so fast.

But time and time again I see that projects with a fast “enough” interfaces and flexible systems win out on more specialized, faster ones. And I hate that but here we are. Sometime we see a really performant piece of software hit the sweet spot of functionality for a while (for example sublime text) but then get overtaken by a fast enough but more flexible alternative (vacode)


>Eastern European

Eastern European coders are highly competent, they did magic back in the day with just a ZX Spectrum.


As an Eastern European programmer, I agree. A lot of us are called in to fix messes left by prima donna devs (who are taking home $200K a year for the privilege of making other people's lives a living nightmare).


To be fair, most of those "prima donna devs", as you call them, would much prefer to write well-designed, cleanly coded programs, but are given completely unreasonable timeframes and staffing, then told to create an MVP and turn it over to offshore.

Very few people enjoy producing junk, but management (and customers) often demand junk today rather than quality tomorrow.


Prima donna dev here :)

>> most of those "prima donna devs", as you call them, would much prefer to write well-designed, cleanly coded programs

Most of them - yes. But there's a non-negligible chunk of them who are too careless or incompetent to care about quality - they've been around long enough to gain knowledge about the project and get a Vice-President title (inflated ego included).

It is especially visible in big banks (I suppose it's typical for other big non-tech corps as well) where tech culture is generally on the poor side.

edit: grammar


Obviously neither you nor I can generalize -- both extremes exist.

Given the chance I'd likely collect a fat paycheck and bail out at the end of the contract as those other people did. But that attitude is responsible for the increasingly awful mess that modern software is becoming.

Almost everyone is at fault, me included. The perverted incentives of today's world are only making things worse.


Hah true dat. Been my life for the last couple of years :-D Managed to pull through a project that “failed” two times and was 2.5 years behind schedule...


Given the state and culture of web development, it's honestly a travesty that most software is consumed via the web currently.

I mean the web stack itself was never designed per se. HTML is essentially a text annotation format, which has been abused to support the needs of arbitrary layouts. The weakness of CSS is evident by how difficult it has been to properly center something within a container until relatively recently. And Javascript was literally designed in a week.

And then in terms of deploying web content, you have this situation where you have multiple browsers which are moving targets, so you can't even really just target raw HTML+CSS+JS if you want to deploy something - you need a tool like webpack to take care of all the compatibility issues, and translate a tool which is actually usable like React into an artifact which will behave predictably across all environments. I don't blame web developers for abusing libraries, because it's almost impossible to strip it all down and work with the raw interfaces.

The whole thing is an enormous hack. If you view your job as a programmer as writing code to drive computer hardware - which is what the true reality of programming is - then web development is so far divorced from that. I think it's a huge problem.


What about those weirdos who deliberately choose to use the abomination that is the web stack for desktop apps? To me it feels like they're trying to write real GUI apps in Word macros. I don't think I'll ever understand why.


The reason is there is an explosion of platforms to support. Back in the '90s, "windows desktop only" was a reasonable business plan.

Now? You need Windows desktop, mobile on 2 different operating systems, web, MacOS, and possibly TV depending on your market.

What's the lowest common denominator? Web stack.


There's also Qt :)


Or Java


Or... and I know this is just crazy-talk... there is properly separating your platform-independent business logic from the minimal platform-specific UI layer. A lost art these days it seems.
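The lost art, in sketch form (all the names here are illustrative): the core never imports a toolkit, and each platform supplies a thin adapter.

    // the platform-specific layer implements this, in AppKit, Win32, HTML, whatever
    interface Ui {
      showBalance(amount: number): void;
      onRefreshRequested(handler: () => void): void;
    }

    // platform-independent business logic, written once
    class AccountCore {
      constructor(private ui: Ui, private fetchBalance: () => Promise<number>) {
        ui.onRefreshRequested(() => void this.refresh());
      }
      async refresh() {
        this.ui.showBalance(await this.fetchBalance());
      }
    }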


If CorelDRAW were installed on every phone and given the same privileges, they'd use that. A new type of browser is like a social network – relatively easy to build one, insanely hard to get it adopted by everyone. The alternative is building for at least 4 different platforms, whose common denominator is usually either a non-barking dog or a vendor-locked monstrosity not even worth considering. And existing web browsers and committees are digging their heels into the status quo.


I've met plenty of people that prefer to write GUIs in Excel macros. If all you know about is a hammer...

I only have a problem with the hammer-only people who are proud of not knowing anything else and proclaim everybody not using a hammer for everything stupid, because "look at all those perfected hammers we created! Your choice doesn't have such nice ones".


Oh yeah, I've seen that before. Someone made a random password generator GUI in excel for people to use at one of my previous jobs


In some ways I can understand it, because if you want to deploy a GUI application which mostly consists of text and pictures across multiple platforms, this is probably the most viable option in a lot of cases. But the fact that this is the case is a failure of the market and the industry.


Yep. Native software development houses never invested enough in making a cross platform app toolkit as good as the web. There’s no technical reason why we don’t have something like electron, but lightweight and without javascript. But native-feeling cross platform UI is really hard (like $100M+ hard) and no individual company cares enough to make it happen. I’m sure it would be a great investment for the industry as a whole, but every actor with the resources is incentivised to solve their problems using different approaches. It’s pretty disappointing.


I don't think it's at all possible to make cross-platform GUIs that feel native. It's of course fine to share the core of your application across platforms, but you have to make the UI part separately for each platform for a truly nice result. There's no escaping that. And it's not like companies like Slack and Discord lack the resources to do so — they absolutely deliberately continue stubbornly ignoring the fact that, setting aside excessive resource usage, no one likes UIs that look and feel out of place in their OS. They totally have the resources necessary to rewrite their apps to use native UI toolkits on all supported systems.


I don't know the engineers in there, but I am willing to bet $100 that some of them really want to make native OS UIs. It's just that the business will never green-light that as a priority.


Although I'm not a huge fan of it, you could argue that Flutter is trying to solve this problem in some ways and has the right backing to be able to pull it off. It unfortunately doesn't feel native though (apart from on Android).


Qt and wxWidgets are still out there. But big money is flowing through the web, so web technologies spread with it.


Qt still feels not quite right on macOS — because it draws the controls itself instead of using the native ones. wxWidgets is the best of the bunch, because it apparently does wrap AppKit into itself, but then again, the layouts apps use give away that it's a cross-platform thing.


Because it works everywhere.


* As long as everywhere is a recent device that can run the latest version of an "evergreen" web browser


> The weakness of CSS is evident by how difficult it has been to properly center something within a container until relatively recently … you can't even really just target raw HTML+CSS+JS if you want to deploy something - you need a tool like webpack

This stuff was fixed at least 5 years ago. If you can drop support for IE11 (released in 2013 and no longer supported by Office 365), you’ll find that framework-free web development has improved massively since React was first released. And if you keep it simple and rely on what browsers support natively, you can achieve great performance.


You'd be surprised how many games up until recently used Flash (Scaleform GFx), and now in some cases HTML5 (edit: Coherent GT/Hummingbird/Gameface) content for game UI.

Rendering hundreds or thousands of meshes and doing complicated 3D math for physics is no problem, UI is still extremely hard and complex, especially if you are supporting multiple arbitrary resolutions for example.

Godot, for example, has a full UI toolkit built in (the Godot editor was made using Godot components). However, actually getting it to work the way you want is in most cases a horrendous struggle: a struggle with ratios, screen sizes, minimum and maximum UI control sizes, size/growth flags... and before it gets any more complicated, please just throw me a Tailwind flex/grid box model instead, because HTML/CSS has solved these problems repeatedly already.


I've started noticing a weird counter effect. If you make a web app that is snappy and responsive, people just assume your app is trivial. Users have effectively been trained into thinking things like list pagination are "difficult" operations.


Maybe that's like the tech equivalent of enjoying a loud vehicle because it sounds more powerful than a quieter one. (In reality, the quieter one is more efficient than the louder one.)


VS Code uses Electron and I can't say I've noticed any performance problems with it - indeed it is quite a bit faster for me than its native-code relative Visual Studio.

So responsive Electron apps are certainly possible.


I'm very interested in the general perception of VS Code being fast, because for me it's slow enough that it's the main reason I use other editors. Here are a couple of examples:

1. It takes nine times as long as Vim to open a minified JavaScript file, and then format it with Prettier: https://twitter.com/robenkleene/status/1285631026648276993

2. It takes 14 times as long to open an empty text file than BBEdit: https://twitter.com/robenkleene/status/1257724392458661889

Both of the above examples revolve around opening files for the first time, and I suspect a lot of the slowness I perceive is because I open a lot of different projects and source code files when I'm working, and this is a bad use of VS Code.

In practice, VS Code behaves more like a multi-language IDE than a text editor. Slow startup times are generally acceptable in IDEs because you're exchanging speed for power. A programmer should ideally be proficient in both an IDE and a text editor, because they're tools applicable to different problems. E.g., VS Code is a terrible choice for things like analyzing log output, formatting large files, testing isolated snippets of code, or working on source code files that aren't part of the same project. I find this to be a shame because VS Code is flexible enough that it would otherwise be excellent for all of these tasks if it were just more performant for some operations that it struggles with now.


Out of interest, do you mean starting a new instance of VS Code for those things or using an existing one?

I would agree that VS Code isn't the fastest thing when the editor is starting up, though I find it fine when started. I pretty much always have VS Code running so I don't find this a problem.


VS Code is already running in both examples.

A lot of the overhead seems to come from making a new window (even though the app itself is already running), although notably most of the time spent in the Prettier example seems to be spent syntax highlighting the JavaScript. If you want to try a direct comparison of opening a file vs. a window, you can see the difference between opening a new file in an existing window (on Mac, `⌘N` / `File > New File`) or new window (on Mac, `⌥⌘N` / `File > New Window`). For me the latter is far slower than the former.


VS Code is an anti-example here.

The whole point for them from the start was not to repeat the Atom fiasco.

The entire project revolved around making WebKit not suck.

They spent enormous effort on that.


That being said, I immediately notice when switching from Sublime to VS Code. It’s something in the key presses...

I think it’s only noticeable if you’ve used a native application for a while. It’s not enough to go from VSC to Sublime and back to VSC again for five minutes. Make an effort to use a native app for a week or a month and then switch back.


I noticed this a bunch when I moved from emacs to Jupyter notebook.

Emacs will sometimes become slower (especially remote emacs), but it will always buffer your keypresses and do them in the correct order.

Jupyter (for whatever reason) doesn't do this, with the result that I would end up wanting to create a new code block, but that keypress got lost and then I ended up ruining my original code block.

I 100% noticed the difference, and it was super frustrating (fortunately I left that job, and have managed to avoid Jupyter in the new gig).


I am using Spacemacs and have spent days trying to make it work faster (I am on macOS). Took a while and some effort but with a few strange tweaks I managed to make it more responsive.

Emacs/Spacemacs can still be weirdly slow sometimes but UI responsiveness is generally miles ahead of all Electron-based software still.

Which makes it even funnier. Emacs is decades old and still uses quite a few ancient techniques that are only hampering it. Even with that, it's still so much better in terms of speed! Funny.


Wait, what is the atom fiasco?


Atom (https://atom.io/) is another Electron-based text editor, released by GitHub (before it was acquired by Microsoft). I think it predated VSCode. It certainly had more mindshare in the early days. But whereas VSCode has always been quite snappy, Atom acquired a reputation for poor performance.


> I think it predated VSCode

Yes, and no. They have a really interesting tale of convergent evolution.

Atom was the original Electron app (as pointed out Electron was even originally named "atom-shell"), so it predates VSCode as an Electron app. But the extremely performant "Monaco code editor" that VSCode was built on top of (that forms the heart of VSCode) was started at Microsoft years before to be a code editor in parts of the Azure Portal, and also it was the code editor in IE/Edge dev tools from as far back as IE 9 or 10 I think it was (up until the Chromium Edge). It wasn't packaged into an Electron app until after Atom, but it has an interesting heritage that predates Atom and was built for some of the same reasons that GitHub wanted to build Atom.

(ETA: Monaco's experience, especially in IE Dev Tools with the wild west of minified JS dumps it had to work with from day one in that environment, is where a lot of its performance came from, and that let VSCode jump past Atom on performance out of the gate.)


Ah, gotcha! I only tried it out once after finding it on flathub, but never used it enough to notice it being slow. Interesting how that developed.

I'm guessing it's pretty much dead now that github is under the same company that also makes vscode, right?


Given GitHub's Code Spaces use VSCode rather than Atom, that writing is definitely on the wall, it seems. (Arguably the feature was built for Azure and then rehomed to GitHub where it seems to fit better, but still a stronger indicator brand-wise than most of the other comparative statistics in Atom versus VSCode commit histories and GitHub/Microsoft employee contributions there to, which also seem to indicate that Atom is in maintenance mode.)


Pretty much like that. I tried Atom once (when I found platform.io and wanted to have a look) and it was just wild how slow it felt. On the upside, it made using those crappy Eclipse forks MCU manufacturers release (like CCC, Dave, etc.) feel a lot less painful.


> another Electron-based text editor

Well electron used to be called "atom shell" :)


Ah good point. I didn't know that.


I feel like fiasco might be overstating it a little, but basically Atom is incredibly slow and this is probably the main reason that it never overtook Sublime and friends in the way that VS Code did.


VS Code has a lot of native code and VS is particularly bloated. I'm not sure this is a good comparison.


VS Code has very little native code outside Electron itself.


Depends on which languages you work with. Many language servers are written in their own languages so it is possible to work with a lot of native code when using VS Code day to day even if most of VS Code itself isn't native code.

VS Code also used to have far more native code earlier on in its development life, but seems to be transitioning a lot of it to WASM (paralleling the Node ecosystem as a whole moving a lot of performance heavy stuff from native NAPI plugins to WASM boxes; as one example: the major source maps support library moved from JS native to Rust to WASM compiled from Rust, IIRC).


Native UIs could be much, much better. They've been a neglected backwater for 20 years.

Blame OS vendors for refusing to get together to specify a cross-platform standard API for UIs. We have mostly standard APIs for networking, file I/O, even 3D graphics, but not for putting a window on the screen and putting buttons on it.

OS vendors are still trying to play the lock-in game by forcing everyone to write GUI apps for only their platform. This is a non-starter, so everyone goes to Electron.

There are a few third party cross-platform UI libraries around. They suck. Qt is as bloated as HTML-based UIs, and then there's wxWidgets which is ugly and has an awful API based on 1990s MSC.

We could have something better, but it's an extremely large and difficult project and nobody will fund it. OS vendors won't because they don't want cross platform (even though all developers and users do). Nobody else will because nobody pays for dev tools or building blocks. The market has been educated to believe that stuff should all be free-as-in-beer.


> Qt is as bloated as HTML-based UIs,

Bullshit. Qt is much faster than Electron; the Mumble client is really fast on my Turion laptop, and that's with OpenBSD.

And I say this even if I prefer Barnard IRL.


Qt is smaller than Electron, but there are far less bloated HTML5 renderers than the whole giant blob that Electron ships. Compared to those Qt is similarly sized or larger.


Qt is still a program for a single purpose, so it has barely any unnecessary abstraction. Any html renderer will have plenty, because they are browsers first and foremost.


You don't need to use QML, and Qt 5 will be just as usable.


The problem with vendor-made cross platform UI libraries are that they:

1) Would need to be lowest-common-denominator by nature

2) Would quickly stagnate due to friction against changes/additions

3) Would have few allowances for platform HIGs

If it were permissible to have vendor specific additions on top of a common core, that could probably work fine otherwise this hypothetical standard UI library would share many of the problems suffered by Qt, wxWidgets, etc.

The other option I could see working is something like SwiftUI, in which some control over the behavior, layout, and presentation is ceded to the platform — basically having developers provide a set of basic specifications rather than instructions for every pixel on-screen.


It's a complete stalemate. We can't force the OS vendors. The users don't like the status quo but have no choice.

As for the free aspect, I feel like this ship has sailed like 20 years ago. Nobody will pay for an UI toolkit these days. This is not Unreal Engine 4, you know. That stuff only works on AAA games market, apparently (although I am curious as to why it doesn't work everywhere else -- likely thin profit margins and/or middle management greed outside of the gaming genre).


IMHO a good cross platform UI toolkit is about as hard as a decent 3D game engine.

Crazy you say? Start making a list of the features a modern UI toolkit has to have to even be considered for serious projects.


I'm not disagreeing with you. It's just that today's mindset makes it impossible for people to pay for GUI toolkits alone, I think.


I don't think it's that young JS devs know nothing else. There are still good programs out there, and you only need to experience one once.

I get annoyed with Windows having the cursor randomly stutter for a split second rather than smooth motion. Or Teams taking half a second to load the conversation I clicked on. Or Powershell taking 3 seconds between initial render & giving me a damn prompt. Or the delay between me pressing the Windows button & the start menu appearing. None of these delays exist on my Linux machine where I've had the freedom to select the programs I use

I've made fast UIs with JavaScript & React. Like all optimization, it comes down to sitting down & profiling. Not taking "this is as fast as it can be" as an answer. In short, saying "JavaScript is just slow" is part of the problem.

Blaming languages is chasing a fad. I deal with it when people think the service I'm working on in Ruby is going to be slow because Ruby is slow. Nope, architectures are slow. If you know what you're doing Ruby will do just fine at doing nothing, which is really the trick behind speed
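For the profiling part, the browser's own User Timing API is usually enough to find the slow path; a rough sketch (the function under suspicion is a stand-in):

    function renderConversation(items: string[]) { /* hypothetical suspect code path */ }

    performance.mark("render-start");
    renderConversation(["a", "b", "c"]);
    performance.mark("render-end");
    performance.measure("render", "render-start", "render-end"); // also shows up in the DevTools Performance panel
    console.log(performance.getEntriesByName("render")[0].duration, "ms");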


While what you say is fair, let me introduce an additional nuance:

Languages like JS and Ruby make it easier to write slower code (and harder to detect that you're doing it) by the virtue of how their ecosystem and culture turned out with time.

I stood behind the romantic statement of "you are holding it wrong" when I was younger but nowadays it seems to me that the languages live and die by the culture of their communities. It rarely if ever matters if the language itself can be better / faster.

So while I agree JS/Ruby might have undeserved reputation for being slow, I think you should also agree that they are easy targets because observably a lot of software written with them is in fact slow.

I am looking at it empirically / historically while you are postulating a theoretical construct. I don't disagree with you per se but prefer to work with the reality that's in front of me.

---

That being said, kudos for being the exception in the group of the JS devs! The web frontend industry needs much more people like yourself. Keep up the good work. <3


Your Windows cursor shouldn't stutter unless you have I/O interrupt problems, bad drivers, etc.


I agree that the web is generally more bloated and slow than native apps. However, native apps don't magically become performant by being native.

As an example, my grandmother-in-law has been putting up with Microsoft Jigsaw's desktop app for years. Last time I watched her load it, we sat there for awhile and had to restart multiple times because it was getting stuck loading some advertisements. The startup time was absolutely brutal and the run-time performance while playing wasn't great either, even with a decent laptop.

So when I saw how slow, bloated and laggy this app was, I wanted to try to make her a better jigsaw app for the web and I think I succeeded [1]. It loads almost instantly, has no advertisements, and feels super smooth while playing... and it's mostly just js, svelte and a little bit of Rust WASM.

Anyway, I do prefer a good native app over a web app when available. But with native apps, it's also harder to block ads and other trackers compared to the web.

[1]: https://puzzlepanda.com


Sure, I'm not denying it. It's just that apparently it's very easy to produce slow-as-molasses UI with JS.

I worked with the horrors called Windows MFC and Java Swing a long time ago. It was tough, but if you did it right (moderately hard) you had a very snappy app on a computer that was 5x slower and had 10x less RAM than a midrange Android device today.


You're exactly right! Building a slow web app is only one npm install away.

It takes someone who really cares about performance and monitors it to make a fast web app and to keep it that way. Unfortunately it's still too easy to accidentally make it slow.


Microsoft probably should revoke Arkadium's right to use their brand name. Arkadium's worst-of-the-worst ads and microtransactions, to say nothing of their poor attention to performance detail, really are making Microsoft look bad to a lot of users that just want to play Solitaire/Minesweeper/Jigsaw sometimes.

Especially after the walkbacks that Xbox Game Studios had to do after flak about scummy microtransactions in Halo, Gears, and Forza, it still seems incredible that Microsoft continues to allow Arkadium to do it to a far bigger audience (and a lot of people's parents and grandparents especially) with their brand name attached to it.


I had a chance to play the demo round and it was extremely performant - well done. The only thing I'm not sure about is that on the first click of each piece it automatically orients itself to the final orientation as expected by the puzzle. Is this an "Easy / Medium / Hard" setting? Otherwise great!


Thanks for trying it out!

Yupp, it's on my todo list to give users the option on how difficult they want the rotation to be. So far I have users that want click-to-rotate and even no rotation at all.


Absolutely agree, and I loathe the modern UI with a passion, for the speed alone. I recently booted up a single-core 900 MHz desktop PC with Windows XP, and it was so fast to respond that it felt like it knew what I wanted even before I pressed the button. Inspiringly smooth man-machine synergy that is rare to come by these days. I'm an old man yelling at cloud.


And then you have the Apple II computers where the only bottleneck was the diskette drive speed. Stuff was just instant with almost everything you were doing.


I recently booted an old single-core PC with the latest version of Ubuntu. It ran like a glacier. Every single click took a minimum of 30 seconds to have effect.


I would say that those who don't notice certain slowdowns, sadly, may never have experienced anything but slowed-down systems.


I mean there's now also an entire generation that has never seen the beautiful non-commercialized internet I miss so dearly.

Meanwhile, here I am, making a decentralized social media server and being afraid to add an extra <div> lest it bloats the page.


or an entire generation that will never realize how "doing nothing" or "being bored" is a good thing, or that videogames don't require multiplayer or IAP to be fun.

I consider myself lucky to have been born in the "transitional period" (1980s): I saw the world of my parents and also have the ability to adapt to technology.


A long time ago I worked at a hospital and once had to go to a certain department to fix something on a computer the nurses were using, and I was horrified at how slow the computer and everything was. So I asked around and the ladies happily explained their daily morning routine:
- turn on the computer
- do a morning checkup of all patients (around 20 minutes)
- when they got back, the computer had usually finished starting Windows; if not, they waited another 10+ minutes for it to get ready
- then they started Word (another 10 minutes)
- and opened their main document with notes... or to be exact, wanted to open the document. That took another 10 minutes

TL;DR - users can get used to pretty much anything because they don't know it could be so much better


They also don't have a choice.

My company, like many, bloats Windows with security software. We have the type of PC where McAfee uses 80% of resources for an hour every Monday morning. PCs with spinning hard drives take a good 15-20 minutes to fully boot, and some engineers still have those. Those who complain just get told to wait a few years for their planned laptop replacement, to finally get an SSD.

There's no solution, so users just cope.


I know right? Learned helplessness.



Anybody else remember the speedup loop?

https://thedailywtf.com/articles/The-Speedup-Loop

tl;dr : programmer inserts a large empty loop in a UI, so that in weeks when he achieves nothing, he removes a single zero from the end of the loop counter to speed up things a bit.
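For anyone who hasn't seen it, the gag is roughly this (the accumulator is there only because an optimiser may happily delete a truly empty loop):

    function speedupLoop(iterations = 100_000_000) {
      let sink = 0;
      for (let i = 0; i < iterations; i++) sink += i; // burn time doing nothing useful
      return sink;
    }
    // Week 6 "optimisation": remove a zero from the default.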


I would expect the compiler to get rid of that loop.


The story is from 1990. Nowadays you would probably have to be a little bit more clever. Maybe toss in a volatile variable?


I'm pretty sure 1990s compilers would do that.


Reminds me of an old job writing Windows desktop software. Our flagship app was big and bloated, and it had a long load time with a nice splash screen to distract the user.

We later created a light version for a specific use case, and the product owner came prepared with a nice splash screen for this one too. The app was so lightweight that it loaded near instantaneously - so the engineer added a six second delay just to meet the splash screen requirement.


> It looks "pretty" to the UI people.

Buys them time to get stuff done under the hood while you are gazing upon the 'sands of time' (good old Windows hourglass).

It conditions you/me/everyone to be impatient. I opt out of all such transition effects on my phone. I prefer that the screen goes black or freezes until the next screen comes up. This way I don't get distracted by irrelevant junk (spinning wheels, hourglasses, etc.). It is crunching bits. Don't add more junk to it. Let it crunch bits without tricking me.


While I agree with you, there's a reason the current application environments are targeting web rendering engines: it's cheaper for development. Why develop 3-4 different applications when you can develop 1 with hardly any extra effort?

Chromium is a huge boon to developers for this reason. Now there could have been a different history here. Apple after acquiring NeXT had also gotten OpenStep, https://en.m.wikipedia.org/wiki/OpenStep . OpenStep was a cross platform UI development kit, even the web could be a target. Apple decided (possibly for good reasons, hard to argue with success) to kill this off. But, they had toyed with it, https://www.macrumors.com/2007/06/14/yellow-box-seems-to-exi... . So, Apple had effectively what Chromium has become. A cross-platform development and runtime environment.

Would things be different today if that wasn’t killed off? Would Apple have never come back from the brink of death to become the behemoth it is today, because it would have starved its own platforms? One thing you might have had is a cross-platform “native” UI platform, and that might have meant faster more efficient UIs like you want now.

Shoutout to GNUStep trying to keep the dream alive: https://en.m.wikipedia.org/wiki/GNUstep

Follow up question: maybe with Apple being so successful, now they could revive this and make it profitable for themselves, rather than starving their own platforms?


The good news is this means if we make browsers & JS rendering faster, everything gets faster.

The bad news is that doesn't seem likely to happen.


I've been convinced for a while that the only sane way to develop any GUI app (including web apps) is to have game developers in charge. They know how to make stuff run fast, or at least interact snappily.


If you want buggy crap that's impossible to maintain filled with hacks to make something look like it works - game developers are the right choice. The requirements and practices in that industry are not comparable to standard app development and you would not want anything to do with that for app development.

People here crying about load times and FPS rendering are completely out of touch with the reality of SW development - getting stuff to function correctly and reliably with requirements constantly changing > performance, and that's hard enough with tools that simplify SW development. Optimising for performance is a luxury very few can afford.


> getting stuff to function correctly and reliably

Hilariously, I wouldn't even say that modern software does that well either.


But that's my point - it's hard just getting it to work. Getting it to work fast is next level. Games are notorious for garbage tier SW engineering practices, bugs, ship and forget, and it's all about making something look like you'd expect it vs. making it correct - just completely different goals.


Half-Life 2 is possibly one of the most impressive pieces of software ever created, in terms of its combination of complexity, stability, flexibility and extensibility. It spawned dozens of other games that all sold millions of copies and offered completely different but high-quality experiences. Sure, your typical AAA game isn't near this level of perfection, but your typical non-game software is hardly any better.


I like HL2 and all, but I doubt it comes even close in complexity to a web browser, an OS kernel, or a well-performing virtual runtime like the JVM, or to compilers. There are insanely complex programs out there.


What you are describing is a false dilemma; believe it or not, it is possible to have performance, maintainability and correctness all at once.

To have performance, you have to understand the data you are working with and how it can be transformed efficiently by your hardware. To have maintainability you have to create good abstractions around how you transform your data. To have correctness you have to implement and compose those data transformations in a meaningful way.

All of those things are orthogonal.


And having all of that within the budget and time constraints that >90% of SW development faces is unrealistic - so guess what, performance is the first tradeoff. Which is why people here lamenting performance as this holy-grail feature are out of touch with the realities of SW development.


Budget and know-how are the limiting factors here. You can invest in all of the quality criteria. But is it sustainable business-wise?

Game developers usually and rightfully skip maintainability and invest barely enough regarding correctness. Games are like circus performances while business apps should be made to run the circus.


I think this is actually something which Apple has done a fairly good job of. I remember even back in 2009, in the early iPhone days, the Cocoa APIs were fairly well designed in terms of letting you create responsive, non-blocking UIs on hardware an order of magnitude slower than what we have today.

Game engineers are wizards, but real general-purpose UI is a different problem than the one they are generally solving. A game UI is typically very limited in terms of what types of information have to be displayed and how. Many applications have to support what is essentially arbitrary 2D content which has to be laid out dynamically at runtime, and this is something different than the problems most games have to solve.


> Many applications have to support what is essentially arbitrary 2D content which has to be laid out dynamically at runtime, and this is something different than the problems most games have to solve.

That sounds exactly like the problem most games have to solve. The age of fixed CPU speeds and screen resolutions is long gone. Games have to contend with a plethora of dimensions along which to represent an interactive, dynamic, multimedia world.


I think OP meant it more in terms of layout: a vbox can be inside an hbox which also contains a text object, and any object can change size, which causes a recalculation of everything else. That is surprisingly more expensive than GPU-accelerated rendering of many, many triangles. Games are complex, but the dimensions question is trivial there.
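
Rough sketch of why that's expensive (TypeScript, a toy box model, nothing like a real layout engine): one leaf changing size can force the whole tree to be measured again, so the cost scales with the size of the layout tree rather than with the size of the change.

    // Simplified sketch of cascading layout invalidation (names are made up).
    interface Box {
      children: Box[];
      intrinsicWidth: number; // e.g. measured text width for a leaf
      width: number;          // computed layout result
    }

    // Measuring a container: its width depends on all children, recursively.
    function layout(box: Box): number {
      if (box.children.length === 0) {
        box.width = box.intrinsicWidth;
      } else {
        box.width = box.children.map(layout).reduce((a, b) => a + b, 0);
      }
      return box.width;
    }

    // A text node somewhere deep in the tree changes size...
    function onTextChanged(root: Box, textNode: Box, newWidth: number): void {
      textNode.intrinsicWidth = newWidth;
      // ...and in the worst case layout re-runs from the root, touching every
      // node, because ancestors' and siblings' sizes may all shift.
      layout(root);
    }

    const text: Box = { children: [], intrinsicWidth: 40, width: 0 };
    const icon: Box = { children: [], intrinsicWidth: 100, width: 0 };
    const row: Box = { children: [text, icon], intrinsicWidth: 0, width: 0 };
    layout(row);                  // row.width === 140
    onTextChanged(row, text, 55); // row.width === 155, whole tree revisited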


Yeah, totally :D https://news.ycombinator.com/item?id=26296339

Most game developers will make it as fast as they have to... in fact most developers do that.

Games are usually developed as abandonware. Do you want your apps to be developed as abandonware?


I disagree: think of how long it takes to bring up the Pip-Boy in Fallout 3. Or to open a door in Mass Effect. The number of times I've had my character just running into a door for multiple seconds before it finally opens...


And then they just end up using middleware like Coherent[0], which is back to HTML+CSS!

https://coherent-labs.com/



> This might be because I am a former semi-pro Quake3 player but these days I grind my teeth with 95% of all software.

Not really. I'm sure plenty of people remember the quick feel of early PC UIs. Ironically, q3 kind of came at the end of that era.

Some of the same people might even remember when, with a little training, voice recognition software could do its thing without an internet connection and a warehouse full of computers at the other end, on a PC with less RAM than the framebuffer of a modern PC or phone...


Totally feel your pain here. I think a lot of it has to do with current JS tech - React, by design, trades off performance for developer efficiency.

I'm sensitive to latency... the first thing I do when I set up a new Android phone is go into the developer settings and speed up all the animations.

For our own company [0], we also treat speed as a top feature, though it's not something that's easy to market. It's something that power users appreciate. I even wrote a similar blog post [1] to this one. The magic number, from what I've found, is 100ms. If you can respond to a user action within 100ms, it feels instant to the user. (A rough sketch of the idea is below.)

0: https://www.enchant.com

1: https://www.enchant.com/speed-is-a-feature
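
A rough sketch of that 100ms budget (hypothetical names, not our actual code): acknowledge the action in the UI immediately, then let the slow work finish in the background.

    // Hypothetical stand-ins for real app code.
    function renderSaved(noteId: string, text: string): void {
      console.log(`note ${noteId} shown as saved (${text.length} chars)`);
    }
    function renderError(noteId: string, message: string): void {
      console.error(`note ${noteId}: ${message}`);
    }
    async function saveNote(noteId: string, text: string): Promise<void> {
      // pretend this is a slow network/database call
      await new Promise((resolve) => setTimeout(resolve, 500));
      console.log(`persisted ${noteId}: ${text.slice(0, 20)}...`);
    }

    // Optimistic update: the UI responds well within 100ms, the real work
    // happens after, and failures are surfaced if they occur.
    async function onSaveClicked(noteId: string, text: string): Promise<void> {
      renderSaved(noteId, text); // instant feedback
      try {
        await saveNote(noteId, text);
      } catch (err) {
        renderError(noteId, "Couldn't save, please retry");
      }
    }

    onSaveClicked("42", "Speed is the killer feature");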


Sounds like a good company to work in! :)

I would immediately apply but I'm not interested in Ruby or HTML/CSS anymore (although I still know the last two rather well and plan on making a return there to author my own blog theme).

My main focuses are Elixir and Rust -- the latter exactly because I want to make efficient and ultra-fast software. I'm also very invested, and averagely skilled, in DevOps / sysadmin activities.

I hope there are more companies like yours out there -- and that yours is thriving!


One thing you can do if you're running Linux is to not run a compositing window manager. Use something old-school like fvwm, fluxbox, or WindowMaker. i3 is also good. When the X server draws directly to the display, it is FAST and there is not the delay of at least one frame, possibly several, that compositing WMs have. You run the risk of tearing, but I think most open source X video drivers let you turn tearing off.


Better to just get a 120Hz+ monitor and lower the double-buffering delay that way. The sharper clarity while scrolling, or tracking other motion with your eyes, is worth it on its own.


Back in the early '90s my dad and I used to say in a few decades we'd all have supercomputers on our desks. Now by those standards we do, and everything is still freakin slow. This is not the future we were dreaming about.


It's because we let the product people in.


They forced themselves in, mostly.


I'm pretty sure the devs creating slow UIs on the web would create slow native apps too. I've created web apps that are on average faster than the desktop apps they replaced. I'm willing to bet nice, simple, fast programs are way cheaper to write.

The current situation is people creating abstractions at the wrong level and not understanding the performance cost of things like reflection and ORMs.


I mean, ideally using an ORM should have absolutely no bearing on the snappiness of UIs. It should absolutely never block the UI thread. At most it could mean longer "in progress" screens or something, but that is a different topic. (Also, with good ORMs used correctly - that is, the developer actually knows what he/she is doing and doesn't just blindly copy code - I doubt they would cause serious overhead. But I agree that incorrect usage can cause problems.)
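
A small sketch of what I mean by not blocking the UI thread (TypeScript, hypothetical names, assuming the ORM call is async): show an "in progress" state, run the query asynchronously, and update when it resolves, so input handling stays responsive the whole time.

    // The UI thread only flips state and renders; the query never blocks it.
    type OrdersView =
      | { state: "loading" }
      | { state: "ready"; rows: string[] }
      | { state: "error"; message: string };

    function render(view: OrdersView): void {
      console.log(view); // stand-in for actual UI rendering
    }

    // Stand-in for an ORM call, e.g. a repository query.
    async function findOrders(customerId: string): Promise<string[]> {
      await new Promise((r) => setTimeout(r, 300)); // pretend DB latency
      return [`order-1-for-${customerId}`, `order-2-for-${customerId}`];
    }

    async function showOrders(customerId: string): Promise<void> {
      render({ state: "loading" }); // at worst, a longer "in progress" screen
      try {
        const rows = await findOrders(customerId); // UI stays free meanwhile
        render({ state: "ready", rows });
      } catch (e) {
        render({ state: "error", message: String(e) });
      }
    }

    showOrders("42");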


Yep. Hence my somewhat dismissive quip about "way too many young JS devs who know nothing else".

Kudos to you. We need more people like you.


> We need to get back to native UIs. Is it awfully hard? Yes it is. Do the users care? No, they don't. Many people want fast UIs.

I wouldn't say native UIs necessarily, IMO, but I definitely agree that something has to change.

Current systems are not only getting slower and less useful, but they're also getting harder to develop, test and maintain as well -- and, consequently, buggier.

The fact that there still are many old, TUI-based systems out there AND that users favor them over the newer ones exposes a lesson we've been insisting on overlooking.


You are correct; it doesn't matter how the improvement happens as long as it does happen.

If Electron is rewritten in C / Zig / Rust / whatever and becomes much more lightweight, then I'll be on board with using it myself.

But the abusive relationship between today's software and what are essentially supercomputers has to be ended and started anew on a more respectful footing.


The problem with Electron is not the implementation - after all, it is a bundled web browser, and those are really, really performant and written in C++. They pretty much make displaying an HTML document with complex CSS and running JavaScript as fast as possible (or at least close to it).

The problem is the abstraction level: instead of a locally running program manipulating objects that are turned into render instructions in a basically one-to-one fashion, there is a whole added layer of indirection - generating and parsing HTML, and then converting the dynamically created DOM into renderable elements.
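
A toy illustration of that extra hop (very much simplified; real browsers are far more sophisticated than this): the same "draw a label" intent expressed as a direct render call versus as markup that first has to be built, parsed into DOM nodes, styled, laid out, and only then painted.

    // Path A: roughly one-to-one "object -> render instruction" (immediate mode).
    function drawLabelDirect(ctx: CanvasRenderingContext2D, text: string): void {
      ctx.font = "14px sans-serif";
      ctx.fillText(text, 10, 20); // the intent maps straight onto a draw call
    }

    // Path B: the HTML/DOM route - serialize intent to markup, have the engine
    // parse it, build DOM nodes, compute styles and layout, then paint.
    function drawLabelViaDom(container: HTMLElement, text: string): void {
      container.innerHTML = `<span class="label">${text}</span>`;
      // parse -> DOM -> style -> layout -> paint all happen behind this one line
    }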


Add to that the fact that it's reasonably simple to make a cross-platform TUI, and I think you're on to something there. I'm ready to move forward to TUIs over the terrible GUIs we're all stuck with.


Indeed, and this "back to the TUI" I advocate isn't restricted to developer tools. I actually think of such replacement with end users in mind.

Maybe not necessarily something as radical as terminals, but anything providing the same programming ergonomics (in order to be easy to build and maintain) and constrained by the same restrictions (so that functional requirements get tamed).

At first it would definitely sound like a regression, but I feel somewhat confident that the market in general will accept such constraints as soon as the results become evident.


I agree completely. As long as displaying faithful image/video isn't a constraint, I don't see any reason why a TUI/similar would not be acceptable for any given task, after the user gets over the "text is scary" stage.


And even if your application needs to show graphics, you could easily do that on a separate, graphics-enabled pop-up window, while the forms, tables etc. would still be rendered by the TUI engine.


I'm doing that lately -- very gradually and slowly, but I'm doing it.

I've had enough of today's slow buggy messes that require gigabytes of memory and two CPU cores to show me a splash screen for 10 seconds.

A lot of the TUI apps I stumbled upon seem really well-done.


Out of curiosity, do you have a list? I'm always looking for good replacements.

I'm currently using nvlc and cmus for music playback, and then of course your standard complement of text editors etc. I like Lynx et al. for some web browsing, but compatibility is a pain.


I just started like 6 months ago but...

- `lazygit` is extremely valuable.

- `lnav` for inspecting log files has turned out to be surprisingly good.

- Do you use `fzf` in tandem with your shell so you can also search your command history (and not just look for files)? I've used that for about a year now and can't live without it.

- `mc` for TUI file management has been moderately alright.

- How about `ripgrep`? Can't believe I lived without that one too.

- Rust's tool `skim` (the command is `sk`) in tandem with `ripgrep` or `the_silver_searcher` to very quickly search file contents in big directories has saved me a ton of time already (although I've since moved to searching file contents in projects in Emacs). To be fair, you can just use `fzf` instead of `sk` here; I am just positively biased towards Rust.

- `ripgrep_all` allows you to search in ZIP archives, PDF docs, Office docs/spreadsheets etc. Really useful.

- `ht` is a Rust rewrite of `httpie`, the friendlier Python alternative to `curl`. I like `ht` much more because it doesn't incur any startup overhead, and I've started replacing my scraping / syncing scripts with `ht` where applicable, which is NOT everywhere, because `curl` is extremely powerful and it often doesn't make sense to replace it.

- Command-line or in-terminal charting/plotting: `jp`. I have made a CSV file out of all file sizes on my NAS (bucketed by powers of 2) and then invoked it on the input. Here's a sample CSV from a random directory:

    0k,79
    1k,6
    2k,1
    4k,166
    8k,34
    16k,7
    32k,6
    64k,3
    128k,27
    256k,2
    512k,2
    1M,3
    2M,4
    4M,8
    8M,10
    16M,135

Then do this:

`cat THIS_FILE.csv | jp -input csv -xy '[*][0,1]' -type bar -height 57`

And enjoy an in-terminal vertical bar chart. :)

- ...And I have a ton more.

But your question makes me sigh. I really have to start a blog. I am a very practical guy and people usually love my posts (scattered across different forums) where I make such lists. I should roll my own static blog site generator in Rust, I suppose, because the existing ones are either slow or don't support what I need... So, not going to happen in the next year, most likely. :(


I'll have to try some of those out. I've used fzf a little, but haven't really looked at it enough to get the full productivity gains. I've heard of rg of course, but ripgrep_all has flown under my radar thus far, and it actually sounds amazing - I've got a decently large library of PDFs I keep losing stuff in.

The rest I haven't looked at, but I will have to add them to my list; they fill a couple of voids I've been feeling.

> I should roll my own blog static web site generator in Rust I suppose, because the existing ones are either slow or don't support what I need... So, not going to happen in the next year, most likely. :(

It isn't powerful enough to support what you need I'm sure, but I actually did something similar a little while ago.

http://a-shared-404.com/programs/

It's written in Rust, with dependencies on sh and markdown. I'm thinking about adding the ability to automatically execute an (optional) shell script in each directory, so that it would be easier to do things that markdown doesn't.

The code quality is atrocious (first Rust program of any size, and I'm not great at programming in the first place), but it may be useful. If you're interested in me adding that functionality, let me know, it may be the push I need to move it to the top of my pile.


The `sssss` program might have potential. But let me give you an example: I want every one of my blog articles to also have versions / revisions, and I'd like visitors to be able to read the older versions as well as the latest one.

I'd also like multilingual article support (but I think some of the engines out there can do that). The more I think of it, the more I wonder if it should be something like Ghost.org: namely, backed by SQLite and not bare files. But who knows.


Interesting. I haven't done much at all with DB's, so I can't speak as to whether or not that would be more effective.

That being said, I'd be quite interested in reading your blog, whenever you are able to get it going.



I know, right. These days I just open a browser with Gmail and the YouTube home page (not even watching a video) and do nothing - 10%+ CPU utilization (i7, 8 cores). Start surfing the web - the laptop fans go into overdrive. It's almost a question of how much money and how many cores are needed to render a bunch of text and images without lagging. And that's my home PC, which is blazingly fast compared to the one in the office with internal enterprise software installed on it.


What's absolutely mind-boggling to me is that moving my mouse consumes 10% of the CPU. I fought tooth and nail to keep my PS/2 ports, but everything's USB now. And apparently USB consumes 10% CPU to read mouse movements.


I was curious about this, so I opened the task manager and just swung my mouse wildly. It did consume around 16% (up from 4% at idle).

WAT.


There's something wrong with your PC because nobody will be able to reproduce this result.


Opened Chrome, opened Gmail and YouTube: nonstop 25% CPU usage, fans ramped to 4000+ RPM. Closed Chrome: 3% with Firefox and some simple tabs. The culprit seems to be Chrome's "software_reporter_tool.exe". It has chewed up 3 minutes of CPU and counting. It seems to have some well-multithreaded parts; it occasionally adds 12 seconds of CPU time in 1 second.


That's Chrome scanning your computer for malware like toolbars and such.


iOS has very fast and snappy transitions which are well suited for touch screens. And as pointed out in the article, it’s one of the few touch devices with <30ms input latency, so it can hardly be beaten by anything else.

It doesn’t feel right with no animation (like Reduced Motion in settings) since spatial hints are lost.


Websites do feel pretty slow, even when they're just a page of text. Caching helps a lot, but so does sending fewer bytes across the wire.

This can be hard to achieve if you work off templates and plugins.

Yet, I find it supremely important. I frequently lose my train of thought while waiting for pages to load.


I'm not debating whether it's hard or not. I worked with GUI toolkits some 16-18 years ago. It wasn't a walk in the park, indeed, but you had the means to produce a very snappy and responsive app.

Can the same be said about Electron-based apps?

I, too, sometimes lose my train of thought waiting for pages/apps to load. It's embarrassing and I'm face-palming.


I'm strictly talking about text-based pages. It's not hard for us, skilled web developers, but it is hard for people who just want to get their content online.


I swear I can see the lag while typing into Slack and I feel like it is definitely getting worse the longer Slack has been running. What the hell is going on there? What are we doing wrong as a species to develop software like this?? Shit should be small, fast and simple.


Since we humans can, magically, make shit appear out of nothing by just writing gibberish and passing that gibberish to another program also written in gibberish, and get something that might be useful (probably not), I feel like we as a species are actually doing pretty well.

I agree overall though, most developers/managers of developers/companies who write software fucking suck at their job.


Agreed, Slack and Teams are particularly egregious examples.


> too much young JS devs who know nothing else

The problem is not the language or framework; it's that very few people/devs/businesses actually care about performance anymore, so they just implement the quickest solution without even thinking about the performance impact.


I just realized that even minimizing HN comment threads has like 1s+ of delay.

Profiling: https://i.snipboard.io/UPhtmH.jpg


Agreed. I typically strip every feature possible out of my phone (and run in low power mode) and gravitate towards apps/products that just get out of my way.

When I built my blog, I tried to find every opportunity to reduce cruft (even stripping out extra CSS classes) so reading it would feel as close to a native app as possible.

You could argue that HN succeeded because it's focused on speed above all else.

(Also - fellow former Q1/Q3 player here, I competed in CPL, Quakecon, and a few other events).


> We need to get back to native UIs. Is it awfully hard? Yes it is

Not sure I agree with this. I wrote a bunch of data-vis GUIs with PyQt and Pyqtgraph, all Python, with keyboard shortcuts and accelerators for everything, and it had Vim-like speed except where CPU-bound by data processing (NumPy).

So I think it can be fairly easy, yet Qt dies (frequently, on HN) on the altar of native look/feel/platform (i.e. it doesn't look/feel like a macOS app on macOS).


Not sure what you mean -- I've never used Qt. I gather it's a controversial topic, because I've heard exactly the opposite of your feedback about it.

Still, I bet if more people used it then its community would have an incentive -- or a kick in the butt -- to quickly fix its deficiencies and it could become the de facto native GUI toolkit? Who knows.


Qt is really quick and is used in plenty of places. Places where (critical) software is needed for internal use will mostly have it written as native apps - monitoring and the like - and these places often use Qt. I suggest you try out the Telegram Desktop app (I think the Mac has a non-Qt version as well, so be aware). I really like using it for its speed as well.

My only gripe with qt is that one has to use C++ or python, other bindings are not


Oh, I am using the Telegram Lite desktop app. It's a breath of fresh air in the pile of slow Electron-based UIs. I absolutely love it and learned its keyboard shortcuts.


That's not fair to "young JS devs" who get inserted into a culture of product slowness.


I've been fired for taking the long road and preferring good craftsmanship and not idolizing shipping several times a day.

Sadly most people can't afford that and the results are visible everywhere in IT.


Ok then, everyone will just need to pay 3x as much for software. C and C++ will never return as mainstream UI languages for applications without extreme performance considerations because the cost of developing in such languages is too high. Before anyone gets their hopes up, I've written quite a bit of Rust and don't believe it changes this. Rust's type system is very difficult to teach and learn even compared to other complex type systems. Difficult to teach/learn = $$$. Even after writing a lot of Rust, I'm also still not very fast at writing it compared to my speed in other languages.

The only change we might see is more "native" UIs written in C#, Swift, etc. Also, Swift will not be a suitable replacement in its current form. Any replacement needs to, at minimum, work on macOS plus Windows, and by "work" I mean you can create a UI without crazy amounts of platform-specific code.


There are ways to go still, I agree.

But I'd argue that's because nobody wants to invest money and effort.

As a fan of Rust (I use it regularly, though I don't currently work with it for money), you are right: even if everyone agreed to move to it tonight, that wouldn't change things much, because we have no cross-platform native UI toolkit.

Additionally, you might be surprised at the prices people would pay for really good software. I personally would pay $500 for a lifetime license for a lightweight, fast, cross-platform, rock-solid office suite. But there's no such thing.


> "back in my day!..." but we have waaaaaaaaaaay too much young JS devs who know nothing else.

Funny, Java applets in the '90s gained a reputation for being slow, caused mostly by junior devs putting stuff on the UI thread.


I remember those times. To be fair though, there wasn't much of anything else back then...


And cue the traditional response: "Developer time is much more expensive than computer time, so it doesn't make sense to spend any effort optimizing."

:-(


That might be true in isolation, but repeat it across enough users and interactions and it's actually much cheaper to have several highly paid programmers work on it for several years than to accept so much time and energy lost otherwise...
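
Back-of-envelope version of that argument (all numbers made up purely for illustration):

    // Hypothetical numbers, purely illustrative.
    const users = 100_000;             // people using the app
    const interactionsPerDay = 50;     // slow actions per user per day
    const secondsWastedPerAction = 1;  // avoidable delay per action

    const wastedHoursPerDay =
      (users * interactionsPerDay * secondsWastedPerAction) / 3600;

    console.log(wastedHoursPerDay); // ~1389 person-hours of waiting, every day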


You can disable all motion on Android. One of the first things I do is disable all the animations. Everything responds instantly. It's great.


On iOS you can disable animations - it's buried in the Accessibility section but it totally works.


I don't see the hate for animations when they're done correctly. I just tried it, and I really prefer the normal mode of operation. On Android I did disable animations, because on an older version they were not done as smoothly as on the iPhone, but they can absolutely convey a lot of information about what is actually happening.


I guess that's also the reason why old games in emulators don't feel the same.


>I'd honestly pay extra for an iPhone where I can disable ALL motion, too.

I do that on my Android phone. Feels snappier than the iPhone now. Not perfect though. Scrolling sucks.

JavaScript is bad, but nowadays that's not the main evil. That falls on the hideous, awful libraries on top of JS that everyone seems to love these days. They need to die ASAP.

