Performance Matters (hillelwayne.com)
401 points by MattGrommes on Aug 19, 2019 | 136 comments



Not sure where the author is from, but I'm not aware of any states that still allow paper PCRs. ePCRs are just a fact of life.

I pretty much never bother opening the laptop until I'm at the hospital. There are a lot of reasons for that, but a laggy UX isn't really one of them (despite the fact that the UX is indeed super laggy on any ePCR I've ever used). Instead I use a pen and paper (or a pen and a strip of white medical tape on my thigh, for a more critical patient).

The reason I don't bother is because the whole UI is just too much of a hassle. Not the application frontend, the whole thing. The laptop, the keyboard, etc. With a pen and paper I don't have to keep taking my gloves off and putting new ones on each time I need to switch from documenting something to caring for the patient (a pen is trivial to wipe down, or just toss if needed). With a pen and paper I can jot simple notes in a shorthand I'm familiar with, rather than having to navigate to the right page to enter the information I want to record at the moment.

I know very few providers that bother touching the laptop prior to arrival at the hospital. Generally we write notes on paper, then bring the laptop into the ER and start typing while we're waiting for a bed for our patient (it's not uncommon that this takes long enough to get the whole chart written). At my agency we have two hours from when we transfer care to when the chart has to be signed off and locked.

I also chuckled at the suggestion that "0.1% of PCRs have errors that waste an hour of a doctor's time". In my experience 0.1% is pretty much how frequently the PCR gets reviewed at all in the ER, and anything important enough to spend an _hour_ on would have been mentioned up front during the transfer of care (nevermind the fact that an hour is an absurd amount of time for a doctor to be spending on a single patient).


My boyfriend is a nurse and said a very similar thing. He said he very rarely looks at a PCR, since everything important is conveyed verbally during the handoff from EMT->Hospital.


I'm glad I omitted the ER nurse targeted snark in that case... ;)


According to the about section of his blog, he is in Chicago.

https://hillelwayne.com/about/


Wouldn’t a stylus/tablet interface provide the same benefits as a pen and paper solution for that requirement (well, you could wipe down the stylus, maybe chucking it is extreme)?


Not really. You still have to handle the tablet, and it's just more of a hassle to interact with (a piece of paper is a lot less fragile). A stylus will just get lost within the first day (we use laptops that in theory have a stylus, but I've never actually seen one... just an empty hole in the side of the laptop).


>a stylus will just get lost within the first day

Extremely relatable. Someone needs to come up with a stylus that is permanently attached to a laptop using a wire or some other sort of contraption, similar to writing pads with permanently attached pens. The slot (hole) for the stylus should still stay, so that it can be packed up/stored neatly.

My guess is that the main reason it hasn't been done yet is the dangling cable from the tablet when the stylus is stored in the slot, but it could be retractable for storage or something like that.


Or just use a resistive digitizer above the screen and make anything with a tip a stylus, so they can be disposable and replaceable like pens.


I could've sworn that the Panasonic laptops that I've seen EMTs use here in Australia had exactly that. Though I've never been in the best shape when I'm in the back of an ambulance lol


Or make capacitive-capable styli disposable/replaceable like pens.


You can wipe down a tablet but not a pad. Isn't there some risk of transmission via the paper pad?

(Thank you for the insights!)


Yeah, a pad would be much harder to clean. We use single sheets of paper (and a metal clipboard that's easily cleaned if you want a writing surface).


Sounds like you are saying a similar/the same thing.


We're describing the same outcome. The author is describing a very different cause of that outcome.


When I was writing software for medical devices in the '90s, we had very clear policies for dealing with cognitive drift that included performance ceilings, testing for drift in beta testing, etc. In our testing, 10 seconds was the absolute maximum time that a surgeon could "idle" and stay on task in surgery.

In addition, we found that including spinners, progress bars, etc. would not necessarily reduce the cognitive drift but would reduce emotional frustration.

The worst thing for frustration was UI elements being slow; buttons that responded slowly, scrolls that lagged, pull-downs that didn't pull down. Exactly as this author notes. There was very little tolerance for those kinds of delays.

I presume that, like website response rate tolerance thresholds that have dropped from 5 seconds to less than 2 seconds now, the medical industry is probably using much lower times now.


>> The worst thing for frustration was UI elements being slow; buttons that responded slowly, scrolls that lagged, pull-downs that didn't pull down. Exactly as this author notes. There was very little tolerance for those kinds of delays.

IMHO there is NO excuse for those kinds of delays. None. If you have those issues you're doing something terribly wrong.


I don't know the details, but many embedded interfaces/controllers don't have very fast processors/microcontrollers and UI is really expensive compared to everything else it has to do (take some inputs, and submit some data).

Think about how laggy the Raspberry Pi 2 or even 3 is in the desktop interface. Sure they can be optimized etc, but now imagine what they would've used 10+ years ago and how slow it would be.

EDIT: I have to agree with you guys, even given the tech they had they should've put a higher priority on responsiveness. I don't know what the status is today, but I know even the 2015 Prius's touch screen feels too unresponsive to my liking.


Respectfully, that is a common and tempting argument, but no, no, and no. The Raspberry Pi 3 has a freaking quad-core 64-bit 1.4 GHz processor that is more than fast enough to run full-screen video games at 60Hz.

It's more than just an "optimization" that is making UIs on these plenty-fast processors so slow, it's a large scale failure of software engineering, building slow layers on top of slow layers.

The processor is not the reason for slowness, and we've had absolutely responsive UIs not just 10 years ago, but 50 years ago too. The actual reason is economic: cheap engineering. Using freely available components not well suited to embedded devices, writing layers using scripting languages because it's easy, treating the problem as solved when the functionality is there without regard to the speed.


On the other hand, the times when the CPU was polling keyboard resistances and then directly controlling your CRT's electron beam accordingly are long over. Today's systems are complicated, and we try to hide that complexity behind abstractions. All of those come with buffers. Input buffers, OS message buffers, triple image buffers, and so on. All of it adding time.

Yes, we won't again have the responsiveness we had 30 years ago, because the tradeoffs are just not worth it. At the same time, stories like the article are a necessary reminder that there is worth in performance.


There's unnecessary complexity and abstractions -- especially in poorly designed systems, yes, that's part of my point, so I agree there.

> we won't again have the responsiveness we had 30 years ago.

Not sure I can agree with that. For the best systems, we absolutely have better responsiveness now than we've ever had: shorter latencies, input and refresh rates now in the hundreds, sometimes thousands, of hertz, and orders of magnitude more compute we can put in between input and response.

Maybe the worst systems are getting worse, but I think on the whole responsiveness has been monotonically improving since the invention of the microchip.

> the times were the CPU was polling keyboard resistances and then directly controlling your CRT's electron beam accordingly are long over.

I don't know every system ever made, but I don't think there was any long period of time when application software on digital computers controlled the electron beam in a CRT directly or polled keyboard resistances directly, those were abstracted by the hardware via DACs & buffers more or less from the start. Maybe some of the old vector display video games did, I'm not certain, but in any case, it's been abstracted that way for more than 30 years. The first framebuffers happened almost 70 years ago, before 1950.


The raw throughputs are definitely better but there is concern regarding latency even at the lowest levels, since today's CPUs have complex cache, power state and frequency throttling mechanisms. You cannot guarantee that something will perform with identical runtime in all expected use-cases unless you take care to use hardware that is optimized in that direction. And because the software environments are more complex a lot of capability is just dropped on the floor because the intermediate layers get in the way.

With regard to buffers in front of things, a decent number of the micro systems of the '70s were so memory starved that they would make these tradeoffs to retain video sync - that describes the Atari 2600 ("Racing the Beam" is the name of a book about the console), and Cinematronics vector games (if the game did not complete drawing at 60 Hz, it reset). Most early arcade games did work with DACs (or rather ADCs) but ran their own calibration and button debounce code - and even with layers of abstraction that's still basically true today.

With graphics, the move towards desktop environments doing GPU compositing impacts graphics coders, since they now often have window manager buffers in front of them instead of direct access to a rectangle of pixels.

Web browsers are a more extreme example of this. Because the graphics model revolves around document presentation, things that aren't really documents, or are extremely customized documents, often get burdened by latency that they wouldn't have if it weren't for the browser.


Ah, I've been meaning to read Racing the Beam for years!

That's all true, and thanks for an insightful comment!

I would just add though that it's easy to frame things in a way that makes it seem harder than it really is to maintain responsiveness. For example, while yes caching makes guaranteeing exactly repeatable timing a problem, that's an issue at the nanosecond/microsecond level, and not really a problem at the millisecond (human perception) level at all. Today's hardware doesn't have issues maintaining 60hz unless the software isn't even trying.

Another counter-example would be that while yes, browsers and desktop UIs are doing compositing and don't have direct access to the pixel buffer, the compositing is actually done on the GPU via low-latency commands. Direct access to the pixel buffer would actually be much slower than what we have right now. The browser has no trouble responding and rendering at 60hz unless you do things that cause more than the ~16ms of compute you have time for. Triggering page reflows will do that, but compositing an animated WebGL canvas over something else on the page is plenty fast.


Check out https://danluu.com/input-lag/ , where the author measured latency between a keypress and the display of a character in a terminal for systems built 1977-2017.

For example, Lenovo Carbon X1 4th gen running Windows has about 5x the latency of an Apple 2e.


This is a great chart, and I've seen it before here on HN.

Just in case you were thinking this is perhaps a counter-example to what I said, I think it's the other way around, it very much supports my claim. The processors have (obviously) gotten faster, and yet for some applications latency has seemingly gone up. Why? Software and requirements are the reason, not hardware.

Terminals also are not an example of the "best systems" in terms of latency. You'd be much closer if you looked at video game latency, or at industrial applications that have had low latency requirements from the start. The whole reason terminals have gotten slow is that they aren't trying to go fast.

Quite a few of today's most modern terminals have recently added performance as a feature, started rendering with the GPU, and quite predictably, they are restoring super fast response times to our terminals. iTerm2 would be an example of this.


You can make apps with complicated, animated UIs that respond in 16ms just fine with the hardware of an iPhone 4, which is way past the era of CRTs and direct keyboard polling. We know this because we made those apps 7 years ago.

Performance is always a business decision. Most people don't want to pay for it.


I admit not having good, hard data. My most vivid related memory is comparing the then-new flat screen to my CRTs and being disappointed with both delay and sharpness (CRTs' exponential decay is way nicer imo). Obviously wouldn't expect those to have gotten worse, though.

Anyway, you mentioned phones: 16ms from touch to response on screen? That's hard for me to believe and I'd love to read up on that, if you've got any sources. The ones I've found aren't that great, but they don't paint such a good picture. [0] is an article claiming touch sensors scan only once per frame, aka 16ms at the time. [1] claims ~85ms to response on an iPhone 4, ~55ms on iPhone 5. How were you able to easily beat those by such a margin?

[0] https://www.anandtech.com/show/9605/the-ios-9-review/9#Input...

[1] https://www.cnet.com/news/iphone-5-touchscreen-2x-faster-tha...


There are other parts of the program beyond touches, such as scroll performance and what you do in reaction to touch events. Also the user doesn't move their finger within 85ms anyway if you think about it. That is moving to tap the phone, smushing their finger into the phone and then moving their finger away from the phone to see the result.

Nowadays you have 120 Hz scrolling on the iPad and the very fast responsiveness of things like the Apple Pencil, which shows you that responsive hardware and software is definitely possible.


I have a distributed networked system on $35 off-the-shelf modules running Linux synced to under 100 microseconds measured deviation.

There is no good technical reason for slow UI elements in 2019.


> Respectfully, that is a common and tempting argument, but no, no, and no. The Raspberry Pi 3 has a freaking quad-core 64-bit 1.4 GHz processor that is more than fast enough to run full-screen video games at 60Hz.

Windows XP on a Pentium 4 was snappier than Windows 10 is on an i7 today.


The Amiga 500 was pretty responsive, and its CPU power was epsilon compared to a Raspberry Pi or any modern embedded system.

It's just that modern software is garbage. That's all.


I agree. A few years ago, I powered up some old 166 MHz Windows 95 PC and was surprised how fast it was (certainly not on all tasks, but in general).


And the Amiga had no memory protection, which means that it would be very unsafe to use the Internet with it. That said, BeOS was quite responsive and had memory protection, but still inadequate security against modern attacks.


The memory protection doesn't account for the performance hit. It's all the layers of shitty software and garbage devices on top.


Memory protection only adds a small performance hit, that's true, but it also requires a totally different way of working. I don't know much about the Amiga, but I remember, when reading about Synthesis (a very fast & low latency kernel: https://en.wikipedia.org/wiki/Self-modifying_code#Synthesis ), thinking that it was only possible without memory protection.


In 1995-1997, I wrote and maintained an application that was used, during every surgery, in critical ways in every operating room in a large, regional hospital.

This app ran on a regular computer running normal Microsoft Windows 95. It wasn't terribly complicated, but it also wasn't simple. It accessed a database over the wired network for most everything it cared about doing.

It took some thought and care, but it wasn't impossibly difficult to ensure that the app was always highly responsive.

I think it comes down to priorities. I worked closely with the doctors who were actually using this app on a day to day basis, and working with an app that didn't slow them down was quite important for them.


Many systems have historically prioritized how nice they look at the expense of speed. The first Mac OS X releases were notoriously sluggish, for example.

I cringe every time I see a laggy but 'nice' looking interface. I'd rather see a wireframe with just text, it will still be more useful than a button I can never trust.


Given the clueless remarks I see that are invariably followed by, "I work on embedded systems," I suspect there's more bad engineering to blame than deficient hardware.


Don't think it really matters; if your device can display a dropdown, then it can display an empty dropdown and load entries within a couple of seconds.


I use a Progressive Web App on a Samsung Galaxy S3 smartphone (1GB RAM) on a regular basis. If the phone just got a fresh ROM, rebooted within the last hour and the app is a fresh installation, I have no performance issues what-so-ever (~60 FPS animations, etc.).

But over time things degrade. I don't know why, but every year or so I have to format the internal SD card to keep the OS in shape. If I install too many apps which run in the background the RAM gets crowded and everything starts lagging.

And finally, the app itself is not as optimized as it should be, so that when I use it with the data I enter during a year or so, there are some steps which take a few seconds (rendering a list with >300 items/DOM nodes). By the way, a 'couple of seconds' is unacceptable from a UX perspective.

So this is just one example where you can have a 100% good experience but also a very degraded experience with the same hardware and software, due to different factors (from OS degradation to awful O(n) optimization).
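
For the ">300 DOM nodes" case, list virtualization usually buys back more than micro-optimizing the rendering. A minimal sketch, assuming fixed-height rows and a caller-supplied renderRow helper (both of those are my assumptions, not anything the actual app does):

  const ROW_HEIGHT = 48; // px; assumes every row has the same height

  function mountVirtualList<T>(
    viewport: HTMLElement,                 // scrollable container with a fixed height
    items: T[],
    renderRow: (item: T) => HTMLElement,   // hypothetical row factory
  ): void {
    const spacer = document.createElement("div"); // keeps the scrollbar sized for all items
    spacer.style.position = "relative";
    spacer.style.height = `${items.length * ROW_HEIGHT}px`;
    viewport.appendChild(spacer);

    const draw = () => {
      spacer.textContent = "";             // throw away the previous slice
      const first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
      const count = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 1;
      for (let i = first; i < Math.min(first + count, items.length); i++) {
        const row = renderRow(items[i]);
        row.style.position = "absolute";
        row.style.top = `${i * ROW_HEIGHT}px`;
        spacer.appendChild(row);
      }
    };

    viewport.addEventListener("scroll", () => requestAnimationFrame(draw));
    draw();
  }

Only a screenful of rows ever exists in the DOM, no matter how many items the year's worth of data adds up to.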


> many embedded interfaces/controllers don't have very fast processors/microcontrollers

I'm reminded of the discussion the other day about military procurement ignoring software. Prototype the UI first, then choose a processor that's fast enough.


Well, we used to have to write our own UI elements and swap them into the top 8 bits of the framebuffer reserved for the overlay.


I don't know how optimistic I'd be about the whole industry. I'd suspect medical devices are way more disciplined than EHR apps.

A friend's sister is a doctor. We happened to be visiting her hospital the week they were rolling out a new system. If I recall rightly, her hospital was part of a multi-hospital company, and they were developing it in house. The developers were in some location distant to the hospital, and my doctor pal had never met anybody who worked on it.

From my perspective, the thing was a disaster. Not only was it laggy as hell, but the interface was dumbfoundingly bad. For a doctor to prescribe something for a patient, they would have to pick the medication name from an absurdly long drop-down. Then they could type in a quantity and pick a unit from a pulldown that had mcg, mg, g, and kg choices. Once that was entered, it would show up on the reports nurses would use to give medication. However, they didn't have the field sizing right on the reports, so 100 mg would turn into 10 mg. It was only nurse experience that kept people from getting dangerously wrong doses of their medication.

The whole thing was hot garbage of exactly the sort made in a big waterfall process by developers who never met the users and never saw them in action. I'm sure every bit of it was technically to spec, but I'd be amazed if nobody died from it.


It's the procurement process and the lack of skilled management. With the Ontario eHealth debacle[1], the people in charge were hospital administrators with no software project experience. They were taken advantage of by the contractors, their employees and, to be fair, their own greed. These kinds of large projects really need to spend the money on a real CTO and/or CPO.

[1] https://en.wikipedia.org/wiki/EHealth_Ontario#Consulting_and...


This is one of the reasons why you will often see software installation progress look linear [10min until it is finalized, 8min until it is finalized...] even though on a micro-scale, the process may be oscillating a lot more.


I highly doubt most programs know when they'll finish; usually there are multiple stages of, say, installation: copying files, editing the registry, generating some other files, downloading updates, etc. Some of those stages take milliseconds, others can take minutes or longer. Maybe some take seconds on an NVMe SSD but take 4 minutes on an HDD (verifying files commonly has this behaviour), etc.

If the progress bar doesn't differentiate which stage it's in then its purpose is mostly just to let the user know it's doing something.


I had an Epson scanner that had a progress bar while scanning. You could HEAR the scanner progress, and it wasn't close to even - sometimes it would stop for a while, or even back up. It didn't matter though, the progress bar kept an even pace and always finished at the exact instant the scan was done.

That has always stood out as one of the more impressive engineering feats I've seen.
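
For the curious, here's roughly how that trick can be pulled off; this is just a sketch, assuming the driver can make an up-front duration estimate (I have no idea what Epson actually did):

  // Advance the bar at an even pace toward ~95%, and only snap to 100%
  // when the real work reports completion.
  function pacedProgress(
    estimatedMs: number,
    onUpdate: (fraction: number) => void,
  ): () => void {
    const start = Date.now();
    const timer = setInterval(() => {
      const elapsed = Date.now() - start;
      onUpdate(Math.min(0.95, elapsed / estimatedMs)); // never overruns a slow scan
    }, 100);

    return () => {          // call this when the scan actually finishes
      clearInterval(timer);
      onUpdate(1);
    };
  }

If the estimate is good (scan time is very predictable from the settings), the bar really does land at 100% right as the hardware stops.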


People who quote "premature optimization is etc." never provide the full quote. The full quote has a significant degree of highly important nuance.

> "Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

"Premature optimization is the root of all evil" is incomplete without the surrounding context, and people unfortunately take it to mean "never worry about how efficient your code is," which is not at all what Knuth intended.


Basically it can be summed up to "don't optimize until you've identified the critical paths"


That’s oversimplifying it. Knuth was writing about micro-optimizations, like rewriting a critical section of code in assembly language to shave off some milliseconds.

What many people today mean by optimization is a far cry from that.

Here’s a real-life example. Some code was super-slow and nobody could figure out why. It took only a minute to see the code was passing a string (in C++) by value, causing unnecessary copy construction and destruction (in a tight loop, to boot).

Writing that code by passing a const reference in the first place is Good Coding 101, but many people might consider that “premature optimization”.


Something is missing in this conversation. Do the people who are creating the applications that deal with medical data have motivation to make it better or more useful?

I've spoken with numerous people in my life who deal with these types of systems. Everyone complains. Fields for information are in unexpected places. There are too many things to click on. All of the people I've talked with agree on the same things.

It's as if no UX people were involved. The experience doesn't fit the need quite right.

What would motivate the makers of these things to improve on the situation? How are the contracts won? Do the people who have to use this stuff have a say in it? Do the hospitals and doctors offices measure stats to see what's working and what's not?

This space is ripe for disruption simply based on user experience.


I worked on EMS data entry software for a while (deepest apologies to any EMTs reading) and the big issue we ran into was needing to fulfill doctor/administrator pet projects to get a sale.

The county wants to collect data on whether a drowning incident happened at a pool without a locking gate. Now all services need to update their software to include a random checkbox. An EMT was caught without gloves and now the parent hospital wants EMTs to record what levels of PPE they wear on every run. The customer might get a grant if they can help track infectious diseases so now there is an extra box where the EMT can guess if the patient has the flu.

In the abstract that is good data to want to know, but it ends up being more junk on the screen for the EMT to skip over while dealing with the patient or sitting in the hospital parking lot after a run.


If I had a dollar for every time I swore at NEMSIS, I could quit my day job...


NEMSIS is ugly, but it is vastly easier to work with than HL7. That was an evil XML standard.


Was? Has the US medical system finally moved on from HL7?


Newer versions of HL7 (and FHIR) aren't (exclusively) XML.


If it's anything like non-critical software, then I'd say the middle layer between developers and users. That is, the sales and management teams on both customer and producer sides. There are many reasons this middle layer might appear and cause problems - lack of trust in developers, need to "protect" company "secrets", software's main utility is checking some boxes on some contract and not actually being useful, etc. But almost everywhere I look this happens.


Like qudat said, the level of rules and regulations required to enter the market is indeed pretty high, but I don't think it is insurmountable. And industry connections certainly do help in getting contracts, but people will generally listen if you have good ideas.

The people making the purchasing decisions are almost always pretty removed from day-to-day usage of clinical software (and the bigger the company, the more true this is).

But how will you present to them that your new product is "better" than the incumbent's? You can say, "our product can do X feature 10x better than another product", but if they reply with "unless your product has dark mode for a select number of physicians who live in WA and NY state, it's going to be a no-go. We already have another quote from someone else who will do this for us", what are you going to do? You may resist at first, but in order to win the business, you too will likely repeat this process ad nauseam, until your "MVP" becomes just a "VP".

These decision-makers are also cautious about "new" technology (and rightly so, for the most part). Since if something does go wrong with patient data, it is considered catastrophic - and who will be to blame then?


There's not much room for competition because the level of rules and regulations required to enter the market is very high. Basically the only companies that can manage that red-tape are massive corporations. The end result is a system like Epic that can do everything. These monoliths are usually large, difficult to change, and riddled with tech debt. Creating user stories for the thousands of different things that an administrator/physician/nurse/patient can do in these systems is seemingly an insurmountable task. I wouldn't want to try to rebuild a software system like Epic and I absolutely would not want to work on a system like that.

Furthermore, Epic is sold to administrators, not physicians. It checks all the boxes so it gets the contract.


I believe performance does matter and should factor into the design of a system. There is an ISO guideline on software system requirements and specifications that has a section for performance requirements. It does matter even for banal, non-critical or non-life-threatening software systems.

Another often-overlooked requirement: environment. Does the human using your software system have to pay attention to more important situations around them? Is the environment they're using your software in noisy, stressful, dangerous or dirty? Should you expect frequent wireless interference or degraded wireless performance?

Reading the ISO specs is pretty boring, and so is the SWEBOK guide, but it's rather interesting to think about software in terms of the whole system and not as an artifact unto itself.


You'll probably like Nancy Leveson's model and work if you haven't seen it:

http://sunnyday.mit.edu/accidents/safetyscience-single.pdf

http://sunnyday.mit.edu/


I’m with you, but good luck getting some scrum shop to include enough detail to achieve this.


As a former EMT and someone who built their own software to create run reports, I'm a bit skeptical as to that being the only reason.

Paper can be edited, even after the point at which you've given a carbon copy to a hospital, if you're friendly enough. This probably doesn't happen often or at all but there is a psychological safety there.

People make small mistakes all the time on the ambulance in the rush to get them to the ER. We live in an extremely litigious society.


That's a good insight. Never dealt with any medical stuff, but I dealt with paperwork in the military as it was going through a (long painful) transition to electronic documents back in the '00s.

And I don't think it's just that society is litigous, but that databases actually (mostly?) work in terms of enforcing the rules you give them.

One anecdote of how things changed was that you used to be able to arrive on post, and if you didn't like your orders, talk to a sergeant major of another unit and quietly get your orders changed, which no doubt drove PERSCOM crazy.

When things were done with paper, there was a certain degree of flexibility in the rules that gave people some amount of local autonomy. When those rules are enforced by a computer, there's much less possibility of someone being able to override the rules when it makes sense.

Normal human social interaction tends to create unwritten rules that are simply more flexible and realistic than the rules we're willing or able to write down.


We have a platform that amongst other things automates filling in a fairly complex government form, including online signature.

We did a demo the other day to a large new customer. I learned that when they filled in the existing paper form, they didn't just sign it, they also stamped in the signature box with a little rubber stamp that "signed on behalf of the CEO".

It wasn't immediately clear whether anyone handling the form was looking for that stamp, nor what they did when they saw it. As a result, it's not clear to us at this stage whether our pdf generation now needs a new feature to allow a user-uploadable image file to be imprinted over the form.

That's the sort of thing you get with the joyous "flexibility" of paper.


So, interesting and all, but it ignores the main reason for the advice to build it first, then optimize for performance, which is that if you build everything for performance from the beginning, you end up with a lot of code that is optimizing for performance of something that isn't the bottleneck. In other words, performance that doesn't show up in the user's experience, because something else is the main delay.

Now, if all that meant was that the programmer has to do some more work, and you aren't worried about paying more for the software, this may not matter, but that is far from the only (or even most important) result of optimizing everything for performance. Instead, what you get is code that is much longer, and more complex, and therefore harder to update, and more likely to be buggy.

For example, one common thing you have to do to get performance, is to cache values in multiple places, instead of looking them up every time. This can result in big improvements to software performance if done in the places which are the current limiting step, but now you have to make sure you invalidate the cache correctly. In particular, if you don't, you get stale values in the cache, which is to say false data.

If you make a system with caching all over the place, it will be very hard to make changes correctly, which means sometimes they will get made incorrectly. In mission-critical systems, this is even less acceptable than elsewhere.

It's not just "I don't feel like optimizing". More often, it's the cost of optimizing is not worth the benefit, for this particular part of the code. How do you know what spots in the code it's worth it? You don't optimize, at first, and then see which two or three spots in the system are rate-limiting.
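
To make the caching cost concrete, here's a minimal sketch of what even a single cached lookup adds (the class and its names are made up purely for illustration):

  // Hypothetical example: one cached lookup, plus the invalidation path
  // that every future write now has to remember to call.
  class PatientNameCache {
    private cache = new Map<string, string>();

    constructor(private lookup: (id: string) => string) {}

    get(id: string): string {
      let name = this.cache.get(id);
      if (name === undefined) {
        name = this.lookup(id);   // slow path, e.g. a database read
        this.cache.set(id, name);
      }
      return name;
    }

    // Forget to call this anywhere a record is updated and readers
    // silently see stale data.
    invalidate(id: string): void {
      this.cache.delete(id);
    }
  }

Multiply that obligation by every layer that caches and the "harder to update correctly" cost becomes obvious.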


> Instead, what you get is code that is much longer, and more complex, and therefore harder to update, and more likely to be buggy.

I'm gonna repost a chart I made previously[0][1].

  Spectrum of performance:
  LO |---*-------*--------*------------*-------| HI
         ^       ^        ^            ^
         |       |        |            |_root of all evil if premature
         |       |        |_you should be here
         |       |_you can be here if you don't do stupid things
         |_you are here

Tricks like denormalizing your data model ("cache values in multiple places") start somewhere around the "you should be here" point. But in any typical program there's plenty of stuff to be done left of that point, and those things don't make your code more complex or longer. They only require care and willingness from the software vendor.

--

[0] - https://news.ycombinator.com/item?id=20389856

[1] - https://news.ycombinator.com/item?id=20520605


Usually, it's more like this:

  LO |---*------------------*----*--*-| HI
         ^                  ^    ^  ^
         |                  |    |  |_root of all evil if premature
         |                  |    |_you should be here
         |                  |_you can be here if you don't do stupid things
         |_you are here


>> For example, one common thing you have to do to get performance, is to cache values in multiple places, instead of looking them up every time.

This made me cringe. This is filling out a form. It could conceivably be a single record (or small set) in memory that gets committed/updated somewhere else in the background. Overly complicated solutions for simple problems are one of the primary things that kills performance.

You don't have to optimize everything for performance, you just have to realize it's a primary feature and avoid doing anything that kills it. We carry super computers in our pockets, you don't have to write complex game code to get performance these days, just don't write shitty things.


I had an overly long conversation where someone was hand wringing about how and when we pull data to populate the list of states and territories of the US (the fact that the address had to be inside the US had legal standing).

I insisted, and probably still do, that since this list hasn’t changed since World War II, odds were really good that the server would reboot before the next change was made, and so looking it up at startup should be just fine.
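
A sketch of what "look it up at startup" amounts to; fetchStatesFromDb here is just a stand-in for whatever query the real system ran:

  // Hypothetical stand-in for whatever query the real system runs.
  async function fetchStatesFromDb(): Promise<string[]> {
    return ["AL", "AK", "AZ" /* ...the rest of the list... */, "WY"];
  }

  let usStates: readonly string[] = [];

  // Runs once, at server startup.
  async function init(): Promise<void> {
    usStates = await fetchStatesFromDb();
  }

  // Every request after init() is just a memory read.
  function getStates(): readonly string[] {
    return usStates;
  }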


The point here is not to optimize prematurely, the point is to make responsiveness a design requirement, because it is. You haven't even built it in the first place if it takes more that 100ms to provide any visible feedback to a user input, and you can't even start optimizing until you're already meeting requirements. Once you meet requirements, then by all means pick and choose your optimization battles carefully.

To me it seems like Knuth's comment is taken out of context more often than not. He wasn't trying to give people a free pass to make things as slow as possible and ignore system performance. He was assuming that you're making reasonable choices, and not doing overtly stupid things like running the inner loop of your screen drawing in Python, just because it's easy. People always seem to forget the 2nd sentence of the premature optimization quote too: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."


Sure, but to know that the critical 3% is, you typically need to first build it, and then measure it. Only in rare cases is it so blindingly obvious that the part you are working on will be in the critical 3%; nearly always, you need to actually measure first. To be able to measure, you must first build.


The key component you're missing here is the role of performance in overall design and architecture.

If responsiveness is a goal, as it should be in any GUI but especially must be in a system intended for use in medical emergencies, there are high-level decisions that can be made to keep things in the right ballpark. One of these decisions may be "the interface should never have to wait for a network response", for example.

The process of micro-optimizing specific segments of program code still needs to depend on actual benchmarks and measurements, but the value of reasonable and experienced system architecture that considers the actual requirements of the problem domain cannot be overstated. You're sunk from the get-go if you make the wrong tradeoffs here.

Unfortunately, the industry is mostly populated by clowns who are just blistering to get the hottest buzzword listed on their resume, very frequently rising up to the point of blatant professional misconduct. This leads to many truly absurdist design decisions that sink projects long before the first chunk of code goes into production.


Maybe the missing key component is the role of architecture in providing a platform for performance ;)


I can agree with everything you said, but I don’t think there’s really the paradox here or chicken-egg problem that you’re implying. It’s not difficult to make design decisions before building based on the specs of your components. It’s not hard to know that filling pixels via a scripting language or waiting on a synchronous internet request is going to hose your responsiveness. It is possible to make reasonable design decisions before you build, and to reach responsiveness before you optimize. The main reason people fail to reach responsiveness is for lack of making responsiveness an up-front requirement, it’s not for lack of optimization.


Most performance problems come from incidental complexity and programming techniques which don't take into account what the actual hardware is doing. Your focus should not be on tactical optimizations like caching but rather on expanding your understanding of the system as a whole in order to devise simpler and more efficient solutions to problems. Look into "data-oriented design" for more info on this.

In the modern business environment, time is most often not allocated for optimization after functionality is complete. And even when it is, if performance is completely ignored throughout the development process, your options for improving it at the end are severely limited.


This is an unrealistic reply, as if optimisation is something that can be tacked on after the fact. The truth is you have to walk the fine line between performance and maintenance. It's good to plan ahead and be able to optimise the bigger picture early, so the design can go through several iterations and code reviews and therefore be trustworthy at the end.

Your example of caching is almost basic programming; maintainable code also caches values after calculation for use elsewhere. You don't leave the results of a database query uncached and pass the query around instead; that's just spaghetti.

In reality you have to break out your design into components that can later be rewritten one at a time in order to eek out performance. Easier said than done, but at a high level this first means keeping your big areas (UI, database queries, File IO, networking) separate, and working out the infrastructure from there.


eke


This isn't premature optimization however. It's having some reasonable SLOs that must be measured throughout development and met. To say that the average button response has to be within 20ms, and the 99% slowest within 50ms is pretty reasonable, and can be measured reasonably easily in a test suite.
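
As a sketch of how cheaply that can be checked in a test (the handler is a placeholder, and the thresholds just mirror the numbers above):

  import { performance } from "node:perf_hooks";

  // Fails the test run if mean latency exceeds 20 ms or p99 exceeds 50 ms.
  function checkButtonSlo(handlePress: () => void, samples = 1000): void {
    const timings: number[] = [];
    for (let i = 0; i < samples; i++) {
      const start = performance.now();
      handlePress();
      timings.push(performance.now() - start);
    }
    timings.sort((a, b) => a - b);
    const mean = timings.reduce((a, b) => a + b, 0) / timings.length;
    const p99 = timings[Math.floor(timings.length * 0.99)];
    if (mean > 20 || p99 > 50) {
      throw new Error(`SLO violated: mean=${mean.toFixed(1)}ms, p99=${p99.toFixed(1)}ms`);
    }
  }

Run it against the slowest hardware you actually ship on, not the developer's desktop, and the regression shows up in CI instead of in an ambulance.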


Sure. But that implies you make it (without everything optimized for performance), measure it, and then make the things which are slow enough to matter, faster. Which sounds just like avoiding premature optimization.


But if you ignore the performance aspects of architecture and build, by the time you try to meet performance specs it is likely too late.


This comment is

1. True.

2. The primary reason that most software is painful to use and occupies significantly more resources than strictly necessary.

Programmers, like everyone else, focus on their own pain spots rather than the issues of others. If you don't at least think about optimizing at first, you won't ever do it.


Software like this doesn't get written in someone's garage. There was almost certainly a team behind it, including a dedicated product manager. The product manager made a decision not to prioritize performance.

On one hand, it seems a clear failure. On the other hand, it's hard to judge these kinds of decisions from the outside. The product seems to be successful; the customer paid for it and installed it in ambulances. Maybe if they'd focused on performance instead of adding feature XYZ, or increased the price by adding a faster CPU, the customer would have bought something else. Your competition checks more boxes, therefore must be better!

Mostly when I see shitty software, I think not "who wrote this?" but "who bought this?"


> So, interesting and all, but it ignores the main reason for the advice to build it first, then optimize for performance, which is that if you build everything for performance from the beginning, you end up with a lot of code that is optimizing for performance of something that isn't the bottleneck. In other words, performance that doesn't show up in the user's experience, because something else is the main delay.

This isn't a question of optimizing something that doesn't need optimizing. It's about making fundamental choices from the beginning that are optimized and won't need to be optimized later.

It's the choice between using native code and Electron to build your app. You can make the argument that Electron might be better for various reasons, but if you choose to go native, that's an optimization step you are taking.

Basically, you are saying "Premature optimization is the root of all evil" without including the rest of the quote and misreading what Knuth wrote.

> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

There are certain fundamental things you can do, optimization you can do before you ever write the software. This comes from experience.

> Instead, what you get is code that is much longer, and more complex, and therefore harder to update, and more likely to be buggy.

No, this does not have to happen. In fact, if you build it first and worry about optimization later, you're more likely to end up in this state.

> How do you know what spots in the code it's worth it?

Experience. You get that from experience with code. And you can't get that experience if you are frequently changing jobs.


Thank you for writing this. So many people don’t get it. You cannot tack performance on after the fact like some aftermarket pony car part if the performant architecture is not there to support it.


This is too narrow a focus. Performance starts at the architecture level and can’t be bolted on after the fact.


Welcome to the world of soft realtime applications. Sounds like this one is just so bad that a) the designers didn't realize that was what they were designing and b) consequently it failed to meet any of its deadlines in the UI domain.

One detail I've noticed is that even if you do a shitty job of hitting the deadlines, users will often find workarounds. As long as a critical mass of features respond consistently within the deadlines, users will fiddle around to make the rest work-- even if you tell them directly that it isn't possible.

Say the dropdown menus here performed fast enough, but going to the "next page" required a bunch of DOM mutation or something that grinds to a halt for two seconds. I'd bet EMTs would have trained themselves to do branch prediction-- hit the button to change pages before setting it down, hit it again based on what the patient is communicating to the other EMT, and so forth. I've seen users do weirder stuff.

The fact that they discarded it altogether tells you how bad the UI was.


I used to use Amiga computers, and I presently use Haiku - I am not trying to say those are the best OSes. But when I use Windows I am surprised how slow some functions are.

Worse, I can use two different programs in Windows and one will be dog-slow compared to the other.

I blame the use of pre-written libraries that in turn call more libraries that in turn call still more. There was a site on Windows bloatware; it showed, for example, programs with multiple versions of the same library linked into the code even though the program only ever used one.

Also, the screen layout is run through a layout program/library that slows things down. The layout system is great when you are designing the system, but once you reach the point of a final and fixed design there are faster ways to display it.

On the other hand, management is too often cheap, says "It works, don't it?", and does not want to pay for the extra coding that would make it so much faster.


It isn't just management. Most software ecosystems (Rust, C, and C++ excluded) discourage any thinking about or tinkering with performance knowledge. You see evidence of this in questions on Stack Overflow, Reddit, and Twitter. Young people or new programmers curious about the fastest way to do things are always lectured about how this is not good to do. It is quite unfortunate because building some experience for how to do things fast is very useful. It isn't always correct to optimize the hell out of things, of course, but we should encourage the curiosity behind it, so that more of us have the tools to do it when it is correct to do so.


I have often found that doing things efficiently isn't really harder than doing them slow. So why not aim for software that's reasonably performant from the start? Sure you can micro-optimize to get every last nanosecond, but that's not what I'm talking about.


The Windows vs Haiku case you describe is mainly a difference in terms of development culture. In Windows you can get similar fast performance and responsiveness by using the native GUI functionality, but most desktop applications nowadays use toolkits like Qt (at best) or Electron (at worst) that reimplement a ton of functionality that the OS itself already provides and treat it as something different than what it really is.

On the other hand in Haiku everything uses the native Haiku APIs.

But from what i've seen ever since Qt and Java were ported to Haiku, a bunch of applications started relying on those to run on it instead of using the native APIs (which IMO defeats the entire point of Haiku).

Haiku does use a lot of prewritten libraries BTW, they're just hidden behind the scenes and shared among applications instead of each one bringing their own copy.


At least at my work, the issue is twofold.

First, nobody measures performance. If they do, it is often done poorly (without using performance tools)

Second, when performance is a problem, everyone guesses on the cause. Remember how they don't measure? Well, they also will say things like "this is probably slow because of X" where X is the database, the network, a method they hate, etc.

Performance analysis requires accurate measurement.


this sounds terrifying.

thankfully at my work nothing goes into production without a performance eval measured at least by Lighthouse & webpagetest on Fast 3G/2015 mobile/chrome.

it's nice to be able to dictate these requirements ;)


The Amiga at least, and older computers in general, were more responsive because they did less - directly attached video and keyboard hardware, no USB, no compositing video drivers, less font smoothing, and so on.

See Dan Luu's analysis here: https://danluu.com/input-lag/

and here https://danluu.com/keyboard-latency/

> Moreover, a modern computer with one of the slower keyboards attached can’t possibly be as responsive as a quick machine from the 70s or 80s because the keyboard alone is slower than the entire response pipeline of some older computers.

Haiku (BeOS) was always promoted on its native multithreading and responsive UI, with early demonstrations showing windows being dragged around while playing video, IIRC. It's impressive that it still holds to that.

But it also says a bit about what software people pay for and keep using: sluggish Windows software has thrived, while BeOS crashed and burned. And then on Windows and Linux, light, fast software gives way to huge, bloated, sluggish software left and right.


The “leaning tower of abstractions” problem absolutely should be receiving more attention than it has been, because otherwise it’s only going to get worse. I don’t know what a fix looks like exactly, but something has to change.


Almost all of this stuff is caused by doing blocking IO on the UI thread. Unfortunately that becomes harder and harder to avoid doing by accident. And if part of the UI code or data gets paged out, there's a colossal hit to responsiveness.
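
The fix is usually mechanical: respond on the current frame, then fill in the data when the IO resolves. A browser-flavored sketch (the URL and element id are made up):

  async function onOpenRecordClicked(recordId: string): Promise<void> {
    const panel = document.getElementById("record-panel")!;
    panel.textContent = "Loading…";   // the UI responds on this frame

    try {
      const resp = await fetch(`/api/records/${recordId}`); // never blocks the UI thread
      if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
      panel.textContent = await resp.text();
    } catch {
      panel.textContent = "Couldn't load the record - retry?";
    }
  }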


My philosophical rule of thumb is that it is morally wrong for humans to ever wait on computers, period. It's an unattainable standard but results in much better software IMO.


> Something like a quarter-second lag when you opened a dropdown or clicked a button

So the title would more accurately be "UI responsiveness matters"? I doubt anyone would think this kind of lag doesn't matter. It sounds like the software was designed (and tested) for better hardware than it is running on, or at least different hardware. But that's just a guess.


Genuinely curious, what is the difference between performance and responsiveness in this context, and why would it be better as "responsiveness"?


You can make a UI feel responsive even if the underlying system is not performant.

Let's say you have a client app with checkbox that toggles some boolean variable that communicates with a server. If you're on a 3g connection in an ambulance, it might take a few seconds to send that request to the server and get a response back.

An unresponsive UI would sit there and make you wait for the request/response to complete. You click the checkbox, then nothing happens, then suddenly, some time later, the checkbox is checked and the page refreshes, or a state change happens. It's jarring and weird. A responsive UI would update the state of the app based on the user's actions even if the request hasn't completed yet.

Tightly related to this is the concept of optimistic UIs - the UI acts optimistically and updates the state of the app under the assumption that server-backed state changes will work. The iPhone's Messages app sending messages is the prototypical optimistic UI interaction. You send a message, and the blue/green speech bubble shows up in your Messages app while the message is in flight. If it succeeds, nothing changes and you're none the wiser. If it fails, you get an option to delete the message or resend.
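
A rough sketch of that optimistic checkbox, with a made-up endpoint; the important part is that the rollback only happens on failure:

  async function onCheckboxToggled(box: HTMLInputElement): Promise<void> {
    const optimistic = box.checked;   // the UI already shows the new state

    try {
      const resp = await fetch("/api/flags/transported", {   // placeholder endpoint
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ value: optimistic }),
      });
      if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
    } catch {
      box.checked = !optimistic;      // roll back and surface the failure
      alert("Couldn't save that change - please retry.");
    }
  }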


Some (subtle) confirmation is nice, too, when the user is actively aware that the process is fallible. The Messages example you chose has this: a small gray "Delivered" tag is added below the message when the sending device hears back from the server.


Ever used one of those product search forms where you click that you want to see products with attribute A, a check appears, you go on to click on attribute B and a check appears there, and then the product list updates showing attribute A with a complete loss of B?

A responsive UI is necessary, but it's not sufficient to make up for a slow back end.


My take is that responsiveness has a speed cap that’s pinned to human perception. Once you get there you can stop worrying about needing more.

Performance is about doing the work as fast as possible, regardless. You always want more but you’re capped by the system, hardware, and algorithms, not by human perception.

Feels like I’m splitting hairs though


No, that's a very good split :-)

Even when responsiveness is great, you can still work on performance to improve: resources utilisation, energy usage, jitter of responsiveness, etc. Users may not notice, but the spending account may do. This distinction is more relevant with multi-user or time sharing systems, less so with single-user game consoles. (If you're hitting the right framerate at the most demanding scenes, why bother?)


If I click a button and it instantly shows me a dialog with a progress bar, and the progress bar progresses smoothly for one hour to complete the task, that is responsive (the UI responds to my click and the background update as quick as anyone could desire).

That the task takes an hour to complete may or may not indicate a problem with performance.

The fact that the task shows a progress bar at all is likely because at some point the task would complete synchronously, and without a progress bar, in say 10 seconds or 60 minutes. In that case, the programmer's mistake was to allow the problematic performance of the program to affect the responsiveness of the program.

So a progress bar is used when the performance can't be easily fixed but the program must be responsive.


Responsiveness is about latency and feedback. Performance could mean that or it could mean throughput. If I'm using an interface I want responsive feedback without latency above everything else.

I know it's easy to dump on electron, so I will. Multiple electron apps, especially the ethereum client, are the worst programs I have ever used, because there is enormous lag on a computer that runs everything else flawlessly. It's so bad it can't even be all blamed on electron, but more than bloat or memory use, interactivity is what I consider the most important factor in an interface. I even put it above great design or intuitiveness. I would take blender's ridiculous interface choices over something better that lags like the ethereum client any day of the week.


Because that's literally the complaint as given. If the UI had reacted "instantly" it would be more user friendly. Even if the performance of the (non UI) code was slower.

EDIT: Or to put it another way: "perceived performance can be more important than actual performance".


Responsiveness is a result of adequate performance. The dropdowns that the article refers to are probably populated by a database, as opposed to being static. It doesn't matter if the dropdown opens instantly, what matters is that the information it contains is available.


Performance is about how it works, responsiveness is all about how it feels.


> Did that quarter-second lag kill anyone? Was there someone who wouldn’t have died if the ePCR was just a little bit faster, fast enough to be usable? And the people who built it: did they ask the same questions? Did they say “premature optimization is bad” and not think about performance until it was too late? Did they decide another feature for the client was more important than making the existing features faster? Did they even think about performance at all?

This seems like a lot of imagining of scenarios that may or may not have happened. Perhaps another way of looking at this is - did the company hired to build this actually care whether EMTs used their software, or were they looking to get a paycheck?

So many health tech firms fall in the latter bucket. Such is the reality of the business - with huge enterprise sales funnels and hospital networks that don't understand the value of UX & great product (and caring a lot about their bottom line), this result isn't unexpected.


If someone is sick enough that a "quarter second" matters, there's zero chance the tablet is out in the first place. In reality if a quarter second is enough to kill someone, they're gonna die anyway...


I feel the same about my EKG. What's a quarter second when it comes to measuring a heart? Who cares about precision.


A quarter of a second in measurement resolution of an EKG is huge. An extra quarter of a second interpreting or documenting that EKG couldn't be any more meaningless.


This sounds like a clear case of consultant software. The people building it won't be dogfooding the result - at most they'll be clicking around the UI on their powerful desktop machine and patting themselves on the back because it responds in less than half a second. The requirements will be vague, because getting performance requirements anywhere beyond stupidly vague "performant on modern hardware" or some shit requires real expertise in UI design and will cost real money to build, and UI designers won't be involved in the details until the contract is signed.

"Premature optimization" in reality is a vacuous phrase, because premature anything is bad - it's right there in the word! (Insert joke here.) The problem is that most developers (myself included in many cases) are just not qualified to say when it is premature to optimize, because the requirements do not state anything meaningful about performance.

If you think this is overly pessimistic, I would encourage running pretty much any Android app or opening any major website on a 2018 or older smartphone. It's pretty obvious developers don't know how to build software for the probably 50% or more of customers who don't buy a new top-of-the-line phone every year.

As for performance guidelines, have a look at Jakob Nielsen's amazing evidence-backed UX guidelines such as Response Times: The 3 Important Limits[0]. The ones which should stick are that 0.1s feels instantaneous (probably not when using a scroll bar, but that's another matter) and 1s is the limit for not interrupting the user's flow of thought. In other words, if anything your program does takes more than 0.1s to respond to user input on the target hardware that should at least be acknowledged and prioritized (maybe at the end of the backlog, but at least then it's a known issue).

[0] https://www.nngroup.com/articles/response-times-3-important-...
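
One common way to act on those thresholds is to defer any busy indicator until the ~0.1s budget is blown, so fast responses stay clean and slow ones at least get acknowledged. A minimal TypeScript sketch (the threshold constant and the show/hide callbacks are placeholders, not from Nielsen's article):

    // Sketch: show a spinner only if an async operation exceeds the
    // ~100 ms "feels instantaneous" threshold.

    const INSTANT_MS = 100;

    async function withDeferredSpinner<T>(
      work: Promise<T>,
      showSpinner: () => void,
      hideSpinner: () => void
    ): Promise<T> {
      const timer = setTimeout(showSpinner, INSTANT_MS);
      try {
        return await work;
      } finally {
        clearTimeout(timer);  // cancelled if the work finished "instantly"
        hideSpinner();        // safe no-op if the spinner never appeared
      }
    }

    // Usage sketch (fetchDropdownOptions is hypothetical):
    // const options = await withDeferredSpinner(fetchDropdownOptions(), show, hide);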


I have a theory on the dynamic being described, based on my experiences as a software developer.

When you are juggling and prioritizing a list of tasks, they are occupying short-term memory. If you can perform the current task from muscle memory, you don't upset working memory. But as soon as you start having to think about it, and worse, once you start editorializing in your head, things start to drop off.

The worst cases of this phenomenon occur when you finish a hard task and have to ask yourself, "What was I doing? Why was I doing this?"

So you want your tools to be unobtrusive. People who love hand tools know this. Somehow we do not.


Indeed.

"If I have to pay nearly as much or more attention to the use of this software than it saves me, it is by definition worthless."

Intellisense in VS/Resharper went through a nice period a few years back where it was smart enough to match what was meant, fast enough that it didn't (usually) drop keypresses, and (crucially) stupid enough that typing the same thing twice would get the same results. Sadly, all of the above have been lost in the hail of features...

Context sensitivity is not always a good thing in a tool. Muscle memory wins.


Sounds like this PCR software should be structured around letting the operator enter information in as freeform a way as possible, basically collecting timestamped notes that are reviewed and formalized afterward.

The up-front UI can be a very fast capture layer, with an async layer to parse and formalize the data, and a third stage that prefills the final form (if the paper layout needs to be maintained, etc.) and allows very flexible revision.

The last step would be archiving locally and then transmitting to whatever outside system is needed.
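
A minimal sketch of that capture-first flow (field names and the parsing rule are invented): stage one only appends timestamped freeform notes, and a later pass maps whatever it can onto structured fields, leaving the rest for human review.

    // Sketch of a capture-first PCR flow (names hypothetical): stage 1 only
    // appends timestamped freeform notes, so nothing can lag the medic;
    // stage 2 parses them into structured fields for later review.

    interface RawNote {
      takenAt: Date;
      text: string; // e.g. "BP 120/80", "pt alert, oriented x3"
    }

    interface DraftReport {
      vitals: { takenAt: Date; systolic: number; diastolic: number }[];
      unparsed: RawNote[]; // anything unrecognized stays visible for review
    }

    const notes: RawNote[] = [];

    // Stage 1: capture. Must be as close to free as possible.
    function captureNote(text: string): void {
      notes.push({ takenAt: new Date(), text });
    }

    // Stage 2: formalize, run later/asynchronously.
    function formalize(raw: RawNote[]): DraftReport {
      const report: DraftReport = { vitals: [], unparsed: [] };
      for (const note of raw) {
        const bp = note.text.match(/BP\s+(\d+)\/(\d+)/i);
        if (bp) {
          report.vitals.push({
            takenAt: note.takenAt,
            systolic: Number(bp[1]),
            diastolic: Number(bp[2]),
          });
        } else {
          report.unparsed.push(note);
        }
      }
      return report;
    }

    captureNote("BP 120/80");
    captureNote("pt alert, oriented x3");
    console.log(formalize(notes));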


It'd be even better if, say, your insurance card had your basic medical info stored on it in some encrypted form, and then the EMTs could just scan the card and have half the form populated with accurate information automatically. Or if it just stored some kind of UID that the PCR system could use to pull that information from the insurance company's database.
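
A sketch of the UID variant (the endpoint, response fields, and lack of auth here are all invented; a real system would need consent handling, authentication, and an offline fallback):

    // Hypothetical sketch: the card carries only an opaque identifier, and
    // the PCR system exchanges it for basic demographics to prefill the form.

    interface PrefillData {
      name: string;
      dateOfBirth: string;
      knownAllergies: string[];
    }

    async function prefillFromCard(cardUid: string): Promise<PrefillData> {
      const res = await fetch(`https://insurer.example/members/${cardUid}`);
      if (!res.ok) throw new Error(`lookup failed: ${res.status}`);
      return (await res.json()) as PrefillData;
    }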


I saw this myself in an ambulance in the Boston area. The PCR system they were using ran on some Toughbook-style computer, and I could tell that it was laggy and slow to input basic stuff on. Apparently whoever had put it together cared more about durability (important) than performance. Or maybe it worked great when it was deployed, but the newer software updates weren't built for such a slow machine. They used it, but it was like inputting data on a slow ATM.


Or maybe they made some stupid decisions like making each update query something over the Internet, which works fine when testing, but not when you're on the streets, in close proximity to a lot of other cellular Internet users.
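
If that is what happened, the usual fix is cheap. A sketch, assuming the reference lists change rarely: serve dropdowns from a locally cached copy and refresh it opportunistically in the background, so the UI never waits on the cellular link.

    // Sketch: dropdown options come from a local cache, never from a live
    // request on the UI path; a background refresh updates the cache when
    // connectivity allows. The URL and data are illustrative.

    let medicationOptions: string[] = ["aspirin", "epinephrine", "naloxone"]; // bundled defaults

    // UI path: synchronous, no network involved.
    function getMedicationOptions(): string[] {
      return medicationOptions;
    }

    // Background path: best-effort; on failure we just keep the old list.
    async function refreshOptions(): Promise<void> {
      try {
        const res = await fetch("https://example.org/api/medications");
        if (res.ok) medicationOptions = (await res.json()) as string[];
      } catch {
        // offline or congested link: silently keep the cached list
      }
    }

    setInterval(refreshOptions, 15 * 60 * 1000); // refresh every 15 minutes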


I can't dispute that fast software is a desirable trait for users. I would love to spec low latencies into the requirements documents for my GUI projects. But from a business standpoint, UI latency requirements are a good way to sabotage a project.

Slow solutions have a dramatic business advantage: using libraries, you can shave months off your delivery schedule by adding milliseconds to your UI delays. The months add up; so do the milliseconds. By using someone else's crufty libraries, you can write a calculator app over the weekend. Libraries will handle the double buffering for screen painting, unpacking fonts and rendering them into bitmaps, how to reflow text onto a screen, etc. It might take seconds for the app to load all its assets and display the first user activity, but it was cheap to create, easy to iterate on, and it gets the job done. These are hallmarks of a valuable startup engineering effort.

But the development process for speedy applications is anathema to businesses of any size. You have to throw away Electron and hire systems engineers with knowledge of the entire platform. After some profiling, they may determine that the required speed is only possible once the system event handler has been replaced with a pared down routine, or when the framework's audio interface is bypassed so the sound card can be accessed through low-latency DMA mode. You have to repeat this for every platform you intend to support. You have to throw away the wheel that everyone else is using and pay to reinvent one that spins a little faster.

To paraphrase Joel Spolsky's comment on rewriting software: are you sure you want to give that kind of a gift to your competitor? For most companies it isn't worth it.

Are there exceptions? Yes: a motivated engineer like John Carmack may work months to eliminate another 3ms frame latency out of a VR headset simply because of personal passion. Google can amortize additional billions by wringing 100ms out of page load times by funding a browser project so complex that you also have to design a new build system for it. But if you're not Carmack and you're not Google, you probably can't afford to reinvent the wheel.


I don't agree that it's THAT hard.

I used to work with WebForms and Cocoa (before the iPhone) back when enterprise was completely allergic to the web. Not only were those tools faster, but the development process itself was quicker and less traumatic than it is today. Interfaces were crude, but snappier. All grey, and users preferred using the return key to move between fields. And we could knock out a calculator app in a few hours.

The landscape was much more diverse too: a lot of women, and older folks (40/50+) coming from FoxPro, Clipper or even COBOL (mostly on WebForms). I don't know what happened to those people when everyone moved to web tech. A lot of people my age moved to backend because they found HTML/CSS/JS too hard. Cocoa guys moved to iOS. I'm on frontend + iOS now.

For the enterprise shops I know around here, it all went to hell when both business AND developers started pushing modern multi-platform development: Qt, Java, Adobe, Cordova, you name it. Suddenly apps became crappier, and a lot of people got pushed away from development because it was becoming too hard for them. After that everything became browser-based. Mobile was weird for a while (too much Cordova), but it later settled on React Native.

I know WebForms (and Interface Builder!) might look primitive and limited to today's developers, and I might be looking through rose-colored glasses, but I remember when that was enough to build 90% of the apps people need; you didn't need John Carmack or direct access to the audio buffer. Since this tech still exists, I can't see why that's not the case anymore :/


John Carmack doesn't work hard to eliminate 3ms of frame latency simply because of personal passion. In VR applications, latency kills and will make people sick. Those hard-fought milliseconds are needed: you only have about 11 ms per frame at 90 Hz.

As someone who works in the performance arena myself, I feel like a lot of my work is undoing the mistakes that come from the attitude that it's OK not to care about performance, so I'll just do it the 'easy' way. It's pretty much a form of technical debt that is somehow culturally acceptable. For years Moore's law has covered everyone's asses, but perhaps there will be a reckoning yet.


I don't mean to belittle performance work, but the value of Carmack's successful optimizations in the VR industry doesn't translate to most businesses.

Let's say we have a $1M/year business opportunity with a requirement that a user be able to take 1MB photos on a phone and have them appear on a PC/laptop, both connected to an 802.11g access point on the same internet-connected network segment. The time requirement on this phone->PC transfer will dramatically affect the software development effort in terms of time, money, and skills.

With a requirement spec of 1 photo every 10 seconds, commodity software can already accomplish this. With a cloud account from their smartphone vendor, a user can monitor their photo album from their PC browser; photos taken on their phone will be synced to the cloud, then visible on the laptop. Assuming both devices are on wifi with a typical cable modem connection, the transfer of the photo from phone->cloud->PC-browser would occur within 10 seconds. No software engineering required, and it supports Mac/Windows/Linux and iOS/Android out of the box.

Now let's spec a 1-second transfer time. Cloud syncing is out; we'll need the phone and the PC to communicate directly over the wifi link; we'll have to write a phone app that sends the photo file over a socket, and a PC app to receive it and display it on screen. 1MB will take 500ms across a healthy 802.11g link, so we have 500ms left over for the phone's camera app to store the photo as a file, for our app to establish a TCP connection, and for our PC app to accept the incoming file and display it in an ImageBox control. We can still meet the 1-second spec on a poor wifi connection... Wait, how does the phone know the address of the PC it's going to send the picture to? We'll need to write our own discovery service to allow the phone to learn the IP of the PC... Should we do it at layer 2, or cloud-assisted? Or maybe we'll have the phone scan a QR code on the PC in order to link the two... We've only imposed a 1-frame-per-second requirement, but we're going to require a software team familiar with PC app development, phone app development, and layer 2 and layer 3 network protocols. You could hard-code the IP addresses for a quick demo, but you're looking at a small team coding in several languages for several months just to support a single platform (say, a Windows PC with an Android phone). Additional platforms will require a repeated effort.
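
For scale, the "hard-code the IP addresses for a quick demo" version really is small; a Node/TypeScript sketch with invented paths and addresses, and none of the discovery, resilience, compression, or multi-platform work that makes up the real cost:

    // Quick-demo sketch of the direct phone->PC transfer: hard-coded
    // address, no discovery, no retries, single platform. Run the
    // receiver and sender as separate Node processes.
    import * as net from "net";
    import * as fs from "fs";

    const PORT = 9000;
    const PC_ADDRESS = "192.168.1.50"; // hard-coded for the demo

    // Receiver (PC side): write each incoming connection to a file.
    export function startReceiver(): void {
      net
        .createServer((socket) => {
          const out = fs.createWriteStream(`photo-${Date.now()}.jpg`);
          socket.pipe(out);
          out.on("finish", () => console.log("photo received"));
        })
        .listen(PORT);
    }

    // Sender (phone side): stream the saved photo over a TCP socket.
    export function sendPhoto(path: string): void {
      const socket = net.connect(PORT, PC_ADDRESS, () => {
        fs.createReadStream(path).pipe(socket); // pipe ends the socket at EOF
      });
      socket.on("error", (err) => console.error("transfer failed:", err));
    }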

But Carmack would never be happy with 1FPS. What if we spec a 40ms transfer time (24FPS), such that the user could use the laptop/PC screen as a hollywood-movie-grade viewfinder. Now we have to throw away the phone's camera app and write our own that bypasses the save-to-file step and streams the image data directly to the PC over UDP, perhaps RTP. Since the raw camera data takes too long to send over the network, we may also need to visit image compression techniques. Some techniques will be faster on certain phones, so we'll need a heuristic compression algorithm selector to make sure we're correctly optimizing for spending the least amount of time in the combination of compression+transport. On the PC side we'll need an RTP client to receive the data, but also a method to paint the frame buffer data directly into the viewport (no more loading files into an ImageBox control). We may need to explore the gains from painting the image as an OpenGL texture. At this point, our engineering budget has exploded! We're talking about hiring telecom engineers with experience tuning VoIP stacks, digital signal processing and codecs, and video game engine experts to help with low level graphics in OpenGL/DirectX. Perhaps we'd need to retain Carmack himself to advise.

If you have a business opportunity worth $1M/year that relies on moving photos from your iPhone to your Macbook, spec'ing 10s is a great way to get your business started leveraging existing technology. Spec'ing 1s might be acceptable to give your app some polish once you're established, and if you're not FAANG and not Carmack himself, spec'ing 10ms would be a great way to kill your business regardless of size.


"Premature optimization is the root of all evil" is one of those programming sayings that's very often misunderstood.

It does not mean that performance doesn't matter or that you shouldn't think about performance.

What it means is two things:

(1) It's usually a waste of time to tinker with small optimizations until you've made things work and until you've profiled.

(2) Don't compromise your design or write unmaintainable code out of premature concern over performance.

The second is the more important and subtle meaning. It means don't let concern for performance hobble your thinking about a problem.


Why does this post have so many points? How is that article useful to most readers? I sometimes don't understand HN.

Performance matters... the article talks about a case that most of us aren't confronted with.

Besides, it concludes that in spite of paper being slower, it's still preferred, so how does that prove that "performance matters"?!

Should an article be upvoted just because it is about life and death and has a catchy (pretentiously) laconic title?


Paper is preferred because it has better performance than the software, which is the thing that people should think about. Writing software that can outperform paper shouldn't be hard to do, but apparently it is.


It's about trade-offs. You improve performance at the cost of something else you're not working on. Without the context, it's hard to tell whether that cost was justified. I.e., maybe there were 20 critical bugs and the team made the call that perf was good enough and it was better to stabilize the software. More often than not, you can always improve performance, and drawing the line is hard.


My first tablet was kinda slow. At least slow enough that when it got too old, I wondered if I even wanted to buy another tablet, because I rarely used the first one.

Then I bought a new one, which was faster and more responsive and my tablet usage completely changed ;-)

So I can personally testify that performance is a most relevant part of the user experience.


I don't think the word performance is key, but rather /user experience/.


Sometimes you need to take a step back and consider the bigger picture. Is it really necessary to make an EMT fill out a 100-field form?


Always remember the Doherty threshold.

"Productivity soars when a computer and its users interact at a pace (<400ms) that ensures that neither has to wait on the other."

https://lawsofux.com/doherty-threshold


Quote: It wasn’t even that slow. Something like a quarter-second lag when you opened a dropdown or clicked a button. But it made things so unpleasant that nobody wanted to touch it.

I'm skeptical that's the reason they didn't use it unless it's a high-volume application. Like everything in IT, it depends.

A high-volume app definitely needs attention to responsiveness and performance, because employees will waste tons of hours if it doesn't get it. That's why primary applications should probably be desktop client-server applications, where it's easier to control the UI. But those are typically only about a quarter of all apps used by an org.

For something used only occasionally, such as for customers with special conditions, a somewhat sluggish web app is probably fine. If it's easy to make it snappy, great, but not everything is, due to our annoying web (non-)standards[1]. If it takes 100 hours of programming to reduce 50 hours of UI waits over, say, a 10-year period, then the org doesn't benefit. Ten years is a good default rule of thumb for software lifetime; your org may vary. Remember to include program maintenance, not just initial coding.
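
For what it's worth, that break-even test is easy to sketch (the inputs below are invented placeholders apart from the 100-hour figure above, and the result swings either way depending on them):

    // Back-of-the-envelope break-even check for a latency fix.
    // All inputs are illustrative placeholders.

    function hoursOfWaitingSaved(
      users: number,
      interactionsPerUserPerDay: number,
      secondsSavedPerInteraction: number,
      workDaysPerYear: number,
      years: number
    ): number {
      const seconds =
        users * interactionsPerUserPerDay * secondsSavedPerInteraction *
        workDaysPerYear * years;
      return seconds / 3600;
    }

    // e.g. 10 users, 200 clicks a day, 0.25 s saved per click,
    // 250 working days a year, over a 10-year software lifetime:
    const saved = hoursOfWaitingSaved(10, 200, 0.25, 250, 10); // ~347 hours
    const programmingAndMaintenanceHours = 100;

    console.log(saved > programmingAndMaintenanceHours ? "worth doing" : "not worth it");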

In short, use the right tool for the job, and do some analysis math before starting.

[1] My "favorite" rant is that web standards suck for typical work-oriented productivity applications and need a big overhaul or supplement. Nobody wants to solve this.


I'm sorry; are you saying that electronic patient care reports are not a "high volume application", that they should be "desktop client-server applications," or that the statements by the users about why they don't use the system should be disregarded?


I don't have the usage and programming-labor stats to make a definitive statement on that particular application, including the size of the org using it. My main point was that "it depends" and should be subject to further tradeoff and business analysis, rather than "X is always bad".

A quarter-second lag does not sound like a significant problem to me, and may require a lot of IT funds to remove. Often such is only justified if it's a high-use application. Would you agree that 100 hours of extra programming to save 50 hours of data entry time is usually not worth it (all else being equal)? If so, let's start there and slice into further factors.

Please don't give me bad scores for trying to be logical and rational, people! Resources are limited; orgs have to spend with care. Sure, it's nice job security for us IT people if orgs spend big bucks to make all apps snappy, but from a business and/or accounting standpoint, it could be the wrong decision. You view the world differently when your money is on the line.

If you know a way to make all apps cheap, good, AND snappy at the same time, let's hear it! I'm all Vulcan ears.

Note that one can make web apps snappy, but there are often maintenance and/or creation-time tradeoffs in doing so. A few well-run shops do it well, but most shops are average-run.




