The Ethics of Web Performance (timkadlec.com)
191 points by gmays on Aug 1, 2019 | 163 comments



The article seems to put the burden of performance on the developers... but I've been in situations where no matter what I did in favor of performance, the efforts were always negated by the installation of third-party marketing tools. Things like retargeting platforms, tracking scripts or even recommendation systems. And let's not talk about the unreasonable client expectations about features per page...


I have unfortunately seen firsthand projects where the marketing crap was over 70% of the page load time. Put another way, the weight of the tracking software was 2x the cost of generating the entire rest of the page.

Do you have any idea how soul-crushing it is to work your ass off on site performance, to have people passive-aggressively question the team's abilities when the numbers are even 100ms too high...

And then when you're done some nosey assholes go and add over 2 seconds to your page loads just so they can do something ethically questionable. And the fact they waited til the end to add it on just proves that they knew it was going to be bad news but they did it anyway.

It sucks. It kills trust for the organization, and it kills respect for everyone in the decision chain.

My coworkers and I joke from time to time about running an ethical advertising network with reasonable software. Well, they're joking. I don't think they know I'm not entirely.


> My coworkers and I joke from time to time about running an ethical advertising network with reasonable software. Well, they're joking. I don't think they know I'm not entirely.

How would that work? And more importantly, how would it stay ethical? I imagine the very basic idea would be to not need to track users, which removes the need for the huge swaths of Javascript that pollute the internet. But would that be profitable?

I'm curious to know what thought you put into this so far.


This is exactly what we built and have been running for over 2 years. CodeFund [1] is an ethical advertising platform that is open source, does not permit or perform any tracking, and only displays ads based on the context of the site. We load our ads async so there’s no slowdown of the page. We are also open with our finances. We exist to fund open source maintainers, bloggers and builders.

[1] https://codefund.io


Yeah that’s the question isn’t it. Let’s say I run a B Corp, or a non profit. Do I really want to associate with some of these networks?

Or on the flip side, I run a website. I want to make some money to pay my expenses, but do I really want to inflict these ad networks on my readership?

It’s not the sort of project that’s going to make you rich. It’s the sort of project that lets you sleep better.


I especially hate those tools that allow the marketing department to change the page, e.g. Adobe Target. They work client-side, making their changes after the html has loaded.

They don’t work well with universal/isomorphic web apps, because they have no way of telling when hydration occurs, so they can really mess a page up if they do their filthy business too soon.

But assuming that doesn’t happen, there’s still the matter of what they call “flicker”, which is a silly word that means the original content is visible initially, and then marketing’s changes kick in. To avoid so-called flicker, a page-hiding snippet is inserted into the page, making the entire page transparent until the tool modifies it. These tools are often blocked by ad blockers, so the page will either remain blank or be restored by the page-hiding snippet after a few seconds.

You can make a blazingly fast website, and then marketing comes along and takes a big dump on it.


From bitter experience, the sort of sites you have to do remedial work on are not blazing fast. Some common problems I see are:

using unoptimised images

adding massive amounts of javascript

messed up canonicals

internal links requiring redirects to add or remove trailing /

terrible design choices - oh yes, let's put 10 products on a single page


Sorry, but no.

You can't just blame it on the marketing department.

Yes, it is a business decision, but you have the power to demonstrate that slower pages lead to fewer clicks and hurt the bottom line.

If the FT's developers can manage to sell this, then so can you. We, as developers, can't live in a bubble. We are part of the business and we need to steer the business as we see fit.


OK, you go to your boss and tell him that because of the marketing team's scripts the app takes twice as long to load. Your boss, on the other hand, says: "Who cares? This is how we make money". You try to explain that lower-income people with slower/lower-end devices can't access the content. Boss asks how much money we are losing because of this. You have no answer. What do you do?

The almost-standard HN answer when it comes to moral programming questions is to stand by your principles and resign immediately, but this usually comes from people who haven't had to deal with paying bills or the prospect of becoming homeless. Even then, if you don't do it, someone else who needs the money will.

What would you, KaiserPro, do in this situation?


> Boss asks how much money we are losing because of this. You have no answer. What do you do?

This is my point. If you can't answer the first and most basic question with hard data, you're fucked. There is literally no point going to one's boss and saying "this is morally wrong" unless you can make a business case. (or your boss has the same world view as you.)

If you can't argue that your change is going to move a KPI for the better, you've lost your argument.

I know lots of "tech" people like to waft about thinking that they are above such mundane things as business KPIs (OKRs, if your management team has swallowed Google's rubbish). That's why they think they are such maligned geniuses.

Tech is an enabler for a business to do business. It's just another tool, like a warehouse, van or envelope. If you want to make sure you're not going to get slung out during the next re-org, you have to make sure that you are performing to what the _business_ wants from you. That's a two-way street.

For example, marketing comes up to us and says: "we need a microsite for the conference that's coming up"

Sure, I could make a K8s cluster to host a website. But I could do it quicker and cheaper by using a PaaS (or SaaS), trumpet the speed and clarity of my execution, and underline in triplicate the capex cost I've saved (ignoring the opex, because they won't find out about that till next year, plus it's not my bill, I'm measured on staff costs).

Is it cutting edge? Is it fuck. Does it get me brownie points? You bet ya.

I can then spend those brownie points on trying to change things about my working environment that I don't like. As I've made marketing happy, I can work on them to find a better tracking company, or better yet, get them to pay us to develop their BI pipeline.

Sure, it's not a "pure tech" experience. But I can effect way more change than reskinning the website in yet another framework, following the latest animation style guide from some sadistic nonce claiming to be a UX guru.

So no, I wouldn't quit over this. Because I'm not a Wilde-esque dandy.


> There is literally no point going to one's boss and saying "this is morally wrong" unless you can make a business case. (or your boss has the same world view as you.)

If your boss is worth their salt, reasonable moral objections will be taken seriously. It's management 101. If not, you are probably not going to be happy there anyway.


> OK, you go to your boss and tell him that because of the marketing team's scripts the app takes twice as long to load. Your boss, on the other hand, says: "Who cares? This is how we make money".

What company do you work for that makes more money by causing your app to load twice as slow? I think it's reasonable to expect a developer to measure and present the correlation between performance and revenue.


> What company do you work for that makes more money by causing your app to load twice as slow?

I used to work in e-commerce and, in spite of my repeated warnings about performance, no boss cared. Then some SEO consultant said "google wants fast websites". Guess who got blamed?


> What company do you work for that makes more money by causing your app to load twice as slow?

I'm not sure if you are trolling. Can you seriously not parse out that sentence? It is the ad revenue that makes the money. The app loading slowly is a side effect. No one (that I know of) is making money by deliberately making slow applications; just think about it for a second, that makes no sense. However, a lot of applications make money from ads.


> I think it's reasonable to expect a developer to measure and present the correlation between performance and revenue.

I'll bite. Do you really do this? If so, I'm seriously curious how you could conclude straight-line causation (not just correlation) between an increase in performance and an increase in revenue. Potentially for a large-scale site like Twitter or Facebook, where poor performance means fewer eyeballs, but even then it's just a correlation.


Graphs.

I was here https://medium.com/ft-product-technology/a-faster-ft-com-10e... when they did this.

It's easy to find graphs that support your conclusion.


You don’t need sound reasoning that could withstand scientific scrutiny. Management won’t be able to distinguish valid from bogus data. Just p-hack until it fits your conclusion - they’ll eat it up in no time.


Where I work we already have people doing this, so I don't have to. If your project doesn't tie directly to revenue it surely has KPIs which would be affected by performance.


Been there, done that. Alas, with poor results. Now I'm happy in non-business activities.


This is so true. You can maneuver within client expectations, and you certainly can optimize every particular feature, but marketing tools are often non-negotiable.

From the user's standpoint they are pure overhead, too. For my own project, I decided to ditch them completely. Including the analytics. Well, it would have been nice seeing all the nice plots, but at the end of the day they just show vanity metrics. The real impact is immeasurable.


There should be analytics solutions that don't put the burden on the client, server-side products. Or lightweight ones that mainly track button clicks and navigation, instead of e.g. recording all mouse and keyboard interactions.

I want to think basic analytics has a very low impact on performance. Ads, on the other hand... Which all inject their own analytics as well.


There are analytics tools that run on the log files. If that is too high a load on the server, I'm sure they can be run offline by downloading the logs to your local machine.
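The core of it is tiny, too. A rough sketch in Python, assuming an nginx/Apache "combined" access log (the file name and regex are placeholders, adjust to your own format):

    import collections
    import re

    # Count page views per path straight from the access log: no client-side
    # script, no third-party beacon, just data the server already has.
    request_re = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

    views = collections.Counter()
    with open("access.log") as log:
        for line in log:
            match = request_re.search(line)
            if match:
                views[match.group(1)] += 1

    for path, count in views.most_common(20):
        print(f"{count:8d}  {path}")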


I'm an outsider to these kind of processes: how is a 3rd party marketing tool added to a site at a BigCo?

I'm still fairly certain it involves developers, who, in theory, should have pre- and post-numbers on performance, visits, etc. Or am I delusional?


I believe the point is that developers don't have a say in whether some 3rd party lib gets added. Some PM gets a lunch with a vendor, they are being sold on the miraculous technology that will totally disrupt everything and that's that.


> I believe the point is that developers don't have a say in whether some 3rd party lib gets added

And even more so with tools like Tag Manager... not only do you have no say, you also don't know. And one day the site breaks or is too slow to be useful and someone puts the blame on you.


They do have a say, they just value keeping their jobs more than doing the right thing. That isn't ridiculous, but they need to stop blaming everyone else (who are essentially doing the same thing) for the problem as though they aren't contributing.


With that understanding of "having a say", you're having a say in being kidnapped, too. "Hey, you could've fought the guy with the gun... you just valued your life more than doing the right thing".

It's not a useful definition imho.

It's not that there aren't any developers contributing to the problem, those building the fat 3rd-party scripts definitely are (though they're likely under different constraints), but the argument for "you're contributing if you don't use your credentials to secretly edit out the javascript that loads that tool and get fired" is pretty far out there, I think.


Honestly you can't see the difference between being held at gunpoint and voluntary employment? It's not like web developers are working low-skill jobs for minimum wage and scraping by here.


I see the difference, I just don't believe it's a reasonable argument. You don't make a difference if you quit over performance issues, your personal position has no weight in the decision-making process, you have no say in the matter.

Yes, you can make the argument that "well nobody actually has the power to decide, they are all being forced by something", but it gets very weird quite quickly when you're looking at the people making the decisions. Whenever it gets into "but is there actually such a thing as free will" territory, I feel like the usefulness of the definitions used isn't there.


> your personal position has no weight in the decision-making process, you have no say in the matter.

Not on an individual level, no, but collectively it could. I'm not advocating that webdevs organize to boycott companies bloating things because it isn't really that big a deal, but if we imagine the ethical situation were much more important then that's how they could go about changing the situation.

Of course, it's much easier to pretend that they have no option at all than to admit that, compared to being slightly less well off, it really isn't that big a deal to them. It lets them virtue-signal that they care without accepting any blame for the situation, while also conveniently allowing them to point the finger at their mortal enemies in marketing. Nevermind that marketing needs to put food on the table too.


That's generalized to the point of uselessness, isn't it? "You're contributing to imperialist wars" - "No, I'm not" - "well, not personally, but if everybody on earth stood together and said no, nobody would be going to war".

You're very right that marketing needs to put food on the table too, that's what I hinted at regarding being forced. There's a very clear difference though, in deciding what to do (marketing) and not deciding what to do (devs, at least those where they aren't involved in the decision, which is a lot).

Whoever is in charge could very much make a different choice in this particular manner, a developer getting a task cannot. What a marketing person can't do, similarly, is say that they won't do their job. How they do their job is mostly up to them, just as the developer tends to be free in choosing how they get that 3rd-party-lib into the site, as long as they do it.


> I'm still fairly certain it involves developers

It doesn't. Once you have Google Tag Manager added, the marketing team can add scripts as they like through an admin interface. Point, click, murder.


He actually says that devs try to do their best but the business models make it hard. I can very well resonate with that. With decent effort we could make our site respond in a tenth of the current time. But no one ever asks for it, nor will anyone give you the opportunity. Even performance-centric companies like Google are now doing an insanely bad job. Business applications perform badly even on modern desktops. They will probably try to keep the user-facing parts of the ad machine fast, but everything else is not what I expect from a company with such a mindset. They of course never did anything to improve the state of ad tech.


Google ad bidding requires 200 milliseconds to run the ad auction. I could send a packet all the way to Australia and back in that time!

If you're trying to make a page load instantly (100 milliseconds - the edge of human perception), it isn't possible with Google ads, even the text only ones.


Although 100ms would be a decent loading speed compared to today's standards, 100ms is not the edge of human perception. 100ms equals 2.5 frames at 25 fps, and even a 1-frame blank image (40ms) is noticeable at that speed.

In my experience the edge of visual perception is somewhere below 10ms


I am considering the 'click on thing, see result' perception loop, which is far slower than the 'see one thing happen and then see another thing happen, and not be sure if they happened at the same time or sequentially'.


Even the difference between ~16ms and ~33ms is noticeable to most people for the "click to see result" perception loop -- that's the difference between 60fps and 30fps.

100ms is a glacially slow reaction time in comparison, and it's a bit sad that most of the tech industry is setting their performance goals so low.

(In fairness, I suspect most people wouldn't be able to verbally express why the 16ms feels better to them than the 33ms, but would subconsciously notice the difference.)


This is acknowledged at the end of the article, in a section called "Performance is an ethical consideration, now what?"


I recently emailed the Guardian (UK newspaper) to tell them that an element of their website caused one of my CPU cores to spike to 100% utilisation while it was in view. It happened on my laptop, work desktop and phone. If you're a visitor, it was the graphic next to the podcast banner which looked like a sound wave.

They've removed the element now and sent a very nice message saying they'd investigate.

I really think/hope that the next "big thing" in software engineering will be energy efficiency in some form or another.


Oh praise be. That's been screwing with me for months and I couldn't find a bug report page. Thank you anonymous Internet person!


> I really think/hope that the next "big thing" in software engineering will be energy efficiency in some form or another.

The future is here, and it's called uBlock Origin.


Yup, that thing completely killed scrolling perf in Android Firefox.


I think that the lumbering hogs we call publication web sites these days are making sure that you as a reader are paying a cost to use their otherwise free web site; paying a subscription is better, but no one will do that. People on mobiles pay a tax that people on desktops don't, and people on bad connections in poor areas pay a tax that people in rich fibre-wonderlands don't, and yet energy and money is wasted on transferring garbage that nobody wants.

I seem to get dismissed whenever I suggest this, but these web sites could instead try another approach: remove all the garbage, have a beautiful, clean web site, and implement artificial rate limiting on all connections to it. If you want a super fast experience, pay a subscription! What are the upsides?

* Less bandwidth and energy and mobile phone battery is wasted.

* The site will still load in the same time that people are normally used to anyway.

* Rich people and poor people get the same experience; you don't pay a poor people tax if you are poor. Equity!

* Mobile phone users don't pay a tax, either.

* Meanwhile, Rich people have disposable income; they should be spending money on magazines and newspapers that people are producing anyway, to show their support, but they don't, so here's an incentive. They also indirectly support the poor people with news/media, who can't afford to pay a subscription (or wouldn't benefit anyway).

Isn't anyone trying this model?


I don't think this would work, for a simple reason: Google. It's a large source of traffic for news sites, it punishes sites' rankings for being slow, and it can cache pages and preload requests, rendering throttling shenanigans somewhat useless.


For this to be a tax someone has to collect--as it is, it's just a straight up degraded product to nobody's benefit. At least taxes get used for something.


There's no model yet.

If you rate limit too aggressively users will interpret it as breakage and navigate away from your site.

If you don't rate limit aggressively enough users won't notice the difference.

You haven't given any evidence that there's a third possibility.


Here's an idea for a model:

Use logarithmic decay of speed down to the rate limit; the first bit of browsing is a fast preview. As you continue browsing, a pop-up explains that you're being rate limited and that you can pay if you wish to resume unlimited browsing speed.
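A minimal sketch of that decay curve in Python (all the numbers are arbitrary; subscribers would simply skip the throttle):

    import math

    # Speed starts high and decays towards a floor as the visitor keeps browsing.
    def allowed_kbps(pages_viewed: int, start=5000.0, floor=200.0, k=1500.0) -> float:
        return max(floor, start - k * math.log1p(pages_viewed))

    for pages in range(0, 30, 5):
        print(pages, "pages viewed ->", round(allowed_kbps(pages)), "kbit/s")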


If you make the website simple enough that it loads quickly on a bad connection and it loads the same content on a fast connection, why would anyone pay extra for it to be even faster?


I think the idea suggested is that the website could load quickly but doesn't, instead artificially loading very slowly regardless of connection speed.


I know it's comparing Apples and Oranges, but: "War and Peace" by Tolstoi is 1.9 MB. A Twitter profile page is 2.5 MB, and that is already optimized (Posts below the fold are not loaded).

Websites are bloated - so I am glad the ethical aspect is getting more and more into the focus.


I have a rule of keeping every page on wordsandbuttons.online under 64KB. And no dependencies so, apart from occasional pictures from wikimedia, no other hidden costs for the visitor.

The number is of course arbitrary, but surprisingly it's usually quite enough for a short tutorial or an interactive explanation of something. And I don't even compress the code or the data. Every page source is human-readable.

So it is possible to have a leaner Web. It just requires a little bit of effort.


With the help of transport compression like gzip, the total size can still be reduced by almost the same amount even if you don't minimize it.
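Easy enough to sanity-check for any given page; a rough comparison in Python (file names are placeholders):

    import gzip
    from pathlib import Path

    # Compare raw vs gzipped sizes for the original and a hand-minimised copy;
    # transport compression usually closes most of the gap between the two.
    for name in ("page.html", "page.min.html"):
        raw = Path(name).read_bytes()
        packed = gzip.compress(raw, compresslevel=9)
        print(f"{name}: {len(raw):,} bytes raw, {len(packed):,} bytes gzipped")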


I once wrote (maybe 15-20 years ago?) an HTML output processor that tried to make it more compressible while still producing the exact same output. It did things like removing comments, transforming all tag names to lower case, sorting tag attributes and canonicalizing values, and collapsing whitespace (including line feeds).

And some more tricks I've forgotten (some DOM tree tricks, I think), mainly to introduce more repeated strings for LZ and a more unbalanced distribution (= fewer output bits) for Huffman. In other words, things that help gzip compress even further.

Output was really small, most pages were transformed from gzipped sizes of 10-15 kB to 2-5 kB without graphics.

The pages loaded fast, pretty much instantly, because they could fit in the TCP initial window, avoiding extra roundtrips. The browser sent the request and the server sent all the HTML in the initial window even before the first ACK arrived! I might have tweaked the initial window to 10 packets or something (= enough for 14 kB or so), I don't remember these TCP details by heart anymore.

I wonder if anyone else is making this kind of HTML/CSS compressibility optimizer anymore. Other than JavaScript minimizers.


They are! Around five years ago I wrote a CSS minifier (creatively called CSSMin, available on GitHub, and still in use at the company I work for) which rewrote the CSS to optimise gzip compression. Although it never really took off, I think that some of the lessons from it have been rolled into some of the more modern CSS optimisation tools.


It's important to understand that minimizing does not necessarily produce the most compressible result. You need to give LZ as many repeated strings as possible, while using as few distinct ASCII characters as possible, with as unbalanced a frequency distribution as possible.


I wrote (well, expanded) a similar tool for compressing Java Class files. I had a theory that suffix sorting would work slightly better because of the separators between fields, and it turned out to be worth another 1% final size versus prefix sorting.


I've found a cheap trick to compress Java software: extract every .jar file (those are zip archives) and compress the whole thing with a proper archiver (e.g. 7-Zip). One example from my current project: original jar files, 18 MB; expanded jar files, 37 MB; compressed with WinRAR, 10 MB.

And that's just a little project. For big projects there could be hundreds of megabytes of dependencies. Nobody really cares about that...
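For anyone curious, the whole trick is a few lines of Python (paths are placeholders; xz stands in for 7-Zip/WinRAR here):

    import pathlib
    import tarfile
    import zipfile

    src = pathlib.Path("libs")            # directory full of .jar files
    work = pathlib.Path("libs-expanded")

    # .jar files are just zip archives: unpack them all...
    for jar in src.glob("*.jar"):
        with zipfile.ZipFile(jar) as zf:
            zf.extractall(work / jar.stem)

    # ...then recompress the whole tree "solid", so redundancy between
    # class files from different jars can be exploited.
    with tarfile.open("libs.tar.xz", "w:xz") as tar:
        tar.add(work, arcname="libs")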


It's a tradeoff; in a lot of cases, the size of a .jar doesn't really matter because it ends up on big web containers.

It does matter for e.g. Android apps though. But at the same time, the size of the eventual .jar is something that can be optimized by Google / the Android store as well, using what you just described for starters.

I know Apple's app store will optimize an app and its resources for the device that downloads it. As a developer you have to provide all image resources in three sizes / pixel densities for their classes of devices. They also support modular apps now, that download (and offload) resources on demand (e.g. level 2 and beyond of a game, have people get past level 1 first before downloading the rest).


It's true, but this was brought up as an anecdote/parallel.

Attributes in html have no fixed order, and neither do constants in a class file. There are multiple ways to reorder them that help out or hinder DEFLATE.

And also I was compressing the hell out of JAR files because they were going onto an embedded device, so 2k actually meant I could squeeze a few more features in.


There’s a lot of redundancy between class files in Java and zlib only has one feature for that and nobody uses it. It would require coordination that doesn’t really exist.

For transport, Sun built a dense archive format that can compress a whole tree of files at once. It normalizes the constant pool (a class file is nearly 50% constants).

Many Java applications run from the Jar file directly. You never decompress them. But you also only see something like 5:1 compression ratios.


That's extremely interesting. Would you happen to still have the code lying around? Or would you recommend some introductory materials on this topic?


I might still have it on some hard disk that's been unplugged in storage for ages. But probably long since lost. I wrote it by trying out different things and seeing how it affected gzipped size.

Just use some HTML parser and prune HTML comment nodes and empty elements when it's safe (removing even an empty div, for example, is not!), collapse whitespace, etc. If the majority of text nodes are in lower case, ensure tags, attribute names etc. are as well. Ensure all attribute values are written the same way, say attr=5, but not attr='5' or attr="5". Etc. That's all there is to it.

It saved a lot already as a result of whitespace collapsing, which also removes high-frequency chars like linefeeds, leaving shorter Huffman table entries for the data that actually matters.

Study how LZ77 and Huffman coding work.
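A very rough sketch of those rewrites in Python; regexes stand in for the real parser for brevity, and it deliberately ignores <pre>, <script>, attribute edge cases and so on:

    import re

    def normalize(html: str) -> str:
        html = re.sub(r"<!--.*?-->", "", html, flags=re.S)        # drop comments
        html = re.sub(r"(</?)([A-Za-z][A-Za-z0-9]*)",             # lower-case tag names only
                      lambda m: m.group(1) + m.group(2).lower(), html)
        html = re.sub(r"=\s*'([^'<>]*)'", r'="\1"', html)         # one attribute quoting style
        html = re.sub(r"\s+", " ", html)                          # collapse whitespace
        return html

    print(normalize("<DIV  id='x'>\n   Hello <!-- note -->   <B>World</B>\n</DIV>"))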


Wow! That sounds fascinating.


If your page is static, it's even worth trying something like zopfli or advancecomp to maximise compression ratio in ways too expensive to do "online".


That's obviously true, however a minimized version will require less memory and slightly fewer CPU cycles* to compress, and on the client side it requires slightly fewer resources as well.

I do realize how insignificant a difference that would be. * Then again, not much of a difference, since the DOM tree itself would consume orders of magnitude more memory.


Probably not less memory. zlib is based on a design that dates back to an era where you might only have 250-350 kilobytes (not a typo) of RAM to work with, and it was never really extended beyond that. It has a window it keeps in memory and if your file is longer than that window, you hit peak memory and stay there (you might actually hit that window immediately. I've forgotten how that part works, but some chunks of memory are pre-allocated).


That's really DEFLATE; the sliding window of standard deflate is 32KB. Both compression and decompression have some overhead (compression more so, as you might want to have index tables and whatnot to make finding matches faster), but even with the worst possible intention there's only so much overhead you can add.


Level 9 uses 128k, if memory serves.

We’re talking about HTTP here, and gzip is the only reliably available compressed transport encoding.

On the plus side, because it is so resource constrained you have had it on your phone for ages, and might even see it on IoT devices.


> Level 9 uses 128k, if memory serves.

That's probably a misunderstanding or misremembering: the DEFLATE format can only encode distances of 32K (the proprietary DEFLATE64 allows 64K distances but not everything supports it).


Have we provided any tools that managers are capable of using to see page weight and explore on their own? Or are we making graphs and showing them charts?

Maybe we are missing a plugin with a mileage gauge.


As an Engineering Manager I had engineers showing me graphs and charts, but I also know how to look up things on my own. But I don't think either case is widespread.


The fact that you are on hacker news suggests you are the Engineer turned Manager instead of tech enthusiast who went to business school. Yes?

People are a little more reliable when they have the option to figure things out for themselves. I’m not sure entirely why that is. But if pressed, I’d conjecture it’s something to do with not wanting to be seen asking subordinates stupid questions, to the point of preferring to be ignorant or half blind instead. “Keep silent and let them suspect, or speak and remove all doubt.”


Imagine "War and Peace" being written without descriptions of characters or places, just bland facts about what happened. It would probably go under 60KB! Damn this Tolstoi guy put in so much fancy descriptions and dialogs in there, we could do better!


I think this is a very thoughtful article. As web devs we are much more likely to be developing and testing on beefy machines and beefy Internet connections than the general population is. Let's remember that we're building this stuff for other people, not just for us. :)

I'll take one issue with one thing in the article (emphasis added):

> The cost of that data itself can be a barrier, making the web prohibitively expensive to use for many without the assistance of some sort of proxy technology to reduce the data size for them—an increasingly difficult task in a web that has moved to HTTPS everywhere.

But: HTTP/2! It's only available over HTTPS, gives really significant performance improvements and is supported by Chrome on those low-end Android devices the article mentions (and basically every browser other than IE11 on Windows < 10 and Opera Mini).


The "proxy technology" sounds more like Opera: it's not simply compressing the existing stream, it does things like aggressively recompress your images at lower quality or even size. HTTP/2 will deliver your 2MB hero image using 2MB of data. A compressing proxy will deliver a slightly uglier 100kb version.

Oh, and save a lot of data by never downloading the ads.


I remember many years ago when I had a 3G internet connection (which sometimes even had to work via 2G networks) for my laptop, the service would automatically recompress images to lower bandwidth use and speed up load times. To enable the connection I had to use a custom tool (it was essentially a fancy terminal - AFAIK it communicated with the 3G dongle using AT commands) which also had an option to adjust the (ISP-side) compression (I could disable it, leave it at default or make everything look like garbage but load fast...ish :-P).


Not a lot of people know there are some great utilities for emulating low/intermittent bandwidth and slow CPUs in Chrome.
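And if you want the same thing scripted rather than clicked in DevTools, Chrome's DevTools protocol exposes it. A sketch in Python with Selenium's Chrome driver (the throttling numbers are arbitrary, roughly a slow 3G profile; Chrome and chromedriver are assumed to be installed):

    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.execute_cdp_cmd("Network.enable", {})
    driver.execute_cdp_cmd("Network.emulateNetworkConditions", {
        "offline": False,
        "latency": 400,                         # added round-trip latency, ms
        "downloadThroughput": 400 * 1024 // 8,  # ~400 kbit/s down
        "uploadThroughput": 400 * 1024 // 8,    # ~400 kbit/s up
    })
    driver.execute_cdp_cmd("Emulation.setCPUThrottlingRate", {"rate": 4})  # 4x slower CPU
    driver.get("https://example.com")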


I would argue that if you make a well-designed, low-end website, H2 wouldn't make as much of a difference.


Anything that takes 19 seconds to become interactive on a flagship phone should be classified as faulty. It's not just unethical towards people, but towards the environment too. It's badly written code that is basically turning your CPU into a space heater.


Not just that. Anything that takes 19 seconds will probably be refreshed after 10 and closed after 15 seconds.


That's being incredibly generous too.

Google: "53% of mobile users abandon sites that take over 3 seconds to load"

https://www.thinkwithgoogle.com/marketing-resources/data-mea...


Similar logic applies server-side too. If your backend is inefficient then you may well be consuming orders of magnitude more power per query. While I take these numbers with more than a pinch of salt, I've seen comparisons that suggest that a backend written in Ruby/Python/PHP may be over 10x less efficient than one written in Java/C++.


It's really not about languages. Your code has to be perfect or trivial for the language to matter.

If it's easier for you to write better algorithms in Python than in C++, then your code will probably be faster when written in Python. Because better algorithms will easily get you further than 10x in terms of performance.
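For a feel of the scale, a toy example in Python: counting duplicates in a ~10k-item list with a nested scan versus a set. The data is made up; the point is just that the algorithmic gap dwarfs any constant-factor language difference at this size.

    import timeit

    data = list(range(10_000)) + list(range(500))   # 500 duplicates at the end

    def quadratic():
        dupes = 0
        for i, x in enumerate(data):
            if x in data[:i]:        # O(n) scan per element -> O(n^2) overall
                dupes += 1
        return dupes

    def with_a_set():
        seen, dupes = set(), 0
        for x in data:
            if x in seen:            # O(1) membership test
                dupes += 1
            seen.add(x)
        return dupes

    assert quadratic() == with_a_set() == 500
    print("O(n^2):", timeit.timeit(quadratic, number=1), "s")
    print("O(n):  ", timeit.timeit(with_a_set, number=1), "s")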


> If it's easier for you to write better algorithms in Python than in C++, then your code will probably be faster when written in Python. Because better algorithms will easily get you further than 10x in terms of performance.

In practice I have not often seen this pan out - if you've got the most common case of a GUI app with tons of callbacks which update your UI in real-time, there isn't one algorithm to optimize anywhere, but instead you die a death of a thousand cuts because each callback will be marginally slower than what it would have been in C++.

Also, it's much easier to control allocations & system calls in C++ (or other native languages for what it's worth) than in Python - and in my experience preventing those is an even better pathway to getting frames rendered in less than 5 milliseconds (which is what you want to do if you have any respect for your 144hz-screen users)


I've rarely found that GUI code is ever the bottleneck. Usually it's only a problem for mousemove handlers and code that runs directly in the event loop. Most GUI apps don't even have those, unless you're writing a game.

The big problem with Python GUIs is usually that the GUI framework is often written in Python. (Or it's that things like network & parsing code gets written in Python.) The framework does a lot more heavy lifting than the app does - in particular, it's responsible for converting the system events into clicks, drag&drops, submissions, etc. for the app, and for rendering components into bitmaps. If the framework is written in C++ and just exposes a Python API, there's no problem. That's how wxPython and PyQT do it.


Well, yes. GUIs usually don't require much algorithmic work, so there's just not much room to lose performance because of bad algorithms.

But I've seen researchers be vastly prolific in Python. They make segmentation or registration algorithms that beat the market, and then they call us (C++ engineers) to make them "fast". Usually we do, since we're trained for that. But it's never orders of magnitude. It's like 2-3 times faster.

And there were even cases where the C++ code appeared slower in the end. Well, we use a slightly different math core than numpy does, and sometimes ours does worse.


The cases you're describing sound like cases where Python is cheating. The thing that allows Python to even be considered useful for any kind of heavier computing is that a lot of standard/popular libraries are FFI wrappers to C code.

It's an excellent deal for scientific computing, where your logic is trivial and almost all work is done in C. But it's not so rosy for user-facing applications, where it's the logic that dominates.


> Python is cheating

Weird to call it cheating when Python's strong interop with C is a specific design goal of the language.


It's tongue-in-cheek. And also a little bit of jealousy; I wish Common Lisp felt a little less fragile around binding generation and package distribution - then I could have a high-level language that's almost as fast as C (when using SBCL implementation), that could still interface with even faster C code.


Only if you're sure that the particular algorithm you're working on is the bottleneck.

One particularly common performance pessimization is replacing an algorithm with one that has better big-O performance but a worse constant factor, and doing so where the N in question is much smaller than the number of times the function is called. People learn that hashmaps are O(1) while red-black trees are O(log N) and linear search is O(N), so they'll default to using a hashmap for a lookup table, even if the lookup table has 10 items but the keys are ~500 byte strings. (I'm guilty of this one myself, but at least I measured and fixed it.) Then they'll call this lookup in a loop 5000 times.

Using Python or another rapid-prototyping language can be a good idea here if you take the time to profile and identify the bottlenecks, but very frequently you end up shipping it, throwing hardware at it, and then wondering why you're spending hundreds of thousands of dollars on AWS bills. Plus, a really big problem with many common Python architectures is that they use pure Python libraries & frameworks even on the critical path. (Big offenders here include using BeautifulSoup in your 1B-page web analysis or Django on your million-user website.)
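For what it's worth, that lookup-table case is easy to measure; a small harness in Python (key shape and sizes are made up, and whether the dict or the scan wins depends on the keys and the runtime, which is exactly why measuring beats guessing). Note that CPython caches string hashes, so the lookup key is rebuilt on every call to keep the hashing cost in the picture:

    import timeit

    keys = [f"{i}-" + "x" * 500 for i in range(10)]   # 10 long string keys
    as_dict = {k: i for i, k in enumerate(keys)}
    as_list = list(as_dict.items())

    def fresh_key() -> str:
        i = 7
        return f"{i}-" + "x" * 500    # new object each call, hash not cached

    def dict_lookup():
        return as_dict[fresh_key()]

    def list_lookup():
        k = fresh_key()
        for key, value in as_list:
            if key == k:
                return value

    print("dict:", timeit.timeit(dict_lookup, number=50_000))
    print("list:", timeit.timeit(list_lookup, number=50_000))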


Something I made the mistake of saying in interviews for a bit, but now save for beers, is that often the 'hardest part' of the project is cleaning up all of these constant factor bits. They're thankless. They're everywhere, and at some point your flame chart looks pretty damned flat. There are no big wins anymore. It's fighting for inches.

(Really, the 'hardest thing' about software is all of the things that are true but few believe are important. Dropping the big O can be proven empirically.)


Yes, that's all true.

Regarding the algorithms on small data, I even made a quiz about that: https://wordsandbuttons.online/challenge_your_performance_in...

Of course, Python or C++, if you want performance, you should design with performance in mind. Measurements and profiling are implied.


Agreed. A good point that people often miss. We also are often in constant pursuit of the next feature, so don't have the luxury of the kinds of polishing or optimization that might get some of those real performance gains. I was a big fan of the accidentally quadratic tumblr.

It's important to remember though that not all significant performance gains come from better algorithms, I've taken an industry standard C implementation of an algorithm and improved perf (as measured on production workloads) by 30x by thinking about issues that didn't really concern the original author (in particular cache locality and realizing that the entire problem could be moved into the integer domain)


Most apps don't have heavy-weight algorithms that benefit from big-O optimization. So, performance is decided by the speed of common language elements (function calls, memory allocations). It will be more effort, and harder to make secure, but you will most likely see a performance gain when using a systems language. And even more so, C++ has more escape hatches (custom memory allocators, multi-threading) than Python.


I always saw the tradeoff more in terms of developer time: Python is faster to write but pretty much always slower to run (unless you're doing a lot of IO or mainly writing wrapper functions around compute-intensive compiled code).

I don't think a lot of people would actually use asymptotically slower algorithms when using a different language.


Nice! Go write that web framework and let’s benchmark that.


I actually thought about that. Not a framework but an ultra-efficient web server. Going reductionist, a static-serving web server is just a key-value store accessible via HTTP.

So I can run a Python script that gathers all the key-value pairs (names and pages), makes an optimized search tree out of them and spits it out as a sheet of LLVM code. Then I only have to embed it into a trivial HTTP "answering machine", assemble it all for the target architecture as part of the deployment - and that's it.

I have more fun things to do right now, but my hosting contract expires in a year or so, so I'll look for alternatives by then.


> ultra-efficient web-server.

There are a lot of these already. Serving static content quickly is not an interesting challenge any more, most of them will be able to saturate a 10GBE link over a huge number of connections. No, all the interesting effort is in dynamically assembling pages.


While not trying to defend PHP (well, tbh I'm trying, but just a little), there is the Phalcon framework ( https://phalconphp.com/ ) which is written in C and works as a PHP extension. While the rest of the code is still in PHP, at least the framework bits, including a lot of boilerplate, are much faster that way.


The difference is that the company is not paying (and therefore not tracking costs) for client code, so it has less incentive to fix things.

The extra engineers needed to make a Java/C++ backend need to eat/sleep/consume and have their own environmental cost.

$$$ is a good first-order proxy of environmental cost, and lower server costs probably don't make up for engineering salaries.


> The extra engineers needed to make a Java/C++ backend need to eat/sleep/consume and have their own environmental cost.

Engineers are not VMs in the cloud, you don't provision them on demand. They exist as human beings, and eat, sleep and consume anyway.

> $$$ is a good first-order proxy of environmental cost, and lower server costs probably don't make up for engineering salaries.

If you run it through a heavy low-pass filter to remove all the sales-related price shenanigans. And discount salaries, because taking on an engineer at $X doesn't mean you've just increased your carbon footprint by $X.

You could actually see how taking into account salaries might be a cause of the problem. Say you can hire an engineer who'll get you a basic backend working in Python or Ruby for $100k, or an engineer who'll get you that backend in a more efficient stack - say C++ or Java, or even a more polyglot solution with hotspots optimized - for $150k. In the first case, your yearly costs of cloud servers are $20k, in the latter $5k, reflecting the 5x efficiency difference.

From the monetary point of view, you should take the less efficient option - after all, you're getting $35k ahead. From the environmental point of view, your service uses 5x the energy it should. And you're not the only company with this dilemma. If everyone chooses the cheaper option, then the whole segment of industry becomes 5x less efficient than it should be.


Yeah, this is a good analysis, there are a handful of companies where performance savings in critical services will cross those thresholds, but for everyone else it's simply theoretical.

Unfortunately there are a lot of those "everyone else" services out there, so it becomes a death-by-a-thousand-cuts concern.

Obviously from an environmental perspective (if that is the ethical concern in question) there are probably higher priorities


> The difference is that the company is not paying (and therefore not tracking costs) for client code, so it has less incentive to fix things.

The incentive is there whether the company recognizes it or not. A price your customers pay matters to your business.


Pretty broad. If we compare CPU time for the same algorithms running on the same hardware, then PHP is faster than both Python and Ruby.

However, the framework definitely matters here; PHP’s build-the-world-and-destroy-it-on-every-request model makes a big difference.


Most web tasks are completely I/O bound anyway; the significance of the few business-logic decisions is minimal.


I have seen plenty of web tasks that should be I/O bound, but were not because of inefficient coding. It's not the business logic, but the post-processing performed by devs who don't know how to write proper database queries (or designs) that costs power.

My favourite example: slurping an entire category table on each page load and then using an O(n^2) loop to rebuild the navigation tree. On a cold cache, they had request times (server-side) of 7 seconds.

Rewriting to a recursive CTE brought the page load time (client-side) down to below 100 ms, even on a cold cache.
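The fix is a one-query job in most databases. A minimal sketch with SQLite (table and column names invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE category (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
        INSERT INTO category VALUES
            (1, NULL, 'Home'), (2, 1, 'Books'), (3, 2, 'Sci-Fi'), (4, 1, 'Music');
    """)

    # Let the database walk the tree: one query returns every category with its
    # depth, instead of slurping the table and nesting loops in app code.
    rows = conn.execute("""
        WITH RECURSIVE tree(id, name, depth) AS (
            SELECT id, name, 0 FROM category WHERE parent_id IS NULL
            UNION ALL
            SELECT c.id, c.name, t.depth + 1
            FROM category c JOIN tree t ON c.parent_id = t.id
        )
        SELECT name, depth FROM tree ORDER BY depth
    """).fetchall()

    for name, depth in rows:
        print("  " * depth + name)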


I wouldn’t group Java and C++ together like that.

Java is faster than JS but let’s not kid ourselves.


When you can be just as productive as a web developer in C++, let me know.


> When you can be just as productive as a web developer in C++, let me know.

I used to work at OkCupid. The entire web application is written in C++ [1] and I was quite proficient with the language even before joining the company. Then I joined a medium-sized game studio where C++ was also the most common language, and as you can guess we also decided to build the game landing pages and user management system using C++ with Boost and an in-house template system.

I am not OP but you asked to let you know if someone is a productive web developer in C++, well, I am.

[1] https://github.com/OkCupid/okws


Let's skip this and go right to web development in assembly:

https://board.asm32.info/

The author claims that writing in assembly is only about half as productive as writing in high-level languages. But the performance gains are overwhelming, so it might very well pay back the effort.


Assembly? That's for amateurs, we should move right to FPGAs.


To cut this short (own silicon, sand, etc.), we should just create a universe first.

FPGAs are fun, though. Saving 12 microseconds (assuming gigabit and "standard" frame sizes) of latency by sending Ethernet frames (which contain a frame-data CRC!) before you even have all of the data to send, and then modifying the data at the end of the frame to match whatever CRC we sent ~12,000 bit times (= 12 microseconds) before.


That seems extremely interesting to me! Would you care to tell any details about that or is it covered by NDA? I'm guessing that's something HFT related?


This was just for fun (tm).

I know HFT guys pull off tricks like these, but no, this is neither difficult to pull off on an FPGA nor is there any NDA. Easy if you send UDP with checksums off or raw Ethernet frames. The receiving party will of course need to ignore the last 4 bytes needed to make the CRC computation correct.

Would be interesting if anyone managed to do this with TCP. If it's even possible to get both the TCP checksum and the Ethernet frame FEC to match in real time, and ideally somehow even mask the data from the recipient. Probably not possible, but... who knows.


Fair point. I believe productivity is a major factor in most performance issues. Write highly optimized code that's very fast and as small as possible or pull in jquery and a dozen libraries? It's pretty obvious which one will get the job done more quickly.

And once the job is done, optimization often is off the table, because the next features await. I work for clients that are making lots of money with their sites, and not even then are they willing to spend pennies on the dollar to optimize those pages.

Maybe the highest impact you can get is working on most-used parts. Make Wordpress run 3% more efficiently and you'll have an enormous global impact. Make your own site run 3% more efficiently and it won't register globally, unless you're Google, FB etc.


It's just that good developers are hard to find.

Web gurus will tell you that web is slow because the specific sites are written badly.

While Linus has said that C keeps bad developers away from the kernel.


It is a matter of tooling.

Using something like C++ Builder with its Internet VCL component libraries, it is hardly any different from using Java or .NET stacks for web development.


Indeed, there are many factors to consider here. In a sense that is what makes discussions like this interesting!

I suspect that in some respects we already optimize for this as an industry: we might have application logic in Python, but the database is presumably written in a systems programming language.


I've been a professional C++ developer for years and I absolutely would not do this. The security risks are just too high, not to mention the build times.

I'd probably default to C#, which can be decently fast and has a managed runtime.


I'm really glad that's becoming a part of public discourse.

Of course, poor performance is an ethical and economical issue. As for the latter, we've been largely ignoring it in the PC era (as a software guy, you don't pay for the hardware you're wasting, so why should you care?) and now it's starting to be a concern with the cloud.


Thank God _somebody_ is talking about this instead of shutting down every suggestion of speeding things up with (their misunderstanding of) Knuth's famous "performance optimization is the root of all evil" quote.


I think positioning this as an ethics issue will not have the impact that I think it should.

We should really be making the argument on a capitalistic basis. (stay with me here.)

I was at the FT when we were re-designing the site. The page was lighter than the dailymail (but then most things are) but not as fast as skynews. (It loaded within a second, with pictures, on a slow desktop.)

The marketing team really wanted more tracking, and the advertisers also were insisting on inserting _another_ three tracking systems.

So the developers (and product owners) pushed back. The writeup is here: https://medium.com/ft-product-technology/a-faster-ft-com-10e...

There was a clear correlation between load time and dwell time.

Which led to the undeniable conclusion that the tracking stuff cost the business more money than the extra info brought in.

The takeaway from this is that whenever you are making a case for business change, you have to argue that _your_ change will make the business better, using the metrics that the business understands.

The skill is making your moral choice look and smell like a money making/target hitting opportunity.


> 17.6 million kWh of energy to use the web every single day.

17 million sounds like a lot but it's nothing.

In Ireland there are 2,878 wind turbines producing 30% of our electricity, 47 million kWh of energy every single day, more than enough to power the world's internet even if you use the larger numbers in this article.

This "Wasted energy" is a non issue.


Performance is part of accessibility. I hadn't realized this earlier, but now it's clear to me. I think accessibility is much more than what we believe it to be. It's about allowing as many people as possible to access our services. This, in my opinion, is a moral issue, and I think not doing that is just plain unethical. Making a website that's not WCAG compliant, i.e. one that doesn't work well for disabled people, or one that is not available in specific countries or for specific age groups, is bad, but making a website that wastes resources is bad too. If you're doing one kind of accessibility because of moral issues, you should be doing the other kinds too.


The state of web page performance is pretty bad and could be so much better.

In 275 tests of 75 news articles, the average page was 3.6MB, made 345 requests, and took 46 seconds to load (on a ‘3G’ connection using WebPageTest.org).

More info and data: https://webperf.xyz/

Some publishers can load in seconds (with advertising etc) so there is little excuse. We know how to fix this.


Investopedia, the fastest site profiled at webperf.xyz, still places at least 4 image banner ads. So it's possible to support yourself with web advertising and still be fast.


The cost numbers are suspect. The paper states:

> Our major finding is that the Internet uses an average of about 5 kWh to support the utilization of every GB of data, which equates to about $0.51 of energy costs.

Which I read as referring to what's on storage -- a GB of data in a datacenter. Somehow the article runs with this as a GB downloaded by a user, which is so bad it's hilarious.


I get it and I agree, but my assumption has always been that we're all trying to write the most efficient code anyway right? (Maybe with a few exceptions, like intentionally slowing down auth to prevent brute forcing passwords). But this article is saying that if I don't, I'm not just a bad programmer, I'm a bad _person_ too. Hm.


Hopefully that's not the way it comes off! The entire last section of the article is my attempt to make it clear that this _doesn't_ happen because we're bad people.

> So clearly, folks who have built a heavy site are bad, unethical people, right?

> Here’s the thing. I have never in my career met a single person who set out to make a site perform poorly. Not once.

> People want to do good work. But a lot of folks are in situations where that’s very difficult.

> The business models that support much of the content on the web don’t favor better performance. Nor does the culture of many organizations who end up prioritizing the next feature over improving things like performance or accessibility or security.


Overall a good article, but please note that the 5 kWh/GB figure is plain wrong. The source attributes all power consumption of all connected devices to data transfer, and it simply makes up the base figures.

Actual marginal power cost of data transfer is probably around 1/1000 the cited figure. Still doesn't mean wasting data is a good idea.


Or, for our specific purposes: why would I need an expensive device with a higher-powered CPU if the sites and applications run well on a lower-powered device?

Precisely that. We need more human work (optimizing code) and less brainless energy spending, i.e. more jobs and less consumption of non-renewable resources.


It seems weird to me to belabor this point.

The arguments make sense, but these are little teacup arguments when we already have multiple barrel-sized arguments in favor of good performance. (I mean generally -- there could be specific situations with different dynamics -- but we're talking generally here.)


Putting the words “Ethics” and “Web” in the same sentence would already make for a perfect April Fools' joke, but also adding “Performance” is really overdoing it. If I had what the author smoked for breakfast, I might actually die laughing.


> When you stop to consider all the implications of poor performance, it’s hard not to come to the conclusion that poor performance is an ethical issue.

I don't think it's clear-cut, and the article does not make a good argument that "poor performance is an ethical issue".

The two supporting points are roughly:

1. Poor performance will make sites unusable for people without good CPUs / internet

2. Poor performance will waste energy and device lifespan

Point #2 is very weak, because the alternative to slow sites is spending time optimizing them, which may waste human time. It's not obvious what the optimal ratio is to minimize total waste.

In order for #1 to be an ethical issue, I would require it to only or disproportionately affect those without high-speed devices. However, web performance seems to scale decently, so improving a site's performance on high-end machines by 2x probably also improves it on low-end machines by ~2x.


Point #2 isn't weak, because it scales with the number of users. Slow sites waste not only energy and money (through electricity bills and device wear) × number of users, but also users' time, again × number of users. If you want to count human time, then that extra second or three your thousands of users have to spend waiting for your site to load easily adds up to justify getting an engineer to muck around your site with a profiler for a couple of hours to identify hot spots, and then more hours (over the next days) to fix them.
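Back-of-envelope, in Python (the traffic numbers are invented):

    daily_users = 20_000
    visits_per_user = 3
    extra_seconds = 1          # per page load, vs. the optimized version

    wasted_hours_per_day = daily_users * visits_per_user * extra_seconds / 3600
    print(f"{wasted_hours_per_day:.1f} person-hours of waiting per day")   # ~16.7

One day of that already dwarfs the couple of engineer-hours spent with a profiler.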


I wish the "human time" ethical cost was factored in elsewhere besides just software. Things like: redundant forms at the doctor's office; useless fences that prevent you walking the shortest distance; time-consuming preambles when you call any customer service number. Imagine if the amount of time wasted was instead converted to a single number, divided by the average lifespan, and the corresponding number of people struck dead.


The argument for users time is a good point, but I don't think the author was intending that. My point was just that it was not sufficiently argued to be obvious.

However, even that does not make the case that web performance is an ethical issue. It would only do so when the number of users is much greater than the amount of effort it takes to make a site.

If I make a site that I expect 1000 people to look at, then the performance of that site isn't really significant. And not an ethical issue.


There's a categorical imperative aspect to this: a site with 1000 daily visitors (not necessarily unique) is not big enough to make its performance a significant issue. But if all such sites think like this, then it suddenly becomes an issue.


Writing code with performance in mind will get you there without any optimizations whatsoever. It's not about a 12% speed-up anymore; we're not living in the '70s. It's about not wasting orders of magnitude because of poor design choices.


Until there are clear and real incentives for performant code, I think this is largely an academic exercise.


Yes, that's true.

There are clear and real incentives; they are just more visible in the cloud, where you pay for your hardware. Or in traditional workstation software, where there's also market competition: "do that 10 times faster than the competitors and we'll talk".

It's true that with the web the incentives are weak. Sure, you'll lose some 10% of your potential visitors to frustration, but who cares when the traffic growth expectations are exponential?


> There are clear and real incentives; they are just more visible in the cloud, where you pay for your hardware.

Unfortunately not clear enough. Compute is cheap enough that a company can easily run 10 or 100× less efficiently than it could with a little extra work, because the cost barely registers when compared to revenue or salaries.

(Even working in a small company I've seen situations like someone asking, "are you folks still using the XYZ VM, because it's burned a kilodollar this month", and we're all like "a what machine? sure, shut it down". It was needed once, then forgotten about, and it took a while before anyone noticed.)


Your point about #1 doesn't make much sense. Sure, performance might scale the same on both low and high end devices, but two things come to mind. First, there are absolute thresholds of usability: doubling the performance might not mean much for the high-end user, but it might mean the difference between something tolerable and intolerable on a low end device. Second, that's all well and good if we're only measuring CPU time. If memory becomes an issue, excessive memory usage will lead either to: 1. complete failure of the website, or 2. going into paging/swap, which would very much give you the disproportionate effect you were asking for.


> There are absolute thresholds of usability

I don't think this is true; for slow websites (taking >6 seconds to load) the effect is mostly linear.

For faster sites going from 1s to 0.5s will be less impactful than going from 6s to 3s. So there are non-linear effects from a psychological perspective, but I think those are mostly relevant at faster timescales.

> difference between something tolerable and intolerable on a low end device.

The only case where I have had things reach intolerable is when the network cannot actually send the data without faulting. Loading for 3 minutes is not actually intolerable.

> If memory becomes an issue, excessive memory usage will lead either to:

I don't think most single webpages will overload a low-end system's RAM. I think the lowest usable system RAM you can have is ~0.5 GB, and most sites fit within that.


> For faster sites going from 1s to 0.5s will be less impactful than going from 6s to 3s. So there are non-linear effects from a psychological perspective, but I think those are mostly relevant at faster timescales.

I don't see how you can dismiss the psychological perspective. These are real people using the websites. If they weren't, the premise wouldn't have been raised in the first place.

> The only case where I have had things reach intolerable is when the network cannot actually send the data without faulting. Loading for 3 minutes is not actually intolerable.

What you've had? From what perspective is this being raised? What about SPAs, where you aren't loading the whole website because of lazy loading, but where that lazy loading has overheads beyond network time?

> I don't think most single webpages will overload a low-end system's RAM. I think the lowest usable system RAM you can have is ~0.5 GB, and most sites fit within that.

Could you please clarify whether you mean 0.5 GB of total system RAM, or 0.5 GB left over after system overheads? Consider these: https://gadgets.ndtv.com/jio-phone-4255 or https://en.wikipedia.org/wiki/Nokia_8110_4G - both have 0.5 GB of system RAM, and having the latter, I can say with high confidence that the main system is slow; I don't believe for a second that it is somehow only bogging down the CPU and not using RAM.

In the developing world, mobile devices are often the only computing device people have, which is why there is even a market for a phone as lackluster as the Jio Phone. The issue, as other commenters have pointed out, is not the core website, but rather the add-ons that are thrown in because every marketing department and their dog needs to have their own code run on the site. No matter how simply the site is built and how well the UI is optimised, these things will cause problems.


This is a pretty poor refutation of the thesis that web performance is an ethical issue. Your argument is that both points in the article are unimportant or outweighed by other factors. However, by making that argument, you are already tacitly accepting the ethical framework the author laid out.


> Point #2 is very weak, because the alternative to slow sites is spending time optimizing them which may waste human time.

Ehhh, the easiest way to get fast sites is combining a fast language and a fast framework. There's something like a 3-orders-of-magnitude performance difference between, say, Django and Vert.x.

I've seen code slow enough (PHP, 10 req/second) that running enough capacity consumed something like 15% of the engineering budget. That could easily have been 0.1%.

Since developers don't care about hosting cost most of the time, they keep using horrifically inefficient frameworks. I rarely see super slow application code, because people get called out in code reviews for doing really stupid stuff. It's always the slowness of frameworks that's been a problem, IMO.
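To illustrate the kind of arithmetic behind that 15% figure, here's a rough sketch; every number below is an assumption for illustration, not the actual project's:

    # Rough capacity/cost math for a slow vs. fast stack.
    # Every figure here is an assumption for illustration.
    monthly_requests = 100_000_000
    avg_rps = monthly_requests / (30 * 24 * 3600)   # ~39 req/s on average
    peak_factor = 3                                  # assumed peak-to-average ratio
    cost_per_server = 200                            # assumed USD/month per instance

    def monthly_cost(rps_per_server):
        servers = -(-avg_rps * peak_factor // rps_per_server)   # ceiling division
        return int(servers), int(servers) * cost_per_server

    for rps in (10, 1000):   # e.g. a slow interpreted app vs. a fast compiled stack
        n, cost = monthly_cost(rps)
        print(f"{rps:>4} req/s per server -> {n:>2} servers, ${cost}/month")

In this made-up scenario the slow stack needs roughly a dozen servers where the fast one needs one; real numbers will obviously differ.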


Don't know why you're downvoted - what people need to do is embrace statically rendered sites that pull from a more traditional back end.


Not everyone agrees with that. A lot of people think the productivity of writing code in their favorite language X makes up for the added hosting cost. I just don't believe it, given how many choices are available now. Maybe that was true 5-10 years ago. Now we have a good 5 languages with maybe 40 frameworks that run less than 3x slower than C. That's really, really fast. Enough that you could run anything below the top 500 sites on a single machine.

A static, CDN-hosted React/Angular SPA with a speedy backend can serve a stupid amount of traffic even on the AWS free tier.


As an enterprise SEO with over 15 years' experience, I don't really give any F&%!'s about your favourite language.

What I need is something that is faster than the competition, is properly built to minimise usage of our crawl budget, is flexible enough to add new content quickly, can properly handle site migrations, and can implement hreflang for, say, a dozen locales.
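The hreflang part, at least, is mechanical once the URL scheme is settled. A minimal sketch; the locale list and URL pattern are hypothetical placeholders, not a recommendation:

    # Emit hreflang alternate tags for a dozen locales.
    # Locale list and URL pattern are hypothetical placeholders.
    LOCALES = ["en-us", "en-gb", "de-de", "fr-fr", "es-es", "it-it",
               "pt-br", "ja-jp", "ko-kr", "zh-cn", "nl-nl", "sv-se"]

    def hreflang_tags(path, base="https://example.com"):
        tags = [f'<link rel="alternate" hreflang="{loc}" href="{base}/{loc}{path}" />'
                for loc in LOCALES]
        # x-default points search engines at the fallback version
        tags.append(f'<link rel="alternate" hreflang="x-default" href="{base}{path}" />')
        return "\n".join(tags)

    print(hreflang_tags("/pricing"))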


Reasonable requirements. It's scary to think that the majority of developers don't know anything about performance. The majority I talk to don't even know what HTTP/2 is!

Super popular web frameworks like Flask, Django, and Rails don't even support it!!

That's why choosing the right language and framework is super important from the beginning. It's easy to choose something slow as molasses and get left in the dust with constant performance problems.
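If you want to check what a given site actually negotiates, here's a quick sketch using the third-party httpx client (pip install 'httpx[http2]'); keep in mind that in many deployments the protocol is terminated by a CDN or reverse proxy in front of the framework, so this measures the deployment rather than the framework itself:

    # Check which HTTP version a site negotiates.
    import httpx

    def http_version(url):
        with httpx.Client(http2=True) as client:
            return client.get(url).http_version   # e.g. "HTTP/1.1" or "HTTP/2"

    print(http_version("https://example.com"))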


disagree with the premise of the article.

performance is a technical issue, not a moral or ethical one.

the experience that is derived from poor performance might be classified as a moral/ethical issue (emphasis on might - a lot of apps use 100% of the cpu but your experience isn't affected, so it doesn't matter; in other cases your experience is affected, so it matters), but poor performance by itself, i think not.

i might be wrong on this, but i didn't find the article compelling enough to change my mind. any other opinions that might change my mind?


The ethics come from exclusion.

If a reasonable device (i.e. cheap & modern) is unable to load said website, then its owner is stopped from using that site. If you are "poor", to use a term, it might be that your phone is your only way of interacting with said service (libraries and other places are not an option due to the cost of getting there, or lack of time).

You are effectively saying that you must be this rich to use our stuff.

Now, if you're Bentley, Rolex or similar, that exclusivity is basically your whole business model. But if you're a utility with a monopoly, then it's morally dubious, especially as things like paying bills over the phone or by post incur an extra cost.


my argument is that performance doesn't matter if user experience isn't affected.

if it is, then sure, performance becomes a moral issue.

but if there is no effect, then poor performance simply becomes a technical issue.


Sure, but the entire article is about how experience is widely being affected by poor performance.

I'm not sure what point you're trying to make beyond pedantry.


poor performance is not a prerequisite for a bad experience. in reality it's the combination of UI & UX that contributes the most to bad experiences, not poor performance.

so don't focus on poor performance when there is no effect. and even when there is a marginal effect, UI & UX matter a lot more and could even negate the poor performance aspect.

so my point: UI&UX > performance. almost every single time.

all of this is based on various analyses i have done for a whole slew of clients, dating back 10 years.


_The_ tangible example put forward is that it takes 60 seconds to render a page on a phone that is only a year old.

If you wish to argue the semantics rather than the substance of the matter, I'll take that as your tacit agreement that if you can't use a website, that's generally bad.


This is not an ethical issue; this is a capitalistic opportunity. The company that starts writing effective, clean, and fast websites will gain more traffic = more money.

There's nothing more frustrating than waiting for a website to load, only to have the button you know you want to press suddenly get pushed down 5 cm by some interactively loaded DOM element.


There are lots of other problems piled on web development that are strictly not its fault. The aging network infrastructure sometimes hasn't seen updates in decades, despite billions and billions in tax money poured into it.

You really should be able to get more than 3 Mbit/s in 2019. Pictures should be allowed to be more than a handful of kilobytes in size.

Data caps are a ginormous money grab with no basis in technology. Abolishing corrupt corporate brotherhoods would do much more good than adding yet another band-aid compression/surveillance point to the mix.


You can't easily improve the latency. My latency to the EU is 100 ms, my latency to the US is 250 ms, so a request-answer cycle is 500 ms. Make two request-answer cycles and that's already 1 second. And TCP already has some cycles of its own. Yes, I have 100 megabits (which is not easy to utilize, because of the TCP algorithm), but that does not help. So it's not only about size.


The number reported by tools like ping is round trip time -- it already includes the return trip.
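Either way, the broader point stands: on a high-latency link it's round trips, not bandwidth, that dominate. A rough sketch of that model (the round-trip counts and sizes below are assumptions):

    # Latency cost vs. transfer cost for a page load over a distant link.
    # All values are illustrative assumptions.
    rtt = 0.25                 # seconds, round-trip time
    bandwidth = 100e6 / 8      # 100 Mbit/s in bytes per second
    page_bytes = 2e6           # 2 MB of resources

    # Assumed: 1 RTT for TCP, ~2 for a TLS 1.2 handshake, 2 request/response cycles
    round_trips = 1 + 2 + 2

    print(f"latency cost:  {round_trips * rtt:.2f} s")       # 1.25 s
    print(f"transfer cost: {page_bytes / bandwidth:.2f} s")   # 0.16 s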


You can make a rich, animated, and engaging website for less than 200 KB.

Most devs have big stonking MacBook Pros with decent network connections, so to them a 4.5 MB wedge of JS seems like a perfectly acceptable tradeoff.

Much as US infrastructure is crumbling, you can't just say "Nah, I'm not going to sacrifice my time to make things fast for the end user. I need these 1,400 modules and full-motion video."
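One way to keep a team honest about this is a build-time budget check. A minimal sketch; the dist/ path and the 200 KB budget are assumptions for illustration:

    # Fail the build if shipped JS+CSS exceeds a size budget.
    # The dist/ directory and 200 KB budget are illustrative assumptions.
    import sys
    from pathlib import Path

    BUDGET_BYTES = 200 * 1024

    def bundle_size(dist_dir="dist"):
        return sum(p.stat().st_size for p in Path(dist_dir).rglob("*")
                   if p.suffix in {".js", ".css"})

    size = bundle_size()
    print(f"shipped JS+CSS: {size / 1024:.0f} KB (budget: {BUDGET_BYTES // 1024} KB)")
    sys.exit(1 if size > BUDGET_BYTES else 0)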


You definitely can, and that is being done at the moment. My reasoning is that you could be doing so much more with much less effort and cost if the requirements weren't as stringent. We should progress on all fronts, not simply pile everything onto one.



