The author complains that developers argue about their tools and methods, insist on their favorite techniques, but produce (more) worse apps, particularly given the amount of time and effort expended on developing them.
The thing is: the debates aren’t naive. They’re exactly trying to work out the details.
When someone says, hey forget postgresql and just use SQLite, and someone else says good luck if you hit n rows, and someone says it can easily handle n+m rows with x memory …
Well they’re trying to solve the problem of (more) better apps with their limited resources. That’s essentially the problem of development.
Asking people to solve development without discussing the tools and methods is to misunderstand the nature of the discussion.
It’s like telling authors to get on and write better novels and not waste time with their opinions on character and plot.
> When someone says, hey forget postgresql and just use SQLite, and someone else says good luck if you hit n rows, and someone says it can easily handle n+m rows with x memory …
This is a good example of how devs get things wrong. The argument about which database to use because you might hit the row limit has zero benefit to the user. It's devs wanting to avoid future work migrating from one database to another, a problem most apps will never actually see.
Maybe most apps never see the problem of hitting the row limit because the devs spend time beforehand thinking about which limitations they're likely to hit.
It's definitely possible, but the limit is 18,446,744,073,709,551,616 rows, or 281 terabytes, whichever the database hits first, so you probably could put the discussion off until year 2 or 3 if you really had to.
Kind of like how maybe the 'Send your friends money, divide the restaurant bill easily!' app that founders at first think will change the world could get really popular if it gets on the front page of HN..
I have a feeling you will run into other problems that make it likely you'll have to migrate to a different storage system long before you hit that limit.
I work on such a system and one of the things we do is figure out which of the sensor data to throw away. In most cases the change between measurements is too small to be worth saving, and because we have less data saved we can analyse it faster and thus do something useful with it. The purpose of all that data isn't to fill storage, it is to allow making useful decisions based on it. If you can't make a better decision based on sensor input then don't save it.
Long before you hit any row limit you will have performance problems because you have far too many rows to process in a reasonable amount of time. If you think about this problem and ensure you don't have too much data to process, the row limit is so absurdly high you will never hit it.
If you choose a 32-bit integer for your PK it can only address about 4 billion records. That was true 1000 years ago, and will be true forever until a black hole swallows everything and changes the laws of reality.
Forget how many angels can dance on the head of a pin, the real question is can God change the value of pi, or can God make 32 bit integers larger than our current max? In both cases what it means to math is more interesting than any yes/no.
Choosing to do nothing is still choosing to do something. In choosing to do something, one makes an informed choice. How informed that is is up to you. Row limit may be one, or performance or feature set may be another - say, choosing Postgres for some data types and easier indexing.
Making informed choices is of benefit to the user. How much benefit depends on one's opportunity cost of doing X instead of Y. Choosing not to do X is still a choice, an informed one or an uninformed one.
Or to step away from the choice of database, take privacy. I see an immeasurable number of things on Show HN that take user data but don't have a privacy policy; if they have one it's often copied and pasted from somewhere to placate Mailchimp or another 3rd party service, and very rarely is it actionable.
Not launching a product is of course problematic for a user. Launching one that doesn't conform to the norms and rights a user expects will also be problematic. Should we not launch? How informed should your choice be? California state law, GDPR? How about Malaysia, or South Korea?
There are rabbit holes one can dive into. There are reasonable lengths to dive, of which zero should not be a default case. At least have a look inside. Solving a problem isn't rushing something out full of uninformed choices.
> There are reasonable lengths to dive, of which zero should not be a default case.
The fact you're even aware that there's an alternative to choose means you're already slightly informed[1]. Being a bit more informed so that you know a solution will actually work is fine. I am not suggesting you shouldn't look at alternatives. I'm saying you should look at alternatives from the perspective of solving the problem you have rather than arguing about pointless trivia that won't ever impact you or the app you're building (like row limits).
[1] Every web dev who used nothing but MySQL for 15 years should be crying now.
> from the perspective of solving the problem you have
Completely. A little planning goes a long way, and helps define what problem is being solved. This is a particular peeve I have with the lauding of pivoting. The plan didn't work out and you're now doing something else? That's great. At least something was gained and you're looking forward. But don't design for it. In other industries this goes by the perhaps less jargony terms salvage or wasteful production. But putting salvageable and waste in the description of a product solution doesn't sound as nice, nor should it.
Web devs are typically arguing about which produced assembly is better. Granted, we have come a long way, as the typical React site gave up on terse HTML a long time ago.
But, consider, Dreamweaver and early design tools were ridiculously advanced compared to what most of us are using to create web sites today. Flash tooling was a whole other level.
This is assuming that software quality is purely a function of the software engineer’s mind and creative ability, almost like art or writing.
Many - most? - software devs know multiple ways they could make their apps higher quality, but they just don’t have the time or ability right now, or they have other priorities, or so on.
Better tools can, and do, enable engineers to simply do more than they otherwise could, and sometimes “more” actually does mean “better.”
You're making another important point.. Native platforms have had fairly good tools for making GUIs, since like.. what.. 1991 ? Simple nice tools where you define your windows and drag the widgets onto them where you want them to be.. then hook up code to it... Web.. it's still stuck in the 80's in all the worst ways.. using a f*cking typewriter to define your UI.
Life got more complicated since the introduction of smartphones and tablets - it's probably an order of magnitude harder to do a decent mobile app than a desktop one:
1) you have less screen real estate and have to split some forms into multi-page wizards (on desktop, if you lack space you just pop up yet another window wherever it fits)
2) you have to support different screen orientations (portrait, landscape)
3) you have to support corner cases like notches or foldable screens
4) you have to support different screen sizes and different screen aspect ratios (on desktop many apps still don't allow resizing and it's OK)
5) you don't have precision input such as a mouse, where you can hover over some small button and see via a tooltip what it does
6) you don't have a physical keyboard, but a virtual one that, when it appears, eats almost half of your already limited screen real estate and covers other form controls (see the sketch after this list)
7) the mobile OS limits what you can do in the background and how much memory you can use
8) the mobile OS doesn't let you simply drag, resize, and switch windows using the native UI
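Just to make 2) and 6) concrete on the web side, a rough sketch in browser JS (the relayout helpers and the 0.7 threshold are made up; matchMedia and visualViewport are standard browser APIs):

    // React to orientation changes and detect the virtual keyboard.
    function relayoutForPortrait() { /* e.g. stack form fields in one column */ }
    function relayoutForLandscape() { /* e.g. use two side-by-side columns */ }

    const portraitQuery = window.matchMedia('(orientation: portrait)');
    portraitQuery.addEventListener('change', (event) => {
      if (event.matches) {
        relayoutForPortrait();
      } else {
        relayoutForLandscape();
      }
    });

    // The virtual keyboard shrinks the visual viewport rather than the window,
    // so watch visualViewport to know when it is covering your form controls.
    window.visualViewport?.addEventListener('resize', () => {
      const keyboardLikelyOpen =
        window.visualViewport.height < window.innerHeight * 0.7;
      document.body.classList.toggle('keyboard-open', keyboardLikelyOpen);
    });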
Yes, this puts me solidly in the niche, I couldn't care less about mobile or tablet.. They're fine devices for.. I don't know really, making calls and scrolling facebook.. But for anything with writing involved, or.. input in general.. for any period of time.. I'll never be that hip, and I get that I'm "in the wrong", but that does not change my opinion that what the world needs is more stationary workstations with large desks, multiple large screens and good keyboards and mice.
I know it's not how the world works, but I wish, if we could just show "them" how good the programs can be, how efficient the user interfaces can be, how awesome it is to use a real proper computer with real proper computer programs, instead of the toys on tablets... But I know I'm in the wrong.. I know it's "just me" and everyone else enjoys their glass plates. grumpy old man shows himself out
I'm fairly sure you're preaching to the choir here
Worrying about making it work in mobile is an external requirement that arises due to the world we live in, not because of funsies. If someone could say "fuck supporting the shitty options" without bleeding a metric-fuck-ton of money for it, they would
I assume you mean any significant amount of time. I often write something quick on mobile, which works okay. However, if it is more than a sentence or two, give me a keyboard. The mobile is with me all the time, though; my nice desktop with a full sized keyboard, large monitors and all that is back at home. Thus the compromise of often using mobile to write even though it is overall a worse experience.
Wouldn't be as bad if making a GUI in a webapp was more intuitive to begin with. But CSS and HTML have a decent number of unintuitive quirks you need to experience and mess around with before it 'makes sense'.
Then the best most can do is two views (mobile and desktop) which adapt to various widths in an obvious way, using loads of whitespace as a solution.
What makes you say that better tools don't make better apps? A screwdriver changes the task of removing a screw from "extremely hard" to "extremely easy". In the same way, a web framework makes writing a secure web server substantially easier than starting from scratch.
This is surprisingly not true, have you tried?
When you start from scratch you realize you have a tiny, minuscule surface, and securing that very small hole is much easier than, say... fixing the log4j 20th-level transitive dependency of your giant Spring Framework that was secretly able to interpret random input from logs or something.
Especially if all you wanted was to show some quick HTML output. It's incredibly common to see giant dynamic web stuff used for tiny static output that could have done without it.
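For illustration, the "from scratch" end can be as small as this sketch (Node's built-in http module, arbitrary port, nothing else interpreted):

    // A no-framework server: the entire attack surface is this one handler,
    // and it only ever returns a small static HTML page.
    const { createServer } = require('node:http');

    createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });
      res.end('<h1>Hello</h1>');
    }).listen(3000);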
I've done both on large-volume websites and I find now that using as little "framework" as possible and a limited number of "libraries" works better in terms of cost, bugs, dev attrition, implementing what the business wants, and speed of render for the user.
Picture a giant mess of node dependencies surrounding a central giant quirky framework like VueJS. 5 years of dev, all devs left. You need to maintain that; is that going to be easy? Wasn't for me.
It's why, when I have a greenfield project, I do vanilla web components, with a little lit-html sprinkled in for fast rendering.
I've seen so many developers complain about how complex vanilla web components are, and they'll turn around and do React / Vue / Angular and Redux or (shudder) rxjs, etc...
I just don't get it.
Like, if you need reactivity... Stick a render call into the setter of a plain old property.
If you don't, use a variable.
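Roughly like this minimal sketch of a vanilla custom element (all names here are made up):

    // "Reactivity" is just a render call in a setter.
    class CounterBadge extends HTMLElement {
      constructor() {
        super();
        this._count = 0;
      }

      get count() { return this._count; }
      set count(value) {
        this._count = value;
        this.render(); // the entire "reactive" part
      }

      connectedCallback() { this.render(); }

      render() {
        this.textContent = `Count: ${this._count}`;
      }
    }

    customElements.define('counter-badge', CounterBadge);

    // Usage: document.querySelector('counter-badge').count = 5; // re-renders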
The whole computed() thing with Vue makes things so complex... I have to deal with a huge rabbit hole of that stuff all the time.
I think to myself... I could just use a property, and I wouldn't need a whole painful build step, type definitions, framework version specific DSLs, massive complex source maps that don't work, Rube Goldberg state machines, crazy pseudo DOM elements that aren't really DOM elements, unless they are, or a ref to something magic that might end up being a DOM element someday, obscure dependency injection cruft that's done way better by just using dynamic imports...
I remember when I couldn't wait for Proxy to be widely supported by browsers.
Now I'd do anything to be able to shove that toothpaste back into the tube.
I don't know.
I don't see any of this providing any value.
I feel like the emperor has no clothes, but everyone else thinks he's wearing Armani.
I am in front end webdev, and a lot of the time I have had design colleagues come up with crazy paid solutions for something as simple as a pop up. When I overheard and mentioned I could implement that in 2 hours tops, it was a bit of a no brainer. The pure design folks often don't know the tools or what's possible.
To this day I just do mysql/php/js and simple ajax calls for a lot of solutions. Most clients are in the 100k-500k hits range and it works fine.
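The pattern is roughly this kind of thing (the endpoint and element id are made up):

    // Fetch an HTML fragment from a PHP endpoint and drop it into the page.
    async function showPopup() {
      const response = await fetch('/popup.php');
      const fragment = await response.text();
      const popup = document.querySelector('#popup');
      popup.innerHTML = fragment;
      popup.hidden = false;
    }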
A large "ecosystem" is touted as a great advantage of some frameworks, but the reality is that it's mostly unmaintained low-quality libraries that could be replaced by a few lines of easier to understand code. "The Ecosystem" is about quantity over quality, all the way down. And when those unmaintained libraries break, you're also in the hook for fixing them, but it takes way more time and coordination than fixing it if it were your own code.
So just because you can remove screws more easily you build better shelves?
You can build bad apps with good tools and good apps with bad tools. Just because it gets easier for the web dev doesn't make it easier for the user, and vice versa.
> So just because you can remove screws more easily you build better shelves?
Yes, with a high probability. Try to put all your screwdrivers away and then try to build a shelf without them. Not only will you need lots of time just for fastening the parts that you cannot spend on thinking on a more optimal layout of the shelf, making the result worse -- you will also judge every layout idea on how hard it will be to build without a screwdriver, instead of judging it on whether it is useful as part of the final shelf.
> You can build bad apps with good tools and good apps with bad tools.
This is a (probably unintentional) strawman. The argument was that tools make the task substantially easier and therefore improve the quality of the end result, not that they make it possible at all.
> Just because it gets easier for the web dev doesn't makes it easier for the user and vice versa.
Again, the argument was that tools make it easier to build a good product, not that tools themselves guarantee a good product. Of course with good tools you can build a bad product, and you can even build it easier than without good tools.
The argument "tools make the task substantially easier" was already a strawmen because the article is about bad UX with existing apps.
And web devs already have "screwdrivers" but the apps don't get better.
Maybe it isn't the tools, because there are great apps on the web, some of them even without many tools.
Maybe it's time to stop blaming the tooling and focus on the developers and their environment.
I'm not a bad painter because of the wrong brushes, I'm just not good at it (yet).
I don't think the argument was "screwdriver" vs "no screwdriver". It was about some tools being better than others.
Everyone in web dev uses tools of some sort. The argument might be that some tools are so advanced, it's like using a robot arm for tightening a few screws. Some devs will embrace robot arms, others will use a screwdriver.
>So just because you can remove screws more easily you build better shelves?
probably you build the same quality shelves slightly faster. Being able to build things faster allows you to learn faster, and by not spending mental and physical energy on the relatively stupid task of removing screws you have more mental and physical energy to focus on things that maybe actually help you build better shelves.
A lot of the badness of the apps, however, is somewhat business driven - dark patterns etc.
you're making a point here, and the counterpoint is that building good apps with bad tools requires extra effort that costs money the business owners don't have, because the market won't support it (i.e. nobody is willing to pay more for higher quality stuff - you need to figure out how to make high quality easier)
But the whole point of the article is that the apps are bad no matter the tools.
One should first know how to produce high quality in the first place before thinking about how to produce it more easily.
All the brushes in the world won't let me paint like Leonardo da Vinci.
I don't disagree, but quality tools are about raising the floor: I haven't had to worry about a ceiling falling on me my whole life because fastener technology is very good
I specifically mentioned screws to highlight the fact that it's a shelf with holes for screws already there. Either way, using glue would remove the possibility of disassembling the shelf for moving, which is a downgrade from my perspective
In the history of construction, better tools have not improved quality of output. People in the ancient era made amazingly fine furniture. The finest furniture made today still uses techniques similar to the ones used 300 years ago.
Similarly, I doubt that better web development tools improve the quality of apps. It likely just makes them cheaper.
There are good tools and bad tools, we just tend to forget the ones we depend on the most. Let's try to write a web page with concurrent access without a database, or let's try to write a web server using C or Assembly and no libraries, because we don't want to rely on tools.
Our industry is built on layers and layers, every layer is a tool to the next layer, how can we claim better tools don't make better applications?
Sure, it does not guarantee it, because what the tool does is raise the baseline; however, raising the baseline gives you time to push the top higher.
Let's take the web server example: The most commonly used web server is nginx. Written in C (possibly C++ now?), with no libraries except Linux headers.
The layers are there to make developers more productive, not to make code quality better. In fact, layering is probably the most expensive thing that computers do today.
Let's do the music one: how an instrument feels (to you) is intricately linked with how well you can express yourself musically.
I guess there are two different view points here, those who think the things used to create something have no bearing on the essence of the thing that is being created and those who think that you cannot divorce the two.
Isn't that just a discussion about platonism in disguise?
To a point. I don't need to care whether you wrote your novel on Word or a manual typewriter, by the time it gets to my Kindle or a printed paperback that doesn't matter.
On the other hand, if your web app takes forever to load because of a huge React payload or poor server performance, I do care.
I can't tell you how many snappy, easy-to-use React sites I've used (I was busy using them, not reviewing their code), but every single time I've said "What the hell is going on, I'm just trying to [simple operation which is the only purpose of the website I'm on]" in the last five or so years, it's some React/Angular/SPAFrankenstein monstrosity.
The framework becomes part of the house, or cupboard. A screwdriver does not become part of the house.
That is the semantic difference between tools and frameworks.
This doesn't mean one cannot ever use a framework as a tool, or a tool as a framework. But I'd argue in those cases it stops being a framework or a tool, respectively, and starts being the other.
Maybe the word 'tool' is just not adequate and there is no 1 to 1 equivalence here. If you come from woodworking, the saw or drill doesn't end up being part of the table you build, whereas the framework code is part of the actual product you build. Or in other words, if you remove the framework after you programmed your solution, the solution stops working, whereas if you remove the drill and put it back into your bag the table still stands. I would argue there is a categorical difference at play here.
So maybe words like tool, scaffolding (and even framework) are just not entirely up to the job of describing what a 'software framework' is and ultimately lead to confusion.
Let's say you use a basic HTML Golang app backed by SQLite and replicated to a secondary with Litestream, all running on cheap bare metal ARM servers, and therefore your website is simple, usable, never goes down, and costs $20/month to run. I have no idea if this is actually good, but let's agree that there's some idealised system out there that would be a lot better than what we have now, even if it doesn't look like this one.
Let's also say I use React and Redux backed by a CQRS Express app running on MongoDB and Kafka, all hosted on AWS, and (for the sake of the argument) let's say this causes my website to be broken in bizarre ways, down all the time, and costs $2000/month to run. This is definitely an exaggeration and some of these technologies are great, but let's agree that there are some hellishly over-engineered systems out there, even if they don't look like this one.
I think the argument is that what matters here is having an example of a $20/month uncomplicated beautiful system to point to. Discussing the technologies or patterns involved without building anything and especially without building a complete working product with your favourite elegant tool means everyone will use the broken $2000/month system as an exemplar in your field, because they have nothing better to point to.
>Discussing the technologies or patterns involved without building anything and especially without building a complete working product with your favourite elegant tool....
But surely, if anywhere, this is the place to talk to people who have built complete working systems, are building them, and intend to build more of them. This isn't a forum for a bunch of undergrads chatting about stuff they just learned in college. I'm sure there are some of those here, but if you're going to find seasoned web engineers anywhere, this is a good place to start.
> Let's say you use a basic HTML Golang app backed by SQLite... therefore your website is simple, usable, never goes down.
That's a complete non-sequitur. If the app becomes hugely popular, needs to serve massive volumes of data and sync across thousands of instances, well, it won't. You can't say that because it uses simple tech therefore it will be wonderful, that's not how that works. There are use cases where the MongoDB + Kafka thing will work much better. That's what these discussions are about.
> If the app becomes hugely popular, needs to serve massive volumes of data and sync across thousands of instances, well, it won't
This argument is so overused for justifying any kind of complexity that it would border on comical if it weren't causing so many issues in our industry.
There is absolutely no guarantee that the MongoDB + Kafka stack will also work with the hypothetical "massive volumes of data" you're talking about. Those two tools aren't set-and-forget: they require fine tuning and a good architecture to begin with. Ironically, I've seen plenty of cases where MongoDB couldn't handle the production load and had to be put behind some Varnish or Nginx disk cache (bringing it closer to the SQLite app example).
But we don't even have to go that far: depending on the situation, there is a huge chance that the hypothetical app won't see "massive amounts of data" anyway. And if it does, there's also a massive chance that it won't happen overnight, so there will be time to build more. And if we get there, the SQLite app can just be bumped up to PostgreSQL, which scales vertically beautifully, or some other relational DB that scales horizontally.
Right, but if you're scaling it up to meet changing requirements, well, that's now a different use case. How do you do that scaling up effectively? That's where discussion here comes in.
I'm not at all arguing that you need MongoDB and Kafka, those are not my examples, but pretending that SQLite and plain HTML are the be-all, end-all is bizarre. OK, you're a big fan of PostgreSQL. Cool. Now we're talking. In what situations and why is a useful discussion to have. That's grounds for fertile and relevant architectural debate, and here is a good place to do that. Arguing that isn't the case is, well, ok... problem solved then. I suppose there's not much more to talk about.
I don't think GP is claiming that SQLite is the "be-all, end-all" for all cases.
But it definitely is for when a website doesn't need to scale, which are the vast majority of cases. Disks (or rather SSDs + memory backed cache) are extremely fast. For cases when it doesn't work anymore, there's an easy migration path to whatever relational DB that I mentioned.
The argument isn't that SQLite is the "be-all, end-all", it's more that MongoDB, Kafka and other tools are the opposite of it: they don't hit a sweet spot for small sites (and often don't for growing ones, or big ones either), so there's not much point in starting there, as you'll only get problems and might hamper the website's growth.
It doesn't need to sequitur. You disagree with the (unimportant, arbitrary, and probably wrong) choice of "good" technology, for the (valid, fair, and important) reason of scalability. This is fine: whether a specific technology is fit for a hypothetical purpose is irrelevant to what I believe the article is saying.
I don't read the article as saying "never talk about the technology behind scalability on HN or anywhere else," I read it as saying "people learn scalability from exemplars, so the way you improve overall scalability is by finding better examples, not discussing what's already out there."
The reason we have use-cases of Kafka scaling is because LinkedIn[0] needed durable distributed pubsub, so they built it. When the linked article was written it was handling 500 billion events a day. That was kind of cool, because 500 billion is a big number, but also kind of shocking, because that was 68 daily events for each human being alive at the time. Perhaps in 2022 with the hindsight of the last decade, someone with different problems and priorities could approach scalability from a different angle and come up with something else that works better for them.
Everyone is going to miss the point here and go down rabbit holes about various technical bits, and that is exactly the point: developers spend lots of time debating everything.
The limited resources really don't matter. The tech choices really don't matter. Pick a prominent MVC-based web framework, an RDBMS, some front-end libraries and go build the thing. Lean towards picking stuff the team has experience using. Then if the web app gets heavier traffic, horizontally scale the web tier and vertically scale the DB to your company's budget. If that amount of scaling doesn't suffice - do the research as you approach that level of traffic, not when you have zero users.
When it comes down to standard web apps there are known patterns and known solutions out there to solve the majority of the problems. Reaching for a new/shiny thing to use is fun, learning new development techniques is fun, but it's rare either of those is going to cause a fundamental shift in solving problems with web apps. We have been using (abusing?) HTTP/HTML/JS/CSS for 25+ years. For small web apps I bet people could solve things with simple Perl/PHP cgi-bin types of scripts today with current versions of HTML/JS/CSS, albeit more securely than in 1999.
To use a building-a-house analogy, sometimes it's like all the carpenters are standing around for days talking about which tool is better to frame the house: one uses less nails, one is faster, one is a nail with wood glue that will expand at the joint and provide less squeaky floors (I made that up). None of it really matters, you're being paid to frame a house, not do research - just build the thing.
> To use a building-a-house analogy, sometimes it's like all the carpenters are standing around for days talking about which tool is better to frame the house
The way I see it, modern web development isn't about debating which wood screws to use or which lengths of wood... it's arguing about which pre-fab construction to use (a framework), and whether it's acceptable that if you want to build a KitchenObject, you have to start off with a RoomObject and add PlumbingFixtures, but you end up with a ClosetChild in the kitchen because it was inherited with the RoomObject.
Meanwhile I'm just here building with 2x4s and wood screws, watching the debate rage on :)
Hmm. Antipatterns do matter though - red tape abounds and is rarely conducive to good apps. Don't, e.g., bother to write comments for all your functions, modularize everything, or insist that every line of code must be tested.
Not that those things can't be good in small amounts, but 95% of modern web dev is busy nonsense like this, and it shows. Tech decisions are another source of waffling, to be sure. There has been more than one rewrite of a product into a worse product, which was then rewritten again, or practically rewritten, because the rewrite was bad.
Yeah some of it is management sticking its nose where it doesn't belong, but yeah, I think the point is also that devs argue about mostly what doesn't matter, instead of focusing on what does matter.
> Pick a prominent MVC based web framework, a RDBMS, some front-end libraries and go build the thing.
It's really not that easy. At the very least the framework you use needs to be well thought-out so you don't run into roadblocks later, and your application may very well lend itself to certain technologies that won't be obvious if you don't spend some time thinking things through and understanding tradeoffs.
When constructing a building you don't just choose a material, a layout, etc and get to work. There's significant effort in understanding the environment you're building in so you can inform your choices. Buildings have collapsed because the soil conditions weren't well enough understood.
The problem of which DB to use is one that can be figured out ahead of time with even modest attention to more rigorous engineering practice. But that’s not what webdevs the subject of this article do. They play guess-and-check games like they are 15th century engineers building a bridge, because they have hacker mentalities instead of engineer mentalities.
Not to mention the cult of celebrity and fad following in the industry.
The analogies to civil engineering are strained. Sure there is some truth to what you are saying, but requirements for civil engineering projects tend to be more easily stated at a high level and depend on the unchanging and unforgiving laws of physics plus some slowly growing list of available materials and construction processes. Everyone understands intuitively that bridges are expensive and you can't just change requirements on a whim without potentially blowing up the whole project and starting over.
Software on the other hand can be made to do anything, we can change it any time, and the technical requirements for scale and functionality are hard to predict ahead of time. Trying to apply civil engineering mentality to software results in a process that is too slow and gets beaten by hacker mentality every time.
Software is also limited by the laws of physics, which is the actual hardware (silicon) it’s running on. Things like CPU clock speed, IPC, memory latency and throughput are all constrained by things like the speed of light, capacitor physics, current status of silicon technology and manufacturing, etc.
So when a web app performs sluggishly on commodity devices, it’s because the devs haven’t put thought into the actual physical constraints of CPU and memory. It’s just plain bad engineering.
(You might say that the whole browser DOM/JS model is at fault, but then that’s bad engineering for the Web spec designers. To be fair though, things like the DOM and JS were invented about 3 decades ago at the time when computer performance was increasing so fast that people really didn’t think about these constraints seriously. Nowadays things are different, and these initial design decisions are biting us…)
We can tackle so-called "essential" problems forever & make no progress actually making the web better though.
This post impacted me & resonated strongly: it's a great prompt to be more alert as to whether we're talking about details or something core & significant. Devs can have these discussions deep in the weeds, but it's kind of a farce & a disservice when petty implementation details flood & overrun the topics of what we're making & how we've enriched the media-form.
>I’m hard-pressed to name more than a handful of web apps that I would say are genuinely good with no major flaw.
>Content websites are a different matter. A static or server-rendered content website with adequate typography and semantic markup is generally going to do its job pretty well.
I feel the exact opposite. Most web-based applications are pretty incredible these days.
It's the "content websites" that are insanely bloated and over engineered (and unnecessarily turned into "apps"). E.g. news sites, new reddit, recipe sites
Airtable, Draw.io, Google Docs, and Photopea are all incredibly well-made web apps that are essentially complete products without any major flaws I can think of. Those were just the first four to come to mind; there are countless other wonderfully engineered, full-featured applications which run totally in your browser. It’s actually a bit of a marvel, when you think about it.
There are great web apps and there are great native apps. I think what the author is asking for here is just “better software” and well, wouldn’t we all love that :)
It’s so bad now that an ad-blocker is rarely sufficient, because even without the ads every other paragraph is broken up by some autoplaying video or some callout to a different article.
In some cases I genuinely struggle to read an article without reader mode or an RSS reader that can scrape the full article.
It's been years since I tried to read an article without reader mode. Reader mode is fast enough and accurate enough now that there's no need to bother not using it.
> Everybody seems to have an opinion on how you do your web dev and how websites and web apps should be structured.
It’s true and it leads to many annoying unproductive arguments, IMO the reason is a fundamental lack of a scientific mindset in evaluating techniques and tooling. We don’t have theories or testable hypotheses, we have opinions.
On one level I like and agree with the author’s idea that results are what really matter. As an engineer trying to achieve outcomes for employers, clients, and customers, this approach has served me well.
As a community of professionals, it would be nice to see more effort to work towards rigorously proven conclusions about what works best and why.
When someone touts a paradigm, technique, or framework as the best, without any real data or theory as to why, I think they should be told to provide evidence or withdraw their argument, and I’d say that goes double when it’s the framework or technique you like or use. IMO, these shouldn’t be fights about tech, they should be ones about quality of discourse.
It's shocking to me how little data there seems to be to support what many consider to be "best practice". About the only real data point I see thrown around is that bugs per LOC is fairly constant. Thus, you get people saying long, two-page functions are bad and should be broken up. But never have I seen someone back up such an exhortation with data. Let alone more nebulous rules like "objects should be open for extension and closed for modification," or anything out of SOLID, really.
It is there. E.g. the book Accelerate! by Forsgren et al. explains, scientifically, what methods and concepts produce better and worse software. In an accessible way.
There are numerous papers on how TDD does X to your software or team. Numerous theses on how static typing causes more or less Y, and so on.
Yet here we are, ignoring all that and discussing the pros and cons of TDD based on a YouTube video, a blog post, an anecdote, and mostly a giant pile of ego and opinions.
Do those papers all agree?
And what's the right course of action if, for instance, everyone in your team hates using TDD and threatens to resign or switch teams if you attempt to enforce it? Though surely it's more likely that something like "TDD" isn't an either-or proposition, and even if the evidence clearly shows the effectiveness in particular scenarios or when done in a particular manner, it's not a given that more TDD is always better, and hence there's still benefit to discussing the pros and cons.
Of course they don't agree. Though I have asked numerous times for scientific research or arguments that prove the science in e.g. Accelerate! wrong, no-one, not even the strongest opponent of CI, TDD or such, has ever produced that.
Instead, people come with handwavy non-arguments that "in this particular scenario it is different". To which I neither agree nor disagree. I only have proof that it does apply, even to scenario X. It's up to "you" to then scientifically prove that it doesn't apply.
Point being: again: we should have discussions about tabs, spaces, TDD, CI, lean, agile, microservices based on scientific research. In a scientific manner. Not based on anecdotes and personal feelings.
Sure, but unless someone's taken the time to do the studies, anecdotes are the best data we have. And I'm not entirely dismissive of personal feelings either, when they're formed as the result of years of experience exploring different techniques. Especially because I tend to feel there's more to software development than just "engineering", and there's a creative side to it that's never going to be completely reducible to whether technique A is measurably better than technique B. Certainly in a tech lead type role, being aware of how those in your team enjoy working individually while still sticking to basic principles and guidelines that allow the team as a whole to produce quality code in a timely fashion is one of the hardest balancing acts to get right.
But when there's clear and accessible science contradicting those anecdotes and personal feelings, the ball is with "you".
It's now up to you to show some scientific method, data and theses, to prove the other science wrong.
Or, in other words: if science and "personal feelings" disagree, it's very obvious which should be considered the right take. Until the time that personal feeling is proven right. Scientifically.
In the olden times there were scientists actually researching project management, measuring, changing inputs, performing experiments, probably at IBM or NASA.
We don’t do this anymore, because IT is now less costly, and sometimes doing the thing is faster than writing the plan for it. So Agile won. Probably deterministic methodologies got stuck in a local minimum, which was ridiculously expensive to run compared to Agile. I still have my book of IBM RUP process.
Sorta. It isn't a lack of a scientific mindset. I'd argue it's a premature quantification of data, such that you can objectively look at it in a scientific way.
We are taught that you can compare things objectively in most school-level science classes. This leads folks to thinking their perspective is valid scientifically, especially if they can put numbers to it. After all, isn't that naive empiricism?
I'm in complete agreement with you. At this point, I also endorse threats of flogging for someone who tries to introduce a new framework to a project after the actual work has started.
And actual flogging for the management team if they accept that framework.
As a data point I've just joined a small project which already has a JS framework (Vue) and 2 build systems (1 broken) and the developer has just added React and a third build system. No talk of migration - they're going to run in parallel.
The migration is often also a source of problems in itself.
There's nothing wrong with either Vue or React, but adding a second one of those for no reason deserves the flogging thing that the grandparent poster is talking about.
Build systems... well they're a dime a dozen and in Javascript all build tools kinda suck (yes, including Vite), but still no excuse for having two.
Curious why you suggest all JS build tools suck... the previous company I was at had .NET, Java (backend), golang, iOS, Android and JS (react) builds, which all compiled, ran unit tests, packaged and deployed to relevant targets. The react build was the least problematic by far (though we did get bitten when upgrading the host OS, which upgraded node to a version incompatible with certain older packages being used, but it didn't take too long to fix). Accepted, there was probably a little less functionality in that component than the mobile apps, but iOS was far and away the most problematic, FWIW.
"Suck" doesn't mean "worse compared to others". It means it has problems by itself that I would prefer not having to deal with. Ideally we'd need no build tools for JS. Except for JSX, modern browsers are mature enough. Other languages build tools are often terrible too, the only one I actually enjoy is probably Cargo.
As for the real answer: for trivial stuff, all build tools work fine. For anything beyond that they still have varying degrees of problems. You seem to have encountered those in iOS but not in the JS tool you used. An engineer working on a super vanilla iOS project but a complex JS project would probably say the opposite.
Vite works quite well for most part, but there's still quite a lot that can go wrong with it. It depends on two other build tools (esbuild and Rollup), so some otherwise trivial configurations are difficult and prone to break with upgrades. But for happy-path stuff it works perfectly.
Most of the time a vanilla configuration is fine, but sometimes you have special requirements (sometimes due to vendors or specific technologies) that the younger tools aren't good for, while older tools have their own issues.
What's a language for which you don't think build tools "suck" then? Btw another data point is the C++ open source project I contribute to regularly, where I continually have all manner of problems with local debug builds (the CI builds are generally OK except for being unbearably slow, and it's a single desktop application!).
Amusingly, I'm using Cargo and CC for a large C++ project I have. It's unholy and certainly wrong, but works beautifully and is not as bad as every other C++ build tool I have to use.
This is a wonderful comment! There are missing descriptions of things, of their quality, of the metrics to use when describing them, and we, the users of said tools, end up spending a lot of time going through bad documentation and learning things by practice. For a trade that pushes the digital frontier to everything it touches, the learning is surprisingly analog.
Read the book About Face: Essentials of Interaction Design, it's a great resource on design and on building projects around interaction, as the title suggests.
And still, a lot of devs I have met haven't read About Face: Essentials of Interaction Design. It's a must-read, in my book, for structuring a project. You don't have to copy-paste its ideas, but using its core features makes a project a whole lot smoother for both end users and devs.
I think the bad “web” has a lot more to do with the business side of things than whatever development tools you use. If the website is free to use, it’s either a passion or community project, or it sells you. Sometimes that works out sort of well in places like HN, and other times it works out less well, like on the infinite number of shitty newspapers. But those newspapers can be shitty in all technologies, even in physically printed form.
I do think we need a better tool for finding the passion projects. Because search engines have basically become the “portals” they replaced, curating search results into herding us toward sales, but I’m sure we’ll see a shift sooner or later. It’s sort of already happening with more and more people returning to things like newsletters instead of reading blogs.
> I do think we need a better tool for finding the passion projects
I like the idea! I wonder though what that would look like. How do you imagine such a tool? Would it be something like a "directory" from back in the 90s where people submit their link under a specific category?
> They keep encountering highly flawed websites or web apps and are assuming that regular web devs can do something about it.
This is the reality that has been grinding me down of late. Nor do I find it limited to only web stuff. I see it in embedded and other market areas.
I was excited over the last few decades to see the field of programmers swell. I have always loved the craft of good software development. I was so excited to see so many others also get into it and naively believed that many more people participating would lead to a larger pool of ideas to draw from and everyone would get on board in some sort of utopian democratic industry wide kumbaya. During the rise of open source, I even felt it was really happening.
Absolutely! I could likely find the issue but it isn't obvious. So the complaint remains: for this extremely common and predictable use case, why is it not more reliable?
The main point is that problems like this are spotted everywhere in daily life. Time delays without apparent reason, questionable UI choices.
The complaint is not failure to work - they do work and are often widely used. But the experience is suboptimal for no apparent reason.
Similar to the article - whoever did this [latest annoying thing] needs to look at the "more better" example and just do that.
If the dev pool of 2022 was like the dev pool of (for example) 1994, just larger, you would have been right. But the software goldrush came and attracted many more who were just in it for the money or just weren't as thoughtful. Scale matters, upscaling people is hard.
I'm not sure the ratio is that different. In 1994, how many different browsers, connection speeds, phishing scams, competitors shipping fast and forcing you to do the same to stay competitive, accessibility considerations, etc. were there? If memory serves, dramatically fewer than now.
This thread of comments is kinda weird to me, there's like a romanticized view of old timers or something, and the problem must be the young, money hungry and unenthused. It feels like boring old internet gatekeeping and is not insightful.
There is a totally different world in tech now than in 1994. Tons of stuff and often optimized for mean time to recovery. I crave quality too (believe me) but I think shipping fast and breaking a few eggs (not directly but that's essentially what you're left with) is what is asked of engineers in 2022.
All of my peers seem to care about quality and we do our best but even in the days of buying software on floppy disks, bugs existed.
The scope of problems being solved is much larger. Adding two or three libraries - jQuery and a plugin or two - and not giving a toss about performance was indeed easy. Few auxiliary tools were required.
Back then, we all could have a pretty strong hand on the wheel of our websites & how they functioned, at a detailed level. It was feasible, for the scope was small. But now, we all use package managers, we all use bundlers, we all use minifiers, we all use production builds, we all use tree shakers. Any one of these concerns is a significant realm that tends to have significant people-years worth of work poured into it. The need for these capabilities distances the practitioner from the core truth of the media: we no longer see clearly what we are doing, we no longer have personal stake or control in the processes.
There's still a strong latent desire to re-simplify, to de-configure the web, to de-build it. We had hopes that EcmaScript Modules would help, we had hope HTTP2 Push would be a viable delivery platform, for un-tooling, but we never really made an earnest go at tackling the optimization problems that all brought in; works like Cache-Digest never de-facto materialized or got made-available. Front-end architecture similarly has been dominated by high-abstraction top-down frameworks, namely React, which complects in a wide host of assumptions/abstractions that distance the practitioner from the actual media-form. Just as ESM was a hope that we might get back to understandable, without lots of framework/tools, WebComponents too presents a hope that we can strip away a lot of the specific complexity & find a more inter-modular, general & understandable grounding. The web can & should be at a similar power level to where it is, more so, but in more honest, accessible, direct manners than the crafted-of-necessity deep layering we cobble with today.
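The hoped-for "un-tooled" path looks roughly like this sketch, assuming files are served as-is with no bundler in the loop (all names are made up):

    // counter.js - a plain ES module the browser imports directly;
    // no bundler, minifier, or tree shaker in the path.
    export function mountCounter(root) {
      let count = 0;
      const button = document.createElement('button');
      const update = () => { button.textContent = `Clicked ${count} times`; };
      button.addEventListener('click', () => { count += 1; update(); });
      update();
      root.appendChild(button);
    }

    // A page then loads it with:
    //   <script type="module">
    //     import { mountCounter } from './counter.js';
    //     mountCounter(document.body);
    //   </script>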
> and not giving a toss about performance was indeed easy. Few auxiliary tools were required.
When the internet came through a modem and bandwidth was very expensive, performance was concern #1. So pages were kilobytes, not multi-megabyte monstrosities.
> When the internet came through a modem and bandwidth was very expensive,
That effectively ended in the mid 00s. Otherwise we wouldn't have had services like YouTube.
Between those times and around 2012 there was an explosion of script bloat because back then it was more important to have an interactive page than a fast page.
Bandwidth resurfaced as a concern only with the proliferation of mobile devices.
The whole premise of tree shaking, introduced in 2012, was to reduce bloat.
which of the techniques that i listed that are common now were possible then?
we did indeed squeeze a lot out of the code we wrote. and we were very aware of js performance issues - using array.join religiously instead of concatenation was a big one. but most of the modern techniques & concerns simply weren't applicable or possible.
the shift from writing it ourselves to loading dozens or hundreds of modules came quite late, all told. that led to the explosion of performance tools & techniques.
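for the curious, the array.join trick looked roughly like this (era-appropriate style):

    // build the string in an array and join once, instead of repeated +=
    // concatenation, which was slow in the JS engines of that era
    var parts = [];
    for (var i = 0; i < 1000; i++) {
      parts.push('<li>item ' + i + '</li>');
    }
    var markup = '<ul>' + parts.join('') + '</ul>';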
I've got a strong suspicion that there would be more productivity if the bottom half of developer (or more) left the industry. Control for seniority, of course. The majority of 1Xers produce outright bad code, contribute disproportionately to tech debt, don't read user stories thoroughly, don't test their code properly, and generally get in everyone else's way.
The most productive thing our industry could do is create an anonymous blacklist of devs to avoid and code samples of their shit work.
Really? I think the most productive thing the industry could do is get rid of developers who drastically overvalue their own work, methods, and ideas, haughtily dismiss the value of others’ contributions out of hand, and blame everyone else for a project’s shortcomings. You can usually ID them by their persistently rolled eyes, big fish in small pond mentality, and 1.9 septillion reputation on Stack Overflow.
you're both wrong. the most productive thing would be to grow up and help bring out the best in each other. create an environment that's about the skill. set aside ego and focus on building something wonderful. but doesn't seem that's gonna happen soon. (i dont really like bigtech, but the "blameless" culture seems like it could be very productive and save many a dramatic exit)
I wouldn't be so fast to dismiss the previous answers, although I think they're a bit snarky. There is indeed a larger number of developers than is necessary on lots of projects from influential companies lately, and this heavily affects the architecture and the outcome of those projects.
The goal isn't to "build X" anymore, it's to "build X but with N engineers", with N being multiple times the minimum amount needed. This affects how tools are built. They're built "to scale", and this trickles down to smaller companies too. This also affects how software ends up looking, meaning custom GUIs and widgets all the way, even when it's not necessary.
After working in a couple Unicorns where the investors asked for larger headcount, and a short stint in a FAANG, that's my experience.
Maybe there are a ton of unqualified developers in SV playing Wordle instead of coding? I have no idea. The vast majority of software projects aren't managed by FAANG or Unicorn companies. The financial landscape in those companies is completely detached from reality and not a good basis for comparison.
> Maybe there are a ton of unqualified developers in SV playing Wordle instead of coding?
Maybe there are, I also have no idea. But in my experience most engineers are qualified and hard workers, even when there is team bloat. Companies are hiring well. It's just that they're hiring too much.
> The vast majority of software projects aren't managed by FAANG or Unicorn companies. The financial landscape in those companies is completely detached from reality and not a good basis for comparison.
But this happens in several industries, not only FAANG. Most companies that get off the ground with a skeleton crew will start hiring more as soon as they get more cash; it doesn't have to be from investors. That's because they want more things faster. The reality is grimmer, though: 9 women can't make a baby in 1 month. And 9 women can't make an adult-sized baby in 9 months either.
It sounds like I struck a nerve. Does it hurt you knowing there are a lot of developers out there that contribute more in a week than you do in several months?
Software engineering has huge ranges of salary. Not only do I doubt you get paid the same, but I also doubt you get paid as much as my best performing lead engineers.
Agreed, I'd argue the even higher meta problem is that we're afraid to hurt feelings.
Because there is such a big drive for growth and more bodies, we have to tiptoe around crap work and crap work ethics. Nothing wrong with being a bad dev per se if you accept it and learn to improve. But instead we ego-stroke juniors and poor-quality developers, inflate their contributions, inflate their titles and then wonder why everything is shit at the end of the day. Those poorly trained juniors and boot camp devs are now eng managers, CTOs and loud evangelists that are driving our industry into the gutter with billions of funding.
Hmm, just need to fork it and call it "Nopilot" (though personally I find Copilot to be sometimes handy, it makes plausible guesses when I am doing something repetitive, I'm sure that's because my code isn't DRY enough etc. but it saves me a little time)
Just an observation: the native apps that are better are built on massive, proprietary frameworks developed by platform owners. In some ways, most of the web frameworks are trying to be the One True Web Platform Framework that similarly makes it possible for developers to make better web apps the way that say, AppKit or UIKit makes that possible on Apple's platforms.
So I'm left wondering, did the frameworks fail or did the web developer community fail?
Until or unless something equivalent to those platform frameworks becomes part of the browser, the situation won’t change significantly. The fact is that the current interface between web app and browser, that is, HTML, CSS, and JavaScript (and WASM), is highly inadequate for productivity applications.
I think “highly inadequate” is a big exaggeration. As an example, many people are fairly productive on slack. (Arguably more productive than email.) Native slack is nearly equivalent to web. Another example is google sheets or google docs.
I do agree native had some big benefits. Probably the biggest hindrance to web is that you’re basically downloading the binary from the server and then executing it in line. That leads to a ton of performance optimizations (like optimizing bundle size or async loading things) which make things fairly complicated (webpack) and can also lead to UX compromises. None of which is a problem for native apps.
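As a rough sketch, one such async-loading optimization is just a dynamic import ('./heavy-editor.js' and mountEditor are hypothetical names):

    // The heavy module only downloads when the user asks for it,
    // keeping the initial payload small.
    document.querySelector('#open-editor').addEventListener('click', async () => {
      const { mountEditor } = await import('./heavy-editor.js');
      mountEditor(document.querySelector('#editor-root'));
    });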
That's 100% fair, but I don't think that's really the argument of GP. The argument is that the interface is inadequate, not that the result itself is.
With enough force you can definitely fit a square object in a round hole. Not saying I totally agree. IMO it heavily depends. I've seen my share of native monstrosities (seen lots of XAML/WPF apps with zero accessibility and very slow, although WPF is lovely from a dev perspective), and "productivity" covers a lot of ground.
The new breed of try-hard, build-big, would-be "final" frameworks plays against what the web is: it trades the web's strengths in incrementalism for a probably-not-all-that totalization. Trying to decide and pick the one true platform, giving up the more malleable low-level platform, is the problem; that closed-world expectation is what has ruined the web's great prospects.
I hate web apps with a passion. Has anyone ever used the iOS YouTube app? This thing is hailed as a marvelous example of web apps working "like native." Bollocks. It's an absolute mess. If one of the wealthiest companies in the world can't build a decent web app, there is no hope for anyone else.
Web apps get the user 80% of what they want with 50% of the cost, so the business case is clear. That's why these apps are proliferating. They are not proliferating because they're good, as good, or better. They're just not.
The trillion-dollar company is often less able to produce quality products than a small shop, because of bureaucracy and hubris and dead weight.
So while I agree that the business case for web apps incentivizes mediocrity, there is still plenty of hope for everyone else. If your startup has a really great web app with awesome UX, you will have a pretty big advantage.
True, but not so fast: small shops are also constantly trying to copy the bureaucracy, the hubris and dead weight. Eventually I expect all of them to get there.
By web app, I believe the author meant apps served in a browser, not a mobile app made from html/css/js with a cordova/capacitorjs wrapper. Also, google has problems delivering functionality in general. Have you tried using google home? It never works properly for me, and for some reason I can't log in to my family's home with my google workspaces account. Most things except for search are a mess there, and search is getting worse as well.
I think it's because most apps are influenced way too much by "standards" that people expect but that actually suck.
I have 2 examples of web apps working really fucking well:
Missive (https://missiveapp.com/) is an email client web app with features that should be standard in every other email client. I run it via Firefox and it's the best damn email client I've ever used. It uses JS extensively but they use it very well.
Migadu (https://migadu.com/) is the absolute gold standard of email inbox services. It's not fancy but, oh my days, does it work, and it's so damn easy to use. Every time I have to set up email I get happy, because these guys spent exactly 0 seconds trying to appeal to people who equate parallax scrolling with competence. Migadu does not use JS as far as I can tell.
As a counterpoint to both, Office 365/Outlook is a gigantic, floppy donkey dick of a web app that makes all sorts of assumptions, has 3 different interfaces and is generally just a pain in the ass to use. Something as simple as logging in/out is a muddled, technical mess that makes it a god damned chore to change accounts.
Exactly. Money is always the answer to why large companies seemingly make bad software. There's always some cost-benefit analysis behind every decision.
This article could be about 20% as long if you cut out the rambling. You say modern SPAs are terrible but don't provide any examples of what you mean by that. You go on a completely irrelevant rant about Icelandic culture in the middle. But it's a post complaining about how modern web dev sucks, so HN will upvote it anyway!
We don't need more talking about which apps are good and which are bad. We need more doing and building (coincidentally, what the people actually working in these ecosystems and trying to improve them are already engaged in).
I'm a little confused by the article. I remember developing web apps where I could just spin up a LAMP stack on my machine, but times have changed and we should accept that everything is getting more complicated.
He should try mobile development; it's much worse in my opinion. I'm a mobile dev, FYI.
Agree.
At this point I'm a little bewildered at what iOS and Android have brought to mobile development that wasn't already in j2me, at least in a preliminary way.
At the start of iOS it was "look, pretty icons" and "no more permission popups". But most platforms had icons anyway, and all the apps now get the annoying popups for the same set of permissions (except for networking) that were the bugbear of j2me at the time. iOS and Android had initially gotten rid of all the permission boxes for a better UX.
So we're left with the expectation of a seamless web. But it never materializes because of endless permission issues, which the platforms try to reduce by providing a one-size-fits-all cloud solution (iCloud and Google), both of which allow relentless advertising and gamification of our personal information. It's so bad that governments all over are racing toward digital "identities".
I'll take the simplicity of j2me any day, thank you. The development environment was GBs smaller and faster, and the problem set is pretty much the same. At least with j2me there was an idea of responsibility around data usage (I generally only get 3G, even now), as opposed to 40MB apps that download hundreds more. All the apps I developed during that time were offline-first and synced when a connection was available.
The real complexity I see is in the consumption of video and hi-res images, as well as interaction with other apps (e.g. the share menu). Unless I'm seriously mistaken, everything else was in j2me (even OpenGL). All the other complexity is in the tools we have these days and the endless rhetoric about which ungainly monster is better than the next.
This is true. I started with Java ME too, in the Eclipse IDE, then proceeded to integrating the BlackBerry SDK, and it went from there. Mobile development never recovered from the phase right after BlackBerry disappeared; it went from 0 to 100 real quick from that very moment.
I was still hacking some J2ME for a Nokia app then Samsung released an Android phone and I was like wtf is an Android.
Honestly, I find it more than a little ironic that the author is arguing against single-page web apps and then praises native apps as “doing it right”.
Native apps are built and run in a way FAR closer to how single-page web apps work: in an SPA the browser becomes the runtime environment and the server becomes mostly just an API.
And you are not likely to ever see the kind of fluid results from a traditional full page render web app that you do in mobile.
And mobile at this point has more edge cases to develop for than modern browsers.
But the difference is, as I see it, there’s way less bickering on mobile over “the right way” because most developers use the native stack and instead of complaining about everything they think is hard, they roll up their sleeves and build a good product.
Maybe if we all stopped whining about single page apps being hard and started worrying about making a great product, then we too could match the quality of a nice native app built by devs like yourself.
There’s more bickering in Android development, in my opinion. There’s the Google way with Android Architecture Components, and now the up-and-coming Jetpack Compose. There are huge mobile teams on VIPER, mid-size companies on MVVM or RxJava, and the rest on MVP, MVC, etc.
I love web dev because the community is willing to address problems; engineering follows product in a tight feedback loop. Developer experience becomes a target to optimize for, and not just the overall ROI. In almost every other CS/software field save for machine learning, reinventing the wheel is a privilege reserved for large corporations. If you have ever worked in the embedded space (or talked to engineers who did), or even in a non-IT industry, it is obvious that many fundamental protocols and technologies haven't changed since the days "their grandpappy did it". The web may move fast, but the UX is constantly improving and the value delivered is increasing every day; there is a very good reason why people from almost every other industry are trying to get into software engineering.
Even if you don't like the current JS stack, there is nothing stopping you from pulling a Figma and building everything from scratch in WebAssembly; after all, game developers (before Unity) did it all the time. Show me a burnt-out web developer and I will show you ten hardware/mechanical/electronic engineers who would give their arms and legs for the ease of tooling and the large salary of web devs. You are being paid 6-7 figures to debug a bunch of UI code in a high-level scripting language, and your biggest complaint is that the churn is too bad?
I think the article needs to kick off with examples of less-better web apps; otherwise I am a bit confused about what is wrong and what he wants us to fix instead of bikeshedding frameworks.
Why are tech choices having such an impact on the experience to begin with? Aside from small apps, the web developer is just the guy making the magic happen, there should be an actual product owner somewhere who is making decisions on experience. It's up to the devs to make the experience happen, but poor design is a different problem than what framework you wrote the code in.
I wish. Compared to 10 or 20 years ago developers have a lot less say in these things.
10 years ago I would have been able to block some megalomaniac plan from a UI/UX designer outright.
Today it is borderline impossible, as designers tend to have 10x more power than before and don't really care about feedback from developers. Everything is done on a "Parallel Track" sprint, also known as "Waterfall", since designers can't really do Agile. So development teams get a bundle of tasks that's a black box until you open up and there's all kinds of custom shit copied from products from trillion-dollar companies, done by teams of hundreds or thousands.
Honestly having to work like this makes me consider leaving this industry.
...followed by a 1 hour debate on slack and a series of meetings with an ever increasing number of non-technical managers before a likely unsuccessful outcome.
Arguably worth trying anyway, but let's say I want to say "No" to 3 of these new misfeatures per week. Do I still have time to do any work? Probably not. So I'll pick my battles.
One thing I've noticed is that almost all the new web frameworks I come across nowadays seem to cater towards search engine optimization (i.e. getting 99/100 on Google PageSpeed Insights).
It's a very specific kind of performance. It doesn't measure long running apps, interactions and visualizations.
When you play a video game you might have to download and install it first, which might take quite a while, but we don't say "this game has bad performance" because of that.
Lighthouse measures that first second or first few seconds after you visit a site at scroll position 0/0. It doesn't care about how much bloat comes after you scroll. It discourages you from pre-loading resources (including JS and CSS) that you don't _immediately_ need for that first hit.
It is really about content and marketing websites with low interaction. You can absolutely make good interactive applications that also get a high Lighthouse score, but the overlap is not necessarily always feasible or the best tradeoff to make.
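To illustrate (my own sketch; renderBigChart and the element id are stand-ins, not real APIs), this is the kind of "keep the first hit tiny, defer the rest until it's visible" pattern those scores reward:

```typescript
// Defer heavy below-the-fold work until the element actually scrolls into
// view, so the initial load at scroll position 0/0 stays small.
function renderBigChart(el: HTMLElement): void {
  // Placeholder for expensive rendering work (canvas, SVG, data fetches...).
  el.textContent = "chart goes here";
}

const chartEl = document.querySelector<HTMLElement>("#usage-chart");
if (chartEl) {
  const observer = new IntersectionObserver((entries, obs) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      obs.disconnect(); // only initialise once
      renderBigChart(chartEl);
    }
  });
  observer.observe(chartEl);
}
```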
> Lighthouse measures that first second or first few seconds after you visit a site at scroll position 0/0. It doesn't care about how much bloat comes after you scroll. It discourages you from pre-loading resources (including JS and CSS) that you don't _immediately_ need for that first hit.
This is true - or rather, was true. We're making lots of progress on a mode to measure performance beyond the initial cold load. Check out https://web.dev/lighthouse-user-flows/
> It is really about content and marketing websites with low interaction.
Well, that isn't our intention at all, but I can see how you might come to that conclusion. We've only recently started investing in how to measure interaction. See https://web.dev/inp/
Thanks for the work you and your team are doing. Lighthouse, while not perfect, is a helpful tool. It sounds like it would be a fun project to work on.
AFAIK they still use it, and it plays a huge role (based on my fairly limited experience). A couple of months back I did a little consulting on the side for a small ecommerce store that had recently struggled to get any traffic despite all their SEO analytics showing green except for page speed. They ranked poorly for just about every keyword they had previously been doing well on.
Bit of CDN/Caching, upped them to a better hosting plan and they're back on track again - rankings are back to normal and page speed is way up.
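For what it's worth, the "bit of CDN/caching" can be as small as getting cache headers right. A rough sketch with Express (the paths and max-ages here are invented, not what I actually did for that store):

```typescript
import express from "express";

const app = express();

// Fingerprinted assets (e.g. app.3f9c1b.js) can be cached essentially forever,
// because a new deploy produces a new filename.
app.use(
  "/assets",
  express.static("dist/assets", { maxAge: "365d", immutable: true })
);

// HTML should stay fresh for the browser, but a CDN can still cache it at the
// edge for a few minutes (s-maxage) and take load off the origin.
app.get("/", (_req, res) => {
  res.set("Cache-Control", "public, max-age=0, s-maxage=600");
  res.sendFile("index.html", { root: "dist" });
});

app.listen(3000);
```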
A separate signal is a user leaving the SERP, going back to it and checking another site: a fairly clear sign that a site failed to deliver. And users leave if your page loads slowly.
I build https://kinopio.club, a spatial thinking tool that's also (I hope) a good website. I'm only one person, without the resources to build separate iOS/Android apps, so the website needs to work everywhere and be fast and responsive.
The header and footer, including the logo hide on mobile during scroll because mobile safari doesn’t send enough info to position them accurately during flicks. Are you seeing flickering on iOS or android?
I released an update that turns off sticky card animations for new users; after 20 cards are created, the feature 'unlocks', and you can turn it off again if you'd like.
I think they need to be more smooth in order to look natural, so I would try with some basic easeInOut or easeOut instead of the jumpy movement and then go from there.
In my opinion interaction animations should be really fast, otherwise they slow down some users.
I also wonder if there is a timing issue with invoking the animations. Some of them get invoked immediately on hover and some of them seem to be delayed.
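For example, with the Web Animations API that might look something like this (a sketch; the selector, distances and timings are invented):

```typescript
// Snap a card into place with a short ease-out so the motion reads as natural
// without slowing the interaction down.
const card = document.querySelector<HTMLElement>(".card");

card?.animate(
  [
    { transform: "translateY(8px)", opacity: 0.8 },
    { transform: "translateY(0)", opacity: 1 },
  ],
  {
    duration: 150,      // keep interaction animations fast
    easing: "ease-out", // or a custom cubic-bezier(...) curve
    fill: "forwards",
  }
);
```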
It's hard to tell whether the animations are functioning as designed for you; it's possible they're bugging out/skipping frames on your device. If you're able to email me a screencast (hi@kinopio.club, or post on the forums at club.kinopio.club) that'd be super useful for tweaking things.
Some UI issues aside, one can see the great potential of web apps in their ability to be shared and used without installing. I love native apps, but for solo developers this is the best alternative.
I would like to have the header/footer fixed in position, but Safari on iOS doesn't accurately report position information during swipe scrolling, which is what would let me keep them fixed without jittering. (Because the app is designed to be pinch-scrolled while maintaining the header, I can't just use regular CSS position:fixed.)
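A minimal sketch of that workaround (not Kinopio's actual code; the class name is made up): hide the header while a flick is in progress and bring it back once scrolling settles.

```typescript
// Position info isn't reliable mid-flick in iOS Safari, so instead of pinning
// the header, hide it during scroll and show it again when scrolling settles.
const header = document.querySelector<HTMLElement>(".header");
let settleTimer: number | undefined;

window.addEventListener(
  "scroll",
  () => {
    header?.classList.add("is-hidden"); // ".is-hidden" is hypothetical, e.g. opacity: 0
    window.clearTimeout(settleTimer);
    settleTimer = window.setTimeout(() => {
      header?.classList.remove("is-hidden");
    }, 150); // reappear shortly after the last scroll event
  },
  { passive: true }
);
```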
> The problem is that doing a Single-Page-App ‘properly’ is even more expensive than doing a mediocre Single-Page-App, which in turn is considerably more expensive than an old-style multi-page web service.
Is that true?
Having some experience with both styles, I don’t see a difference in time (after the learning curve for doing something new).
Totally agree we should be talking about great web and native experiences rather than the tech that sits under them. Whenever I see people arguing about frameworks I think of the midwit meme where the opposite ends just use what they know.
The web dev community should just acknowledge the simple fact that a typesetting engine from the '80s (HTML) and the legacy hacks built around it (CSS and JS) are not a good foundation for making modern apps.
It's like trying to build apps out of spreadsheet cells (it's actually possible, but thankfully we don't do it) by throwing a lot of abstractions and frameworks on top to hide the ugliness of the foundation layer.
Once this is acknowledged, things can start moving fast. A lot of good stuff has been developed and discovered in the web dev community, but the foundation under it is total and complete crap.
I also believe we're at a good moment in time to build a better web: there's virtually one major player left in the browser engine market, so it's easier to experiment with adding support for new foundation layers (think WASM or the Dart VM).
But so far, no matter how many hacks the web community comes up with, web apps will be predominantly associated with horrible experience, performance and quality.
I'm not entirely disagreeing with you, but there are many great web apps, aren't there? Google Maps, Docs, Sheets and Slides - they all kind of work without being super horrible?
It's not like most desktop apps are these great beacons of perfection either; most desktop apps are just as bad as most web apps. Just think of all the ridiculous enterprise applications that people had to work with in the past. And the same applies to mobile apps from the respective app stores.
Also since a lot of the applications that you HAVE to use on a daily basis have moved to the web, you might have this feeling that desktop applications are great because the only exposure you have are applications that you choose to install yourself. But if all these apps were desktop applications you might have a different opinion.
Sorry for the unsolicited continuation of the thread, but I was just typing a comment into the Stack Overflow box and zoomed the page a bit (with pinch zoom). The page now shifts itself to the bottom and right on every keypress.
Can't imagine seeing weird stuff like that in any desktop app, and yet for web apps bugs like this are just normal. We blame developers ("..didn't use this media query api properly..") and ourselves ("oh, it's because the zoom.."), but that's the crux of the problem – you have to spend time and effort to stop these bugs leaking in from the foundation layer.
Good points. But I'm not saying that all web apps are horrible. My point is that it takes enormously more effort, time and money to make a web app that doesn't suck (compared to non-web alternatives).
Google can afford this, for sure.
Likewise with desktop apps – there are quite a lot of horrible ones, for sure, but competition usually drives the quality up (I mean competition between frameworks and/or languages). Nobody uses Tcl/Tk anymore. Naturally, in monopolized markets, enterprise software faces no competition and usually sucks.
Yes, this is one of the best and fastest web apps out there, and yet dragging a selection in that app on my top-of-the-line computer feels more sluggish than doing the same thing in GIMP on a 2010 netbook.
Clearly a lot of effort went into making Photopea, but that performance gap doesn't feel like it's ever going to be plugged, no matter how clever we get at using HTML.
Sure, Flutter is a great example of how the "web app" is just the same app with a crappy foundation layer. Actually, the first time you run a Flutter app side by side as 1) a native desktop app, 2) iOS (in the Simulator), 3) Android (in the emulator) and 4) Web (in a browser), it bends your mind a little bit. All 4 apps look and feel the same, and only the window frame tells the difference.
"fibre connection" does not tell anything about how fast it is unfortunately.
On a related note: IIRC the average internet connection has around 100 ms RTT, 20 Mbps down, 1 Mbps up, 500 ms of FIFO buffering and 0.5% packet loss.
Optimise for this if you want more market share than just city dwellers near US coastlines.
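One way to at least feel those numbers during development is to emulate them; a sketch using Puppeteer's network emulation (this API covers bandwidth and latency, but not, as far as I know, packet loss or buffering):

```typescript
import puppeteer from "puppeteer";

// Roughly emulate the "average" connection above: ~100 ms RTT, 20 Mbps down,
// 1 Mbps up. Packet loss and FIFO buffering aren't covered by this API.
const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.emulateNetworkConditions({
  download: (20 * 1024 * 1024) / 8, // bytes per second
  upload: (1 * 1024 * 1024) / 8,
  latency: 100, // milliseconds
});

await page.goto("https://example.com"); // example URL
// ...click around, profile, take traces...
await browser.close();
```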
I occasionally work with a guy who opens every email with "Hey <person>" regardless of who it is and it always struck me as informal. I wonder if it's a similar rank-blindness in conversations.
Swede here. It’s really prevalent in Sweden to just use first names irrespective of who it is: your boss, your boss’s boss, your doctor, or, as in the case of an athlete winning an Olympic medal, even the prime minister, even calling her by a nickname[1].
I believe it is partly due to a reform back in the day that dropped titles and formal address[2].
I feel like in the American workplace we’re expected to be casual and friendly with everyone while also being acutely aware of rank. So if your boss sends a silly meme, you might want to reply with another silly meme. But if you go over your boss’s head and send an unsolicited “hey <first name of the boss’s boss>” email, you might still be in for repercussions. Just because everyone is on a first name basis and drinks beer together at happy hour doesn’t mean they’re really on equal footing.
I suppose the point is that you aren't. You talk to your immediate superior instead, and then your boss does the same if he/she decides there's a conversation to be had.
Doesn't want to talk frameworks, but has this as a problem....
"The problem is that doing a Single-Page-App ‘properly’ is even more expensive than doing a mediocre Single-Page-App, which in turn is considerably more expensive than an old-style multi-page web service. If a lack of resources is the limiting factor for the project, any suggestion that involves more work or more resources makes the problem worse."
The point of the frameworks is that doing single-page and multi-page apps properly isn't more expensive than even static stuff.
It looks like they have created their own problems here.
I usually like Baldur's writing but this is all over the place.
> What we end up with is an industry plagued with cost and time overruns. Projects go way over budget and get delivered much too late, if ever.
I was there pre-Angular and pre-React and cost and time overruns were a thing. They always were and always will be. Unsatisfying software is a broad and complex problem and no matter your views, you will find confirmation. Agile coaches will see it in waterfall, Baldur sees it in SPAs, and somebody else will see it in tracking and metrics.
My take would be: Apps that do the job they’re supposed to do in service of the user, without friction for the user. The pleasantness for the user doesn’t come from the app proper, it comes from the user being able to achieve what they want without friction. It’s a bit like what is often said about film music: The best is the one you don’t notice is there.
c'mon. the only way to build a coherent web app these days is to hire one coder freelance, then work him daily for 15 years into an early grave. Then build version 2.0.
I think it’s right-associative. The author wants good apps to use as examples (and since he’s comparing these against an assumed default, ”good” really means “better” here). The author can’t find many of these better apps, so he’s requesting more of them: more better apps.
I admit it’s confusing to have in the headline (it's easier to make sense of after he’s covered the background), but the language is sort of constraining in this area. He probably wanted to avoid “more good apps” as that would be mistaken for “better apps” (which isn’t exactly what he’s after). “More better apps” is less ambiguous, but only so long as you expect that the author isn’t one to intentionally misuse language.
Better yet, end the web and the internet and there would be no disagreements at all about anything web related.
The only disagreements you'll have will be with the people in your immediate vicinity. And to solve that, I propose another solution:
End all human life, and then we've solved every single problem of the world. No racism, no sexism, no bigotry and no problems for as far as the eye can see.
I don't think so. The webpage of this language that "solves" web dev is so spectacularly broken that it managed to crash Chrome for me completely. I have literally thousands of tabs open and it trucks along fine for weeks on end ... unless one of the tabs is elm-lang.org. 15 minutes later and every Chrome window and tab was still unresponsive, so I resorted to killing the process, then quickly closed that tab before it fully loaded on restart. Maybe it works for you? That's kind of even worse as it's not even consistent.
Oof, yeah, that's bad. But I doubt it could be very common or they would have noticed it and fixed it, eh?
Would you consider telling them about the issue? I realize that might be hard since their "community" links are on the bottom of the page that kills your browser, but no doubt they'd really appreciate hearing about it.
In any event, one regression (however painful) for one user doesn't invalidate the whole idea of Elm. It's still the right way to go even if it blows a tire from time to time.
Interestingly there is at least one effort to write an Elm-to-native compiler, so your Elm "web" app could also be compiled to native code & GUIs.
I was just thinking about this exact topic less than an hour ago.
We seem to be slowly reaching a consensus on various best practices and tools, and it's obvious which ones are winners and will be here to stay for the foreseeable future, despite the inevitable churn. There will always be discussion (and disagreements) on how to make things better, and when I was less experienced, I (ironically) used to be way more opinionated on the best approaches, but over time, after witnessing the evolution over the years, it always comes down to one core tenet: choose the best tool for the job. Even this isn't purely objective, as it can vary depending on your individual circumstances, time constraints, and the people (and their skillsets) available to do the work.
I've always imagined a future where almost anyone can quickly assemble high quality applications (both software and hardware) to meet almost any human need. We're seeing this trend with all of the low-code/no-code tools popping up, but to take it up a notch, it will probably require some standardization of tooling and practices. I think this is the next phase of our technological evolution and would be a great solution to the author's request for "more better web apps".
I think the author's points are very valid. Despite the continuous debate and ever-shifting landscape of what is fashionable in web development (which I find very tiresome to keep track of), most web apps pale in comparison to native apps. Why is that? Whatever it is, it seems elusive for web developers to come close in quality and experience. And they seem stuck in endless iterations of "maybe this will work ... nope".
And of course with web assembly, you can actually replicate a lot of that native experience in a browser and even run very slick 3D games or emulators running entire desktop operating systems. So, it's not necessarily true that browsers are not technically able to do a better job because they clearly are. It's just that web developers and web developer tool makers are just not doing a great job leveraging those capabilities judging from most popular web apps.
My personal observation as somebody who focuses on backend mostly but also needs to worry about having a decent frontend for that backend is that we are being limited by an industry that is perpetually suffering from maturity issues. Both at the technical and personal level (I agree with the author on this).
A lot of the technologies and tools are simply not great when compared to native app tooling and they only compare to each other. So, you get these elm, angular, vue, react, etc. proponents each arguing how their preferred thing is better than the other thing and why. What they should be debating is the huge gap with native apps that none of them come close to bridging and why that gap fundamentally is still a thing in 2022. Why do people have to reinvent so many wheels to build a simple web app? And why is delivering a slick web app experience such an elusive goal?
The future is 25 years ago when any idiot with an MBA could open up Visual Basic, Delphi or whatever and click together a working native app in an hour or so. A lot of really bad enterprise software was created that way. But nothing similarly usable ever really got popular for the web and I'd argue using most enterprise web apps is a similarly dreary and miserable experience as it was a quarter of a century ago. It's gotten harder to build those but they still suck.
Mobile developers have had visual UI tools like that available all along even though many would prefer not using those. But in terms of the underlying component frameworks, mobile development is a lot more similar to how Delphi and VB worked 25 years ago than modern react development. IMHO, those component frameworks are a key reason people like the "native" experience. The web based equivalent component frameworks (there are many) are just a lot hackier, messier and ux challenged.
Ironically, with Blazor, you can target the web and get your good old VB code to run in a browser. Not sure if that's the future but it sure seems to work. Again, the problem is not browser limitations but tools. If MS can target browsers with their decades old tools, web developers can raise the ambition level a lot more as well. Why don't they? Apple and Google have a stake in the current status quo of the web sucking more than their mobile walled gardens. So don't look to them for answers.
Sometimes web tech is the best tool for the job. Sometimes native is the best tool for the job. Sometimes it's a mix of both. Again, it depends on the requirements and the resources you have at your disposal.
Also, I think part of the reason you often see lower quality web apps (including the underlying tech) is because the barrier to entry is much lower than with native. If we improve the tooling around web tech, the average person can produce higher quality web apps. As it is now, I agree native generally comes with a better experience, but I see no reason why it can't be eventually matched on the web.