It’s time to kill the web app (plan99.net)
1003 points by raindev on Sept 23, 2017 | 702 comments



I find this unconvincing.

Every negative thing said about the web is true of every other platform, so far. It just seems to ignore how bad software has always been (on average).

"Web development is slowly reinventing the 1990's."

The 90s were slowly reinventing UNIX and stuff invented at Bell Labs.

"Web apps are impossible to secure."

Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.

"Buffers that don’t specify their length"

Is this really a common problem in web apps? Most web apps are built in languages that don't have buffer overrun problems. There are many classes of security bug to be found in web apps, some unique to web apps...I just don't think this is one of them. This was a common problem in those C/C++ programs from the 90s the author is seemingly pretty fond of. Not so much web apps built in PHP/JavaScript/Python/Ruby/Perl/whatever.


The security aspect was an interesting part of this piece, because one of the main reasons webapps took over from Windows apps is that they were perceived as more secure. I could disable ActiveX and Java and be reasonably confident that visiting a webpage would not pwn my computer, which I certainly couldn't do when downloading software from the Internet. And then a major reason mobile apps took over from webapps is that they were perceived as more secure: they were immune to the types of XSRF and XSS vulnerabilities that webapps were prone to.

Consumers don't think about security the way an IT professional does. A programmer thinks of all the ways that a program could fuck up your computer; it's a large part of our job description. The average person is terrible at envisioning things that don't exist or contemplating the consequences of hypotheticals that haven't happened. Their litmus test for whether a platform is secure is "Have I been burned by software on this platform in the past?" If they have been burned enough times by the current incumbent, they start looking around for alternatives that haven't screwed them over yet. If they find anything that does what they need it to do and whose authors promise that it's more secure, they'll switch. Extra bonus points if it has added functionality like fitting in your pocket or letting you instantly talk with anyone on earth.

The depressing corollary of this is that security is not selected for by the market. The key attribute that customers select for is "has it screwed me yet?", which all new systems without obvious vulnerabilities can claim because the bad guys don't have time or incentive to write exploits for them yet. Somebody who actually builds a secure system will be spending resources on securing it that they won't be spending on evangelizing it; they'll lose out to systems that promise security (and usually address a few specific attacks on the previous incumbent). And so the tech industry will naturally oscillate on a ~20-year cycle with new platforms replacing old ones, gaining adoption on better convenience & security, attracting bad actors who take advantage of their vulnerabilities, becoming unusable because of the bad actors, and then eventually being replaced by fresh new platforms.

On the plus side, this is a full-employment theorem for tech entrepreneurs.


> A programmer thinks of all the ways that a program could fuck up your computer; it's a large part of our job description. The average person is terrible at envisioning things that don't exist or contemplating the consequences of hypotheticals that haven't happened.

I'm not sure programmers are much better. There's a long history of security vulnerabilities being reinvented over and over. Like CSRF is simply an instance of an attack first named in the mid 80s ("confused deputies"). And why are buffer overflows still a thing? It's not like there's insufficient knowledge about how to mitigate them.

And blaming this on the market is a cheap attempt to dodge responsibility. If programmers paid more than lip service to responsibility, they'd push for safer languages.


> And blaming this on the market is a cheap attempt to dodge responsibility. If programmers paid more than lip service to responsibility, they'd push for safer languages.

If programmers paid more than lip service to responsibility, the whole dumb paradigm of "worse is better" would not exist in the first place. As it is, we let the market decide, and we even indoctrinate young engineers into thinking that business needs are what always matter most, and everything else is a waste of time (er, "premature optimization").


> If programmers paid more than lip service to responsibility, the whole dumb paradigm of "worse is better" would not exist in the first place.

I used to think like this but I've come to realize that there are two underlying tensions at play:

- How you think the world should work;
- How the world really works.

It turns out that good technical people tend to dwell a lot on the first line of thinking.

Good sales/marketing types on the other hand (are trained to) dwell on the second line of thinking and they exploit this understanding to sell stuff. Their contributions in a company, in general, are easier to measure relative to an engineer's, since revenue can be directly attributed to a specific sales effort.

"Worse is better" is really just a pithy quote on how the world works and it's acceptance is crucial to building a viable business. Make of that what you will.


The world doesn't always work that way though. There are plenty of areas where we've decided that the cost of worse is better is unacceptable, and legislated it into only being acceptable in specific situations. For example, many engineering disciplines.


The prime directive of code made for a company really is to increase profits or decrease costs, though. Most of the time just getting the job done is all that matters. Critical services and spacecraft code are exceptions.


Yes. Which is precisely the root of the problem. Increasing profits and decreasing costs are goals of a company, not of the people who will eventually use the software (internal tools are an exception). The goals of companies and users are only partially aligned (the better your sales&marketing team is, the less they need to be aligned).


> And blaming this on the market is a cheap attempt to dodge responsibility.

How many hacks, data breaches, and privacy violations does it take for consumers to start giving a shit?

Also, any programmer will tell you that just because an issue is tagged "security" doesn't mean it will make it into the sprint. Programmers rarely get to set priorities.


> How many hacks, data breaches, and privacy violations does it take for consumers to start giving a shit?

There's a quote by Douglas Adams that pops up in my mind whenever this subject comes up:

> Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.

This is the only explanation there can be for this. Every time there's a breach somewhere (of which there obviously are plenty), there's a big outrage. But those who should go "oh, could that happen to us, too?" choose to ignore it, usually with hand-waving explanations of how the other guys were obvious idiots and why the whole thing doesn't apply to them.

This obviously goes for consumers and producers.


Exactly this. The last company I was in had a freelance sysadmin and a couple of full-time devs. The sysadmin had been banging on for ages that we needed a proper firewall set up. It was only after we thought we had been hacked (it turned out to be a valid ssh key on a machine that we didn't recognize) that we checked and found at least half of the Windows machines were infected with crap. Only then did they get the firewall. We decided not to admit our mistake about the ssh key, as it seemed like it was the only way to get things done.


> How many hacks, data breaches, and privacy violations does it take for consumers to start giving a shit?

https://en.wikipedia.org/wiki/Say%27s_law

In other words, it takes a better alternative to exist. Better can mean cheaper or faster or easier, a lot of things. That can be accelerated by the economic concept of "war" (ie. any situation that makes alternatives a necessity).


I don't think it's about "dodging responsibility" but just an examination of the tradeoffs involved in development. The code we're developing is becoming more transitory, not less over time. How secure does a system that is going to be replaced by the Next Cool Thing in 4-5 years need to be? It really depends on what you are protecting as much as anything.

The incentives for someone to break into a major retailer, credit card company, or credit bureau are much different from Widget Co.'s internal customer service web database. What I think the article is missing, even though it makes a lot of good points, is that if there's a huge paycheck at the end of it, there will always be someone trying to exploit your system no matter how well designed it is. And if they can't hack the code quickly, they'll learn to "hack" the people operating the code.


> And blaming this on the market is a cheap attempt to dodge responsibility.

You are oversimplifying. Dunno in what programming area you work (or if it's software at all) but "we work with languages X and Y" is something you'll find in 100% of all job adverts.

Tech decisions are pushed as political decisions from people who can't discern a Lumia phone from an average Android. That's the real problem in many cases.

That there exist a lot of irresponsible programmers is a fact as well.


Buffer overflows used to be a thing in all software. Nowadays they're relegated to stuff written in C (essentially).

It used to be that RandomBusinessApp would hit this stuff; now most of it ends up in Java, so it might still crash but usually it's mitigated better.


> If programmers paid more than lip service to responsibility, they'd push for safer languages.

Most programmers want to do their job quickly and easily, and go home.


I disagree with one premise... web apps weren't ever seen as a more secure alternative to Windows apps. They were seen as easier to deploy. That was Netscape's big threat to MS: you could deploy an app to a large audience easily. It's hard to get across how hard things were back in the day. Citrix came out as an option as well... same deal, easier to deploy.

People really thought ActiveX was brilliant... until security became an issue. I can remember when the tide changed.

Anyway, fair points otherwise. Cheers.


Agreed. They are easier to deploy, even multiple times per day. This is one of their selling points even today compared to native mobile applications, which have other advantages.

Another advantage is that they are inherently available across OSes, usually across different browsers (but we know what it takes.)

Finally, they used to be much easier to develop.

Tl;dr: larger audience, lower costs.


I agree. Web apps were easier to deploy, centrally manage and deliver than desktop apps, assuming you had a stable connection. In fact it was often hard to get people to run apps on the web because the internet was either slow or ADSL was unstable. SaaS was considered risky.

The true definition of a full stack developer in those days would make today's definition of full stack faint.

You had to know how to setup hardware with an os with your software and databases, often having to run your gear in a datacentre yourself that you had to figure out your own redundancy for, all for the opportunity to code something to try out. Being equally competent in hardware, networking, administration, scaling and developing a web app was kind of fun. Now those jobs are cut into many jobs.

ActiveX was what Flash tried to be: the promise of Java, of using one codebase everywhere.

Seeing WebAssembly is exciting.


> they'll lose out to systems that promise security (and usually address a few specific attacks on the previous incumbent)

This happens in other areas besides applications as well. Programming languages, operating systems. This leads to an eternal re-invention of the wheel in different forms without ever really moving on.


Yep. Databases, web frameworks, GUI frameworks, editors, concurrency models, social networks, photo-sharing sites, and consumer reviews as well. Outside of computers, it applies to traffic, airlines, politics, publicly-traded companies, education & testing, and any industry related to "coolness" (fashion, entertainment, and all the niche fads that hipsters love).

I refer to these as "unstable industries" - they all exhibit the dynamic that the consequences of success undermine the reasons for that success in the first place. So for example, the key factor that makes an editor or new devtool popular is that it lets you accomplish your task and then gets out of the way, but when you've developed a successful editor or devtool, lots of programmers want to help work on it, they all want to make their mark, and suddenly it gets in your way instead of out of your way. For a social network, the primary driver of success is that all the cool kids who you want to be like are on it, which makes everyone want to get on it, and suddenly the majority of people on it aren't cool. For a review site, the primary driver of success is that people are honest and sharing their experiences out of the goodness of their heart, which brings in readers, which makes the products being reviewed really want to game the reviews, which destroys the trustworthiness of the reviews.

All of these industries are cyclical, and you can make a lot of money - tens of billions of dollars - if you time your entry & exit at the right parts of the cycle. The problem is that actually figuring out that timing is non-trivial (and left as an exercise for the reader), and then you have to contend with a large amount of work and similarly hungry competitors.


>concurrency models

We started out with OS threads (I guess processes came first but whatever) and now we're trying to figure out what the next paradigm should be. It looks to me like it's Hoare (channels, etc) for systems programming and actors for distributed systems, both really really old ideas. To be fair there are other ideas (STM, futures, etc) that fill their own niches, but they either specialize on a smaller problem (futures) or they're still not quite ready for popular adoption (STM). If this is cyclical then I think we're pretty early in the first cycle.

Sure, the spotlight moves from one model to the other and back, but that's because the hype train cannot focus on many things at the same time, not because the ideas go out of style.


> So for example, the key factor that makes an editor or new devtool popular is that it lets you accomplish your task and then gets out of the way, but when you've developed a successful editor or devtool, lots of programmers want to help work on it, they all want to make their mark, and suddenly it gets in your way instead of out of your way.

Only if it is open source. Seems like Sublime Text (just an example) has avoided this effect... perhaps evidence that open source is not the best model for every kind of software?


How do we fix this?


We don't. Learn to embrace it instead.

There's a flip side to everything. In this case, if you "fixed" this problem, it would imply a steady-state world where nothing ever changed, nothing was ever replaced, and nobody could ever take action to fix the things bugging them. To me, this is the ultimate in dystopias. It's like the world in The Giver or Tuck Everlasting, far more oppressive than the knowledge that everything we'll ever build will eventually turn to dust.

Or we could get rid of humans and let machines rule the earth? Actually, that wouldn't work either, these dynamics are inherent in any system with multiple independent actors and a drive toward making things better. If robots did manage to replace humans (ignoring the fact that this is already most peoples' worst nightmare), then the robots would simply find that all their institutions were impermanent and subject to collapse as well.


Is there no possibility of steady progress without having to continually discard good solutions and reinvent things (e.g. web development catching up with the 90s)? Someone on this thread said that our field has no institutional memory. Can we at least fix that?


You run up against Gall's Law [1]. The root cause is that many of our desires are actually contradictory, but because human attention is a tiny sliver of human experience, whenever we focus our attention on some aspect of the system we can always find something that, taken in isolation, can be improved. (I'd be really disappointed if we couldn't, actually; it'd mean we could never make progress). However, the "taken in isolation" clause is key: very often, the system as a whole works precisely because we compromised on the very things that annoy us.

Remember that in some areas, the web is far, far more advanced than software development was in the 90s. It's not unheard of for web companies to push a new version every day, without their customers even noticing. At my very first job in 2000, I did InstallShield packaging and final integration testing. InstallShield had a very high likelihood of screwing up other programs on the system (when was the last time Google stopped working because Hacker News screwed up the latest update?), because all it does is write to various file paths, most of which were shared amongst programs and had no ACLs. So I'd go and stick the final binary on one of a dozen VMs (virtualization was itself a huge leap forward) where we could test that everything still worked in a given configuration, and try installing over a few other applications that did similar things to make sure we weren't breaking anything else. We never did ship - we ran out of money first - but typical release cycles in that era were around 6 months (you still see this in Ubuntu releases, and that was a huge improvement on programs that came before it).

And this was still post-Internet, where you could distribute stuff on a webserver. Go back another decade and you'd be working with a publisher, cutting a master floppy disk, printing up manuals, and distributing to retail stores. You'd have one chance to get it right, and if you didn't, you went out of business.

The thing is, many of the things that made the web such a win in distribution & ubiquity are exactly the same things that this article is complaining about. Move to a binary protocol and you can't do "view source" or open a saved HTML file in a text editor to learn what the author did; programming becomes a high priesthood again. Length-prefix all elements instead of using closing tags and you can't paste in a snippet of HTML without the aid of a compiler; no more formatted text on forums, no more analytics or tracking, no more like buttons, no more ad networks (actually, I can see the appeal now ;-)). Require a compiler to author & distribute a web page and you can't get the critical mass of long-tail content that made the web popular in the first place.

You can see the appeal of all of these suggestions now, in a world where things have gotten complicated enough that only the high priesthood of JS developers can understand it anyway, and we're overrun with ads and trackers and like buttons that everyone has gotten tired of anyway, and a few big companies control most of the web anyway. But we wouldn't have gotten to that point without the content & apps created by people who got started by "view source" on a webpage.

[1] https://en.wikipedia.org/wiki/John_Gall_(author)#Gall.27s_la...


You make a lot of good points.

My concern, as readers who have seen some of my other HN comments may guess, is that the next time someone starts over, they'll neglect accessibility (in the sense of working with screen readers and the like), and people with disabilities will be barred from accessing some important things. "How hard can it be?", the brave new platform developer might think. "I just have to render some widgets on the screen. No bloat!" It's hard enough to make consistent progress in this area; it would help if there were less churn.

Edit: I guess what I (very selfishly) wish for is steady state on UI design and implementation so accessibility can be perfected. I know that's not fair to everyone else though. Other things need improving too.


As someone who had to help "teach" JAWS about UI elements on a friend's computer back in '05-'07, accessibility should be the first concern. If anything, that's one upside to Google - the spider "sees" like a blind person. The better-crawled a page is, the more likely it is you won't lose massive page elements.


FWIW I'd consider it the opposite of selfishness to want to improve accessibility.


Selfish that, in my heart of hearts, I want what benefits me and my friends (some of them), to the exclusion of what the rest of the industry seems to pursue (churn in UI design and implementation, pursuing the latest fashion in visual design).


> Move to a binary protocol and you can't do "view source" or open a saved HTML file in a text editor to learn what the author did

I disagree with that. Using binary formats to exchange data between programs doesn't preclude using textual formats at the human/machine boundary. Yes, "view source" needs to be more intelligent than just displaying raw bytes, but that is already the case with today's textual formats. Everything is minified and obfuscated, so the browser dev tools already have to include a "prettify" option. Moving to a binary protocol would turn that into "decompile" and make it mandatory, but it effectively already is.

Requiring a compiler to author and distribute a web page is no different than requiring a web server or a CGI framework or the JS-to-JS transpiler du jour. It adds another step in the pipeline that needs to be automated away for casual users, but that's manageable. Even if the web world moves to binary formats (as WebAssembly seems to indicate), your one-click hosting provider can still let you work with plain HTML/CSS/JS and abstract the rest; just like it abstracts DNS/HTTP/caching/whatever.


> the browser dev tools already have to include a "prettify" option. Moving to a binary protocol would turn that into "decompile" and make it mandatory, but it effectively already is.

This will be a legal problem. At least in my jurisdiction, transforming source code (which is what prettifying is) is not subject to legal restrictions, but decompiling binary machine code into readable source code is forbidden by copyright law. (For the same reason, I'm concerned about WASM.)


Steady state progress .. towards what?

That one single goal we all share and agree on, and know exactly how to get to so progress can be steady and incremental and continuous?


That's not a million-dollar question but one worth several tens or even hundreds of billions. If you can find the answer to it, you'll push us across the hump and away from this local oscillating maximum.


You make the perfect product

You strive for excellence

You keep improving

Like Jiro did with sushi

And then the product dies with you


> The security aspect was an interesting part of this piece, because one of the main reasons webapps took over from Windows apps is because they were perceived as more secure. I could disable ActiveX and Java and be reasonably confident that visiting a webpage would not pwn my computer, which I certainly couldn't do when downloading software from the Internet.

Indeed. And then we made sure all interesting data (email, business data, code (github/gerrit etc)) was made available to the Web browser - so pwning the computer became irrelevant.

It's indeed like the 90s - from object-oriented office formats, via macros to executable documents - to macro viruses - and total security failure. Now we have networked executable documents with no uniform address-level acl/auth/authz framework (as one in theory could have on an intranet-wide filesystem).

So, yeah, I kind of agree with the author - we're in a bad place. I used to worry about this 10 years ago; by now I've sort of gotten used to the idea that we run the world on duct tape and hand-written signs that say: "Keep out - private property. Beware of the leopard."


> I could disable ActiveX and Java and be reasonably confident that visiting a webpage would not pwn my computer

Unfortunately, this is not entirely true. There were bugs in image processing, PDF processing (some browsers would load it without user prompting), Flash, video decoders, etc. IIRC even in JS engines, though those are more rare. Of course, you could go text-only, but then you couldn't properly access about 99% of modern websites.


When there was a bug in PDF processing, you'd end up with an RCE, right?

But downloading an EXE is basically allowing arbitrary code execution on your machine no matter what. So _even with the security bugs_, webapps are basically safer than installing a native app on desktop, at least in its current state.

I see your point though. There are still a lot of entry points we need to be careful about


It doesn't help that curl | sh has become trendy.


The Javascript security model breaks down in the case of file:///, no overflows are required. The security you get today is more flimsy than you probably think. And it used to be far worse.


> "Web development is slowly reinventing the 1990's."

> The 90s were slowly reinventing UNIX and stuff invented at Bell Labs.

Yes, this reminds me of: "Wasn't all this done years ago at Xerox PARC? (No one remembers what was really done at PARC, but everyone else will assume you remember something they don't.)" [1]

> "Buffers that don’t specify their length"

> Is this really a common problem in web apps? Most web apps are built in languages that don't have buffer overrun problems. There are many classes of security bug to be found in web apps, some unique to web apps...I just don't think this is one of them. This was a common problem in those C/C++ programs from the 90s the author is seemingly pretty fond of. Not so much web apps built in PHP/JavaScript/Python/Ruby/Perl/whatever.

Most injection attacks are due to this; if html used length-prefixed tags rather than open/close tags most injection attacks would go away immediately.

1: https://www.cs.purdue.edu/homes/dec/essay.criticize.html


> if html used length-prefixed tags rather than open/close tags most injection attacks would go away immediately.

That's not really the problem. The problem is that there is no distinction between data and control, leading to everything coming to you in one binary stream. If the control aspect were out-of-band then the problem would really go away.

Length prefixes will just turn into one more thing to overwrite or intercept and change. That's much harder to do when you can't get at the control channel but just at the data channel. Many old school protocols worked like this.


Thank you.

This is the important takeaway here. Changing the encoding simply swaps out one set of vulnerabilities and attacks for another. Separating control flow and data is the actual silver bullet for this category of attacks.

Unfortunately, there’s rarely ever a totally clear logical separation between the two. Anything you want to bucket into “control”, someone else is going to want the client to be able to manipulate as data.


I'm having a hard time seeing how having separate control and data streams would have an effect here. Using FTP to retrieve a document isn't more secure than HTTP... the problem is in how the document itself is parsed. If you added a separate side channel for requesting data (a la FTP), you'd still have the issue of parsing the HTML on the other side.

Granted, if you made that control channel stateful, you'd make a lot of problems go away. But you could do that with a combined control/data stream too.

What am I missing? How would an out-of-band control channel make things easier?

That said, I think many issues with the web could be solved by implementing new protocols as opposed to shoehorning everything into HTTP just to avoid a firewall...


It makes sure that all your code is yours and that no matter what stuff makes it into the data stream it will never be able to do anything because it is just meant to be rendered.

So <html>abc</html> would go as

<html><datum 1></html> where datum 1 would refer to the first datum in the data stream, being 'abc', and no matter what trickery you'd pull to try to put another tag or executable bit or other such nonsense in the datum, it would never be interpreted. This blocks any and all attacks based on tricking the server or the eventual recipient browser of the two streams into doing something active with the datum; it can only be passive data by definition.

For comparison take DTMF, which is in-band signalling and so easily spoofed (and with the 'bluebox' additional tones may be generated that unlock interesting capabilities in systems on the line), and compare with GSM, which does all its signalling out-of-band, and so is much harder to spoof.

The web is basically like DTMF: if you can enter data into a form and that data is spit back out again in some web page to be rendered by the browser later on, you have a vector to inject something malicious, and it will take a very well-thought-out sanitisation process to get rid of all the ways in which you might do that.

If the web were more like GSM you could sit there and inject data in to the data channel until the cows came home but it would never ever lead to a security issue.

No amount of extra encoding and checks will ever close these holes completely as long as the data stays 'in band' with the control information.


I guess what I'm getting at is that it isn't HTTP that's the issue -- it's HTML. I'm all for a control channel in HTTP. But you're still stuck parsing <html><datum_1></html>, and it is difficult to think about reorganizing each tag as a separate datum. At what level do you stop converting the data into separately requestable bits? How would you even code it? And making the tags themselves length-prefixed (like csexp's) wouldn't entirely solve the problem.

I could easily see making <script> and <link> resources required to be separately requested (like images are now -- ignoring data/base64 resources), but we're back to redefining HTML.

I'm not arguing against that...

It's really hard to have these types of debates though, because everyone focuses on different problems of the HTTP/HTML webapp request/response cycle. Like you said, adding separate control/data channels would help, but that doesn't solve SQL injection attacks (which is a whole other class, but that's not really an HTTP/HTML issue, it's a backend issue and I don't see how you'd avoid that with a simple protocol change). Simply making HTTP stateful could potentially solve a different class of session hijacking, etc...

There are so many attack vectors that I think it does make sense to think about what a replacement for HTTP/HTML would look like. Most of these problems arise from trying to re-engineer a document format (HTML) to support interactive webapps. We should think about how to do this better... (without recreating ActiveX -- shudder).


> I could easily see making <script> and <link> resources required to be separately requested (like images are now -- ignoring data/base64 resources), but we're back to redefining HTML.

This has been implemented in HTTP (not HTML); you can enable the requirement right now by serving your pages with an appropriate Content-Security-Policy header.
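
For illustration, a minimal sketch of what that looks like server-side (assuming Flask; the policy string is deliberately simplified, and real policies usually need more directives):

    # Hypothetical sketch: every response forbids inline scripts and
    # third-party script/style origins, so injected markup can't execute.
    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_csp(response):
        response.headers["Content-Security-Policy"] = (
            "default-src 'self'; script-src 'self'; style-src 'self'"
        )
        return response

    @app.route("/")
    def index():
        return "<p>Hello</p>"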


Or, e.g. my preferred encoding of HTML:

    (html "abc")
This guarantees that no matter what is inside "abc" it simply can't escape into the control stream:

    (html "This is not (malicious \"boo\")")
This is just a pretty display of what would actually be these bytes:

    (4:html29:This is not (malicious "boo"))
It doesn't matter what one puts in the atom: it can't escape and damage the control stream.


The two are very different:

    (html "user content")

    user content := " (script "something malicious")"

    (html "" (script "something malicious"))
the length-prefixed version cannot escape in this way.


SQL injection attacks are an excellent example where code and data are mixed. One solution is to do a lot of clever escaping of 'attackable' characters that instruct the DBMS to stop treating a character string as data and start executing things [1]. Escaping attackable characters attempts to partition data from code. This usually works but not perfectly.

Or, run your data through stored procedures instead. It took me a while to figure out why stored procedures were so much more secure than regular queries. I finally figured out it was because a stored procedure does exactly what the grandparent post says: It treats all inputs as data with no possibility to run as code.

[1] https://xkcd.com/327/
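
To make the distinction concrete, a minimal sketch using Python's built-in sqlite3 module (table and data invented for the example):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (name TEXT)")

    name = "Robert'); DROP TABLE students; --"

    # Dangerous: splicing the input into the SQL text itself.
    # conn.execute("INSERT INTO students (name) VALUES ('%s')" % name)

    # Safe: the value is passed as a bind parameter, so the database
    # only ever treats it as data, never as SQL.
    conn.execute("INSERT INTO students (name) VALUES (?)", (name,))
    print(conn.execute("SELECT name FROM students").fetchall())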


Hmm. I'm going to have to disagree about Stored Procedures providing security. You can do all sorts of bad things using stored procedures that may result in unintended code execution!

Perhaps the most naive example: https://pastebin.com/acQqhDvy

I think they're more useful for organization and abstraction than security. Then again, a well organized and smartly abstracted system can lead to better security!

But I think bind parameters are probably a better example of security.

Binding effectively separates the data from the logic. So you define two separate types of things, and then safely join those things together by binding them. It doesn't matter too much whether that happens in the application making a call to the database or in the database in a stored procedure. Obviously this same concept can be applied at many different points along the application stack. The analogous concept in the UI is templating. You define a template and then safely inject data into that template.


> I finally figured out it was because a stored procedure does exactly what the grandparent post says: It treats all inputs as data with no possibility to run as code.

This isn't well defined. Take this pseudocode stored procedure (OK, it's a python function):

    def retrieve_relevant_data(user_input):
        if user_input == 1:
            return BACKING_STORE[5]
        elif user_input == 2:
            perform_side_effects()
            return BACKING_STORE[1]
        else:
            return "Go away."
You can provide any input to that. You could think of this as a function which "treats all input as data with no possibility to run as code" (it never calls eval!). But you could also usefully think of this as defining a tiny virtual machine with opcodes 1 and 2. If you think of it that way, you'll be forced to conclude that it does run user input as code, but the difference is in how you're labeling the function, not in what the function does.

The security gain from a stored procedure, on this analysis, is not that it won't run user input as code. It will! The security gain comes from replacing the full capability of the database ("run code on your local machine") with the smaller, whitelisted set of capabilities defined in the stored procedure.


> The security gain comes from replacing the full capability of the database ("run code on your local machine") with the smaller, whitelisted set of capabilities defined in the stored procedure.

The security gain is that you are only able to run queries that the DBA allows you to. If you can't write arbitrary queries, you won't get arbitrary results. If you can only run a stored procedure, you are abstracted away from those side effects. Another way of saying this -- the security risk is shifted from the app developer to the DBA. Someone is still writing a query (or procedure code), so there will always be some risk.


> The security gain is that you are only able to run queries that the DBA allows you to. If you can't write arbitrary queries, you won't get arbitrary results. If you can only run a stored procedure, you are abstracted away from those side effects. Another way of saying this -- the security risk is shifted from the app developer to the DBA. Someone is still writing a query (or procedure code), so there will always be some risk.

This could also be achieved with a well-written microservice/package that developers go through, without depending on the DBA.


It doesn't sound like we disagree?


The philosophy and semantics are an interesting side issue, but I'd say the default meaning of those words is that your data, in the SQL system, is not treated as SQL code.


Parameterized query builders are possible in every SQL library.

String escaping SQL? How is anyone thinking that is still a thing in 2017? The problem has been solved for two decades


Not just that, but they are great for sharding too.


I'm not following you, how so?


Stored procedures are bad in so many ways - they are harder to deploy and revert than code, harder to unit test*, harder to refactor, and every implementation that I have ever seen that has business logic in stored procedures instead of microservices/packages/modules has been a nightmare to maintain.

* At least with .Net/Entity Framework/Linq you mock out your dbcontext and test your queries with an in memory List<>

https://msdn.microsoft.com/en-us/library/dn314429(v=vs.113)....


> harder to deploy and revert than code

Agree.

> harder to unit test

Disagree. I've implemented unit tests that connect to the normal staging instance of our database, clone the relevant parts of the schema into a throw-away namespace as temporary tables, and run the tests in that fresh namespace. About 100 lines of Perl.

That was five years ago. These days, it's even easier to do this correctly since containers allow you to quickly spin up a fresh Postgres etc. in the unit test runner.
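
A rough sketch of the same idea in Python with pytest and psycopg2 (the DSN, the table names, and the LIKE-based cloning are assumptions about your setup, not a drop-in recipe):

    import uuid

    import psycopg2
    import pytest

    @pytest.fixture
    def scratch_schema():
        # Assumes a reachable staging/test database; the DSN is a placeholder.
        conn = psycopg2.connect("dbname=staging user=test")
        schema = "t_" + uuid.uuid4().hex[:8]
        with conn, conn.cursor() as cur:
            cur.execute(f"CREATE SCHEMA {schema}")
            # Clone only the tables the tests need, structure included.
            cur.execute(f"CREATE TABLE {schema}.users (LIKE public.users INCLUDING ALL)")
        yield conn, schema
        with conn, conn.cursor() as cur:
            cur.execute(f"DROP SCHEMA {schema} CASCADE")
        conn.close()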


It’s even easier and faster when you don’t have to use a database at all and mock out all of your tables with in memory lists. No code at all except your data in your lists.


> easier and faster

It also need not be correct. If you're only ever doing "SELECT * FROM $table WHERE id = ?", you're fine, but a lot of real-world queries will use RDBMS-specific syntax. For example, off the top of my head, the function "greatest()" in Postgres is called "max()" in SQLite. What is it called in your mock?

Mocking out tables with in-memory lists adds a huge amount of extra code that's specific to the test (the part that parses and executes SQL on the lists). C# has this part built in via LINQ, but most other languages don't.

By the way, I see no practical difference between "in-memory lists" and SQLite, which is what I'm currently using for tests of RDBMS-using components, except for the fact that SQLite is much more well tested than $random_SQL_mocking_library (except, maybe, LINQ).
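
For instance (a sketch; the function and query are invented), an in-memory SQLite connection gives the test a real SQL engine, dialect quirks and all, for roughly the cost of a mock:

    import sqlite3

    def top_score(conn, a, b):
        # In Postgres this would be greatest(?, ?); SQLite spells it max(?, ?).
        return conn.execute("SELECT max(?, ?)", (a, b)).fetchone()[0]

    def test_top_score():
        conn = sqlite3.connect(":memory:")
        assert top_score(conn, 3, 7) == 7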


You are correct, if I were doing unit testing with any other language besides C#, my entire argument with respect to not using a DB would be moot. But I would still rather have a module/service to enforce some type of sanity on database access.

The way that Linq works, the fact that it's actually compiled to expression trees at compile time and that the provider translates those to the target at runtime (whether that be database-specific SQL, Mongo queries or C#/IL), does make this type of testing possible.


Yeah, I thought the same thing until I found a colleague who was very fond of calling exec_sql in stored procedures, with the argument being a concatenation of the sp arguments.


I think you mean parameterised queries. Stored procedures are a slightly different thing.


> if html used length-prefixed tags rather than open/close tags most injection attacks would go away immediately.

If this was the case, it would be near-impossible to write HTML by hand. And if you're writing HTML with a tool (React, HAML etc.), the tool could be doing HTML escaping correctly instead. This isn't an issue with HTML, it's an issue with human error.


> This isn't an issue with HTML, it's an issue with human error.

All security issues are due to human error. Those are solved by building better tools.

> If this was the case, it would be near-impossible to write HTML by hand.

If, besides the text form, there were a well-defined length-prefixed binary representation, we could simply compile HTML to binary-HTML, which would immediately make the web not only safer, but also much more efficient (it's scary if you think just how much parsing and reparsing goes on when displaying a web page).


One could build something similar by using a set of "conventional" canonical S-expressions: https://en.wikipedia.org/wiki/Canonical_S-expressions
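
A tiny sketch of the encoding side (netstring-style atoms only, not a complete csexp implementation), which reproduces the (4:html29:...) bytes shown upthread:

    def atom(data: bytes) -> bytes:
        # The length prefix means the payload is never scanned for delimiters.
        return str(len(data)).encode() + b":" + data

    def element(tag: bytes, *children: bytes) -> bytes:
        return b"(" + atom(tag) + b"".join(children) + b")"

    # b'(4:html29:This is not (malicious "boo"))'
    print(element(b"html", atom(b'This is not (malicious "boo")')))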


Prefixes for character length? Is that a better choice than byte size or would it even matter?


If you have an issue with human error and don't design your programmed tool to avoid letting the errors out into the world, then it is the fault of the tool.


I'm not sure what the argument you're putting forth is. All of the HTML-generating tools I'm aware of (barring dumb string templating tools) work sufficiently well and prevent human error.

My point is that there's nothing wrong with HTML. HTML isn't a tool, it's a format for storing and transmitting hypertext. If you're using React or HAML or any of the other HTML-generating tools, you're effectively immune from XSS. I'm putting forth that developers aren't using effective tools (shame on every templating engine that doesn't escape by default), and that calling the web as a platform bad is a bit nonsensical. It's like saying "folks are writing asm by hand and their code has security issues, therefore x86_64 is insecure".


The prevalence of XSS suggest that the web ecosystem has failed to produce the sort of tools you suggest. If such tools actually existed and were good, people would use them and web app exploits would be a curiosity rather than an expectation.

However, no such tool exists. I think there's a deeper issue here: the sheer number of ways you can generate XSS alone, even ignoring the other exploit types, is far beyond what any tool is capable of stopping. Look at one of the XSS holes found by Homakov that I linked to from my article:

http://sakurity.com/blog/2015/06/25/puzzle2.html

The XSS occurs on this line of JavaScript, not HTML:

    $.get(location.pathname+'?something')
That's a simple line of JQuery that does an XmlHttpRequest to the same page that was loaded with an additional parameter. By itself, it is not an XSS. But if the backend is/was running Ruby on Rails (presumably some old version by now) then it could turn into an XSS due to a combination of features that all look superficially harmless.

Show me the tool that would have avoided that type of exploit, without already knowing about it and having some incredibly specific hardcoded static analysis rule.

When I argue that the web is unsafe by design, it's because cases like that aren't rare, they're common. To paraphrase Veekun, scratch the surface of web security and you'll find yourself in a bottomless downward spiral, uncovering more and more horrifying trivia.


> If such tools actually existed and were good, people would use them and web app exploits would be a curiosity rather than an expectation.

I think you're missing another two obvious explanations:

1. Lack of education when picking a tool (copy paste from bad SO answers is a frequent source of bad code).

2. Developers don't care. If it works, why bother wrapping your head the rest of the way around to understand why it works or whether it's secure?

> By itself, it is not an XSS. But if the backend is/was running Ruby on Rails (presumably some old version by now) then it could turn into an XSS due to a combination of features that all look superficially harmless.

Sure, ERB before RoR essentially had security turned off by default (as I noted). And this issue could happen with any other non-web system, turning into any other kind of vulnerability. This isn't a web problem, it's a system security problem. Bad inputs in a "native" app could lead to security issues in the output of apps on other devices. Badly implemented binary data decoders in a desktop application could do far worse than a XSS in the browser.

This problem is misattributed as a "web problem" because there are far more complete systems on the web than there are on nearly any other platform. It's like the tired argument that Mac is more secure than Windows, but Windows has historically had an overwhelmingly outsized market share, making OS X issues far less valuable to attackers.

> When I argue that the web is unsafe by design, it's because cases like that aren't rare, they're common.

I don't disagree that these issues are common, but I disagree that the web is unsafe by design. The web is a platform. If everyone wrote their Python APIs without a framework, I can guarantee they would be littered with security holes. If everyone wrote their own text renderer in C++, just displaying strings on the screen would be a dangerous task.

There are good tools that make it really hard to fuck up on the web. Seriously, try to accidentally have a XSS vulnerability in an isorendered React app with Apollo. The problem is folks that want to jQuery-jockey their way across the finish line and don't understand that they are making terrible mistakes.


I think it's easy to blame developers for the failings of their tools and just say, well, they should be more educated or more serious. That'd be great, but there are too many problems with the web to educate users on how to avoid them. Even skilled developers can't reliably avoid every minefield. Look at the attacks by Homakov that I linked to, or read up on HEIST, or cross-site tracing, or SSRF attacks.

How many developers do you think might have written a web server in their time, or will do in the next 10 years? And how many of them will pass URL components straight through to glibc for resolution, as is the obvious way to do it, and create an exploitable SSRF vuln on their network? How many developers will have even heard of this type of problem?

New ways to exploit weird edge cases and obscure frameworks crop up constantly - it is a full time job even to keep up with it all. At some point you can't blame people walking through a minefield because they keep getting blown up. The problem is the mines.

> this issue could happen with any other non-web system, turning into any other kind of vulnerability. This isn't a web problem, it's a system security problem.

That's just not the case, sorry. Have you ever actually written desktop apps that use binary protocols? It's a web problem:

• It relies on the over-complex and loose parsing rules for URLs

• It relies on unexpected behaviour in one of the most popular web libraries

• It relies on bizarre and unexpected behaviour in XmlHttpRequests

• It relies on the fact that web apps routinely import code from third party servers to run in their own security context.

I have been programming for 25 years and I have never seen an exploit like that before in managed desktop apps using binary protocols to a backend.

> Seriously, try to accidentally have a XSS vulnerability in an isorendered React app with Apollo.

An isorendered React app with Apollo? I think that may be the most web thing I've heard all week ;)

I think I'll take the bet:

https://medium.com/node-security/the-most-common-xss-vulnera...

That article shows the patterns I cover in my article:

• Buffers can get terminated early, even in a theoretically "XSS-proof" framework.

• JSON can get interpreted as code

• Even experienced web developers can't get it right

If you've never written a desktop app before, I'd suggest grabbing IntelliJ or NetBeans and trying it out. TornadoFX is a good framework to try.


Thanks for a mention. Yes, I find web deeply broken. If any big company decides to reengineer it from scratch: I'm available to help for free :)


Well put. I agree with all of that essentially.


"Most injection attacks are due to this; if html used length-prefixed tags rather than open/close tags most injection attacks would go away immediately."

How so? If you allow the user to send arbitrary data, and your handling of that data is where the problem lies, it isn't going to matter whether the client sends a length-prefixed piece of data. You still have to sanitize that data.

HTML, and whether it uses closing tags or not, is pretty much irrelevant to the way injection attacks work, as far as I can tell. Maybe I'm missing something...do you have an example or a reference to how this could solve injection attacks?


If the length is not pre-defined, the input has to be parsed to look for the closing tag. That makes your code vulnerable if the input tricks it into finding the wrong closing tag. But if the length is fixed, you don't have to parse it at all. That would avoid a whole class of vulnerabilities.


True, assuming that programmers don't compute code (HTML, SQL, etc.) from user input and miscompute the length of a fragment.

It would be interesting to see if this idea could work in practice.


A simple example could be the Twitter API's handling for references (URLs/hashtags/at-user mentions) in a tweet [0]. The tweet text is returned in one field, and all references are listed in a different field together with first/last character index within the tweet where that reference was found. You don't need to parse the tweet text yourself, just display it as plain text and insert links where the references say you should.

[0]: https://dev.twitter.com/overview/api/entities-in-twitter-obj...
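
A sketch of what consuming that looks like (the entity shape is paraphrased from the linked docs; treat the exact field names as an assumption):

    import html

    def render_tweet(text, url_entities):
        # Entities are located by character index, so the tweet text itself is
        # never parsed for markup: it is escaped and emitted as plain text.
        out, pos = [], 0
        for ent in sorted(url_entities, key=lambda e: e["indices"][0]):
            start, end = ent["indices"]
            out.append(html.escape(text[pos:start]))
            out.append('<a href="%s">%s</a>' % (html.escape(ent["expanded_url"], quote=True),
                                                html.escape(text[start:end])))
            pos = end
        out.append(html.escape(text[pos:]))
        return "".join(out)

    print(render_tweet("read this https://t.co/x now",
                       [{"indices": [10, 24], "expanded_url": "https://example.com"}]))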


This isn't some theoretical design. Any native application that uses a binary protocol framework like protobufs over TCP to communicate with the backend will benefit from this approach.


> protobufs over TCP

I guess it would have to be protobufs over TLS, and abuse port 443, to get through firewalls from hell.


If you can say, “the next 450 characters are plain text and should be rendered as such”, then even if the text includes script tags (or whatever), they won’t be parsed or executed.


This seems like an argument for strong types. Which is reasonable. But, one could do that with closing tags, too. We already know that relying on a programmer to specify the length of data is prone to bugs (C/C++). And, you can't trust the client to specify the length of data.

I feel like this is conflating two different problems and potential solutions.

I'm not saying injection attacks aren't real. I'm saying that whether HTML uses closing tags or not is orthogonal to the solution. But, again, maybe I'm missing something obvious here. I just don't see how what you're suggesting can be done without types and I don't see how types require prefixing data size in order to work.


There is a .innerText property which works perfectly fine for this if you want to ship your content inside JSON and then plug it in...


> Most injection attacks are due to this; if html used length-prefixed tags rather than open/close tags most injection attacks would go away immediately

No it wouldn't. It wouldn't fix SQL injection, and it also wouldn't fix the path bug the OP linked.

The problem is not length, it is context unaware strings. The problem is our obsession with primitive types that pervade our codebases.


SQL injection is not a web problem. If you create SQL queries based on any untrusted (e.g. user) input on any platform, you have to escape/explicitly type your input.

Injection in general is simply a trust problem. If you can trust all inputs fully (hint: you can't, because nobody can), then you will never have an injection attack.


SQL injection is a problem with SQL, which is similar to the problems with HTML. SQL was created as a human-friendly query language; it wasn't created to be built from strings in a programming language. A proper database API should be just a bunch of query-builder calls, and with such an API SQL injection is not possible.


SQL injection is a problem with incompetent developers. Most languages have simple constructs to make them immune to injections, like parameterized queries.

If you are exposing code to an untrusted, hostile environment (which is pretty much the web), no language that does anything useful will protect you against not caring about security.


Not all queries can be parameterised - I'm not aware of any DBMS that allows for the parameterisation of identifiers (e.g. table and column names) or variadic operators and clauses (e.g. IN() and optional predicate clauses); this is why "Dynamic SQL" is a thing - which comes with the inherent risk of SQL injection.
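
One common mitigation, sketched here with sqlite3 (the table and columns are invented): keep identifiers out of user input entirely by mapping through a whitelist, and keep parameterising the values.

    import sqlite3

    ALLOWED_SORT_COLUMNS = {"name": "name", "created": "created_at"}

    def list_users(conn, sort_key, min_age):
        # The identifier comes from our own whitelist, never from the request;
        # the value still travels as a bind parameter.
        column = ALLOWED_SORT_COLUMNS[sort_key]  # raises KeyError on anything unexpected
        sql = "SELECT name FROM users WHERE age >= ? ORDER BY " + column
        return conn.execute(sql, (min_age,)).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, age INT, created_at TEXT)")
    conn.execute("INSERT INTO users VALUES ('ann', 30, '2017-01-01')")
    print(list_users(conn, "name", 18))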


There are many reasons to create SQL dynamically, but I can't think of a good reason for the table name to come from the client.

Even if you absolutely need to inject a string in a sql query, sanitizing it is trivial. In .net / MS SQL, a simple x = x.Replace("'","''") does the trick. For any other common data type, strong typing should be sufficient to prevent any injection.


Like LINQ


Exactly.


The point is that if you know the length of some data up-front before starting to parse it, you don't have to inspect the data in any way to see when it ends. This means that you don't need to know what the SQL injection looks like and protect against it, or what JS looks like to sanitise your inputs – the problem does go away to a large extent.
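
A decoder sketch (netstring-style framing, not a real HTML replacement) makes that concrete: the only thing ever parsed is the decimal length, so nothing inside the payload can change how it is framed.

    def read_frame(buf, pos=0):
        # "<length>:<payload>" -- parse the length, then slice blindly.
        colon = buf.index(b":", pos)
        length = int(buf[pos:colon])
        start = colon + 1
        return buf[start:start + length], start + length

    data = b'29:This is not (malicious "boo")12:</script>...'
    first, next_pos = read_frame(data)
    print(first)                          # b'This is not (malicious "boo")'
    print(read_frame(data, next_pos)[0])  # b'</script>...' -- still just bytes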


That doesn't make sense.

Obviously nobody is going to be typing length prefixes manually, so our tools are going to do it for us.

Now we're back where we started where you accidentally inline user content as HTML, except now HTML has the added cruft of someone's HN comment solution.


The solution suggested is not-html, a specific thing for web apps, where data is separate.


This doesn't do anything for Bobby DROP TABLE injections, right? The whole thing is a user-supplied slug, there's no source of truth on how long a user's name is. Or am I missing something?


Bobby tables would be considered data. Or should be. And hopefully it would be obvious that it doesn't belong in the code section.

But like you I'm not totally convinced. I think this idea would make it easier for people trying to do the right thing to get it right; but for the blissfully ignorant? Might not help at all. Either way it needs a more fleshed-out spec.


This absolutely fixes Bobby DROP TABLE. The source of truth on how long the user's name is is just the length of the user-supplied slug.

From the XKCD:

    Robert'); DROP TABLE Students; --
The issue here is that

    '); 
Is being interpreted as the end of a string; it assumes that there will be something like:

    format("SOME_FN('%s');",user_name)
going into SQL, and this fools the system.

SQL solves this already with parameterized queries, and many HTML libraries also solve this in various ways, but if it were instead:

    format("SOME_FN(%d:%s)", len(user_name), user_name)
then there is no value you can put in user_name that will let you escape the function call.

Length prefixes are one way of working this, but only scratch the surface of the issue. As others have pointed out, it's also the fact that the control elements are inline with the data.

    <p:25><script:14>somethingBad()
Will still run somethingBad(). You are at least sandboxed to the containing element though, so restricting certain elements to only appear in parts of the HTML tree could prevent this (e.g. if all scripts were disallowed in BODY then merely constraining user-generated content to the BODY would work; right now you could still get hit by someone including </body> in their content).


>The problem is not length

Oh thank God. I'm going to forward this to my wife.


"Buffer? I don't even know her!"

Ha ha. I'll get my coat.


Droll. But wasn't context unawareness part of the problem too?


Definitely. It's a fatal flaw of PHP, and any SQL library that lets you build queries from concatenated strings.


Even when the sender tells you the length of the data to expect, the receiver still needs to read everything that is sent?

Or were senders always going to send true values for length and data?

Really, you can't trust any sender, so the data should be validated anyway.

There have been known attacks where a sender says "here's 400 bytes", the receiver stupidly trusts that length specifier, and the sender sends more (or fewer) crafted bytes and BOOM!

Known good data start and end specifiers, which HTML has, seems a good answer when dealing with untrusted senders (read:everyone)


This might be the biggest dichotomy I've yet seen on HN. An opinion piece voted all the way to the top of the front page (with a clickbaity title, might I add), yet the top comment soundly debunks the article's arguments.

Yeah, this is why everybody clicks on the comments link first.


Mike Hearn (the author of the original article) is a bright dude, who is well-known in several tech circles, which may explain the high ranking for the post here on HN.

I'm not intending to dismiss him outright; he may have an interesting follow-up. I guess I'm just much more optimistic about the web than he seems to be, and more critical of everything that's come before than he seems to be. I think Mike is about the same age as me, and probably has a similarly long history in tech, so I can't really pull the "hard-earned wisdom and experience" card in this conversation. I think I just disagree with him on this, and that's not a big deal.

One of us might be right. (But, I think betting against the web is crazy.)


We can assume that many HN readers are closely involved with Web programming. Either they do it themselves or their wage gets paid because their employers' business depends on Web apps.

If the article is right that it is close to impossible to hire a Web developer who understands all Web security issues and knows how to mitigate them, it does not come as a surprise that there is fierce criticism of the article. It basically says you are doing a hopeless job and your employers' business model is flawed.

I'm not a Web developer, but I find the article very convincing. From the headlines I follow, Web programming changes very quickly and the frameworks change all the time, meaning that smart people are not happy with what is available and keep writing new stuff. Yet I don't think security has been the primary driver for any new framework. They are still parsing text. So let's see whether the author has any fundamentally different approach in his next post (if anybody remembers to read it).

Disclaimer: I work in embedded and our company advertises to be very secure. I know that our security sucks.


The author definitely has valid arguments about the web's security, but I think the rest of his arguments are lazy, anecdotal and inaccurate. Comparing Google Docs to an old version of Office, for example: they are incomparable, firstly because they run on completely different platforms (Office would take a long time to install, while Google Docs is available almost instantly and can be updated almost instantly), and secondly because Docs includes many more benefits that come with being part of the web.

I have myself developed GUI applications using the author's beloved C++ and Qt, and I can admit it's a far better designed and more convenient experience compared to the web, but it's hardly possible to achieve the same amount of flexibility in UI/UX design that is available on the Web. I think the fact that things are changing so fast, that standards are badly designed (at least initially) and that there are so many inconsistencies is all only because the web is a fast moving platform that requires the consensus of many players to happen and move forward. Also the amount of commercial interest and developers working on the web is incomparable to other platforms, hence the fast moving nature.


> flexibility in UI/UX design that is available on the Web

If you take advantage of that flexibility to create a UX that's very different from the standard widgets, it's likely to be inaccessible to blind users with screen readers. Check out this rant on HN from a blind friend of mine (a few paragraphs in for the part that's most relevant to this thread):

https://news.ycombinator.com/item?id=14580342

As far as I know, the most accessible cross-platform UI toolkit for the desktop is SWT. It uses native widgets for most things, and actually gasp implements the host platforms' accessibility APIs for the custom widgets. But, I can hear it now, somebody will say they hate SWT-based applications because they reek of Windows 95. Oh well, fashion trumps all, I guess.


> The author definitely has valid arguments about the web's security, but I think the rest of his arguments are lazy, anecdotal and inaccurate. Comparing Google Docs to an old version of Office, for example: they are incomparable, firstly because they run on completely different platforms (Office would take a long time to install, while Google Docs is available almost instantly and can be updated almost instantly), and secondly because Docs includes many more benefits that come with being part of the web.

But even Google knew not to depend on the universality of web apps on mobile - they have native apps for both Android and iOS. Aren’t we already at a tipping point where most web access is done on mobile devices?


To somewhat counter all the negative comments here - I read this article and agree pretty much 100% with every sentence in it. There are probably more people who agree with the post - hence the upvotes.


Yes, I agreed with the entire article as well. I didn't see anything controversial or exaggerated about it.

Edit: Ok, maybe I could have predicted that lines like "HTML 5 is a plague on our industry" would ruffle some feathers. I guess I like a little snark in my criticism.


This kind of well thought out constructive criticism leads to interesting discussion and eventually improvements, even if I don't necessarily agree with it - hence the upvotes. Dissent should be welcomed, especially when it's in a well-meaning tone.


The article itself is click bait since the real meat is to be expected in the follow-up article.


Being the top comment only means it has more recent upvotes than other top-level comments, not that it has some special significance that should be weighted more heavily than the article. Back when they were still displaying points, you could see that the time of the upvote mattered almost as much as the actual upvote itself, meaning the comment with the most upvotes was not always the top comment.


Maybe this reflects insecurity (in the psychological sense) from the community of people who love the web? That in itself is interesting.

Fwiw, long live the web. It's imperfect, but it's open. I'll take chaotic freedom to tight control any day.


> I'll take chaotic freedom to tight control

FWIW, I'd take tight control if it was in pursuit of humanitarian values, such as accessibility for people with disabilities, rather than a company's bottom line. The chaotic freedom of the Web isn't very good for accessibility. Yes, yes, accessibility is possible, but in practice, very often it doesn't happen. See this rant on HN from a blind friend of mine (yes, the same one I posted elsewhere on the thread, but it drives the point of this comment home):

https://news.ycombinator.com/item?id=14580342


Yep, also on speed: it seems to me that the microsoft office suite for instance slows down every generation despite only having minor improvements and not actually being that different now than from 95. The nature of developers is that they will use whatever resources that they have. Faster computers don't necessarily mean faster applications but faster software development cycles from bigger teams with less need for the discipline and rigor that was required before.

In some ways we've traded speed for productivity.


Some would argue we've traded speed of execution for snake oil (so-called speed of "productivity").

This tweet is an interesting visual that makes the same point: https://twitter.com/TheoVanGrind/status/888850519564984322


Software has become increasingly complicated over time. Aside from adding new features, many companies have improved their efforts to provide accessible applications to international audiences.

Let's not forget we've drastically increased security by writing applications in safer languages.

Oh, and newer applications tend to support a far wider variety of device types, displays, inputs, etc.

Developers should definitely be investing a lot more effort into improving the status quo, but it's unfair to claim stuff is slower without improvements.


> it's unfair to claim stuff is slower without improvements.

I claimed no such thing. You're arguing against a statement I never made. Isn't that what's called a straw man argument?


Sorry, it wasn't my intention to misconstrue your comment.


> "the microsoft office suite for instance slows down every generation despite only having minor improvements and not actually being that different now than from 95"

I can't comment on most of the Office suite, but Excel evolved quite a bit since 95. Tables, PowerBI, Apps for Office, etc... If your needs are basic enough then even VisiCalc will do the job, but new features do make an impact for more demanding users.


That's not the point though. The example given in the article was Google Docs, which has the same UI paradigm as Word. Under the hood it's obviously massively different, with real-time collaboration and constantly up-to-date syncing.

So, the reasoning is that the UI is fundamentally the same as (or worse than, if not done right) native UI from the 90s, yet it hasn't had a massive speed increase, which seems wasteful.

But modern UI in Office is only an evolution of what was there in the 90s and hasn't changed fundamentally either yet it doesn't feel any faster.

UI is only a small part of an app, a well designed app will have most of the work performed outside of the UI thread and it shouldn't feel any slower than a native implementation. My thoughts are rendering speed isn't the issue but application design.


> But modern UI in Office is only an evolution of what was there in the 90s and hasn't changed fundamentally either yet it doesn't feel any faster.

Sure, and Office in the 90s didn't feel any faster than the word processing I was doing on an Apple II+ in middle school. This is because the people buying (and building) software care about other things than processor efficiency. If it's generally fast enough for their normal use, they won't switch to a competitor.

The notion of "wasteful" here is in terms of something like RAM usage or processor instructions. But the correct measure is user time, including the number of user hours of labor needed to buy the device. The original Apple II cost 564 hours of minimum wage labor, and you were up over 1000 hours if you wanted a floppy drive and a decent amount of RAM. Today, a low-end netbook costs 28 hours of minimum wage labor.

Suppose you managed to put on that netbook something with the efficiency of Apple Writer or Office 4.0. Would anything be better? No, because the spare cycles and RAM would go unused. They would be just as wasted. No significant number of user hours would be saved. Or, alternatively, the in-theory cheaper computer they could buy would save them very few working hours.

As long as the user experience is as good, then the hardware notion of "wasteful" is a theoretical, aesthetic value, not a practical one.


You are ignoring battery life, which is a useful consideration on laptops, which appear to be the majority of PCs.

You are also ignoring the notion that a user may want to run a variety of apps, and not want to close any of them or have any of the lot swapped out, and pretending the hit on performance, resources, and battery life isn't cumulative.


I'm not ignoring them. I just didn't mention them in this comment. They fit in the same rubric.

A user can run a few things even on the low-end netbook. Tabs are cheap. And if they hit the limits of their machine, they can either pay in a reasonable number of user-minutes to actively manage resources or a modest number of labor-hours to get something beefier.

I personally would like to see things better optimized. After all, I started programming on a computer with 4K of RAM. But I recognize that there is very little economic incentive to do so.


Isn't it kind of offensive to suppose that billions of users should pay more money so that hundreds of developers can use less efficient tools to build apps?

Isn't this backwards?


If those are the only factors and the numbers fall in particular ranges, sure. Otherwise, no.

Try doing the math here. How much cheaper would a netbook get if every single developer coordinated to reduce RAM and CPU usage? $5? Maybe $10? Looking at market prices, old RAM and CPUs are cheap. They consume basically the same physical resources as new RAM and CPUs, so price competition for not-the-best hardware is fierce.

Now ask those people if they'd pay $5 or $10 more for assorted new software features. Any features they can think of. And keep in mind that in that price range, people are paying $10 more to pick the color of their computer.

So sure, it offends me a little, because I like optimizing the things I pay attention to, like RAM usage. But if instead I optimize for the sorts of things users care about, especially as reflected by what they'll actually pay for, it becomes pretty clear: users don't care about the things I do.

So then the moral question becomes for me: who am I to impose my aesthetic choices on the people I'm trying to serve?


Trivializing bad software that is slower even on devices orders of magnitude faster, by trying to equate it to netbook prices, is a particularly bad methodology of comparison.

This is especially true as people are promoting everyone moving to a platform that is substantially worse.

How about getting more performance and battery life out of the same machine, which affects more than just netbook users?


I am not trivializing anything. I don't like bad software any more than you. However.

You may have noticed that we are in the technology industry. That means the final measure of our work is economic. The final judges of our work are our customers.

If you believe that X is better in our industry, you must be able to demonstrate that betterness in terms of user economics, in terms of user experience. You haven't yet, and you seem unwilling to even grapple with my argument in those terms. Are you planning on trying?


I dunno. When I was overseas I had a Kindle which lasted for something like two weeks between charges; that was awesome. Much better than my laptop which I had to charge every day for hours.

I wouldn't mind a true low-power laptop which only needed a charge twice a month.


E-ink displays only draw power when changing pages, plus whatever the battery inherently loses on its own. If you only read 500 screens of text that month, then it only consumed a trickle of battery x 500. Your laptop screen consumes power every second it's on, and you also ask it to do much more than render text.

What you propose is interesting nonetheless, though. What is the most battery life that can reasonably be packed into a device that is modest but still useful?


Sure, but that won't come from people programming differently. The laptop backlight alone is a few watts. If your battery is 40 watt-hours, you're not going to get to 2 weeks of usage no matter how little the CPU gets used.


Yes, so it's pointless for the author to say that a problem with Web Apps is that they're slower than native apps. It's irrelevant nowadays, and a well designed web app using modern techniques should not feel any slower to an end user than a desktop app; in fact, with the advanced rendering engines within modern web browsers, they can feel more responsive and more usable than native.


I feel like this advice is coming from some alternate universe where this is actually so.


> "But modern UI in Office is only an evolution of what was there in the 90s and hasn't changed fundamentally either yet it doesn't feel any faster."

Evolution of a UI isn't as important as evolution of the features the UI exposes. As for whether it feels any faster, that depends on what you're doing. To give an example, Excel functions can be calculated using multiple CPU cores, which AFAIK wasn't a feature of Excel in the 1990s. You'll only see that speedup if you're working with a large enough volume of formulas. Measuring speed by UI speed alone doesn't get you very far.

All that being said, you won't find me disagreeing with the fact that desktop apps are bloated (web apps even more so). I've experienced responsive desktop apps running on a 7.14MHz CPU. The fact that we've thrown away most of the hardware improvements since the 1980s should be clear to anyone paying attention.


That's precisely the point. The author of the article was complaining that web applications are slow and compared it to Windows 95.

And my point is that web apps have a lot of features that didn't exist back then, and because of feature additions Office and other native applications don't exactly feel snappy either.


> "That's precisely the point"

That was the general point, but I was responding to a side comment that I disagreed with.

> "because of feature additions"

Adding features does not require slowing an application down. The reason modern apps (desktop and web) are slow is to do with inefficient use of computing resources, which has very little to do with available features.


That's why I said:

> UI is only a small part of an app, a well designed app will have most of the work performed outside of the UI thread and it shouldn't feel any slower than a native implementation. My thoughts are rendering speed isn't the issue but application design.

at the start. :) So, we're in agreement.


Can you run web apps in a multithreaded environment? UI remains the largest overhead in a web app in my opinion..

Or, how much speedup would you estimate, if we convert all GoogleDocs functionalities into Word97? I'd estimate 1000 times. :) Or perhaps, the computation power for drawing a cursor alone will far exceed the whole Word97.


> Can you run web apps in a multithreaded environment? UI remains the largest overhead in a web app in my opinion..

Yes, you have webworkers for multi threaded development. They're basically independent applications which run on different threads and you pass messages (which are simply objects) between them. The browsers themselves are also moving their layout and rendering engines to be multithreaded.

A well designed app would do very little on the UI thread and would pass messages between the UI thread and the webworkers; it would also spin up webworkers on demand to offload work. It's not as easy to develop in as some environments, but it's also fairly straightforward once you make the effort to do it.

If I was designing react for instance I'd have all the virtual dom / diffing stuff being handled by a webworker and then would only pass the updates through to the UI when computation is completed.
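
As a minimal sketch of that message-passing pattern (heavyComputation, applyUpdates, currentState and worker.js are placeholders here, not real libraries):

    // main.js - keep the UI thread free by handing heavy work to a worker
    const worker = new Worker('worker.js');
    worker.onmessage = (event) => applyUpdates(event.data); // cheap DOM patching only
    worker.postMessage({ cmd: 'diff', payload: currentState });

    // worker.js - runs on its own thread and has no DOM access
    self.onmessage = (event) => {
      const result = heavyComputation(event.data.payload); // e.g. virtual-dom diffing
      self.postMessage(result); // structured-cloned back to the UI thread
    };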

> Or, how much speedup would you estimate, if we convert all GoogleDocs functionalities into Word97? I'd estimate 1000 times. :) Or perhaps, the computation power for drawing a cursor alone will far exceed the whole Word97.

Whatever the speedup would be, users would likely not notice it, or would only notice a slight improvement.

And yes, drawing the cursor as a 1px wide div is computationally intensive. I guess you're referring to that article posted on HN a while back about VS Code using 13% of the CPU just to render the cursor? :) Doing stuff outside of contenteditable is not ideal for writing applications, as you lose a lot of system settings (like keyboard mappings, cursor blink speed, etc.) that the browser automatically translates to the built-in cursor.


> Yes, you have webworkers for multi threaded development. They're basically independent applications which run on different threads and you pass messages (which are simply objects) between them. The browsers themselves are also moving their layout and rendering engines to be multithreaded.

Yes I'm actually referring to this. The programming model. Workers are great if you can divide and conquer the problem and offload (exactly what you have mentioned). But the messaging payload would be high under some circumstances, when you have to repetitively copy a lot of data to start a worker. I don't have hands-on experience with web workers, but I think it is unlikely to solve the messaging overhead without introducing channels/threads. Workers are more like processes, and currently they don't have copy-on-write. Of course we may see improvements over time, but this is to gradually reinvent all the possible wheels from an operating system, in order to be as performant as an OS.

> A well designed app would do very little on the UI thread

I partially agree. It may do little, but in turn, the consequence may be huge. This is because the DOM is not a zero-cost abstraction of a UI. It does not understand what the programmer really wants to do if, say, he/she is constantly ramping the transparency of a 1px div. Too much happens before the cursor blink is reflected onto a framebuffer, compared to a "native" GUI program. I think it would be very helpful if the DOM could be declarative as in XAML, where you can really say <ComplexGUIElement ... /> without eventually translating it into barebone bits. Developers are paying too much (the consequence) to customize this representation.

> Whatever the speedup would be the speedup the users would likely not notice or will only notice a slight improvement.

There won't be a faster-than-light word processor, but I really want it to: 1. Start immediately (say 10ms instead of 1000ms) when I call it up. 2. Respond immediately when I interact (say 1ms instead of 100ms). 3. Reduce visual distractions until we get full 120fps; don't do animations if we don't have 120fps. 4. Keep satisfying the above requirements when I upgrade to a better computer.

The speedup will guarantee 4) and make the performance scalable. But currently web apps lag no matter whether I use a cellphone or a flagship workstation. This clearly indicates that the performance of a web app does not scale linearly with computation power, and this is not about how much JavaScript is executed (that part will scale, I believe).


> But modern UI in Office is only an evolution of what was there in the 90s and hasn't changed fundamentally either yet it doesn't feel any faster.

Sorry, but this is absolutely untrue. The Ribbon UI introduced in Office 2007 was a massive change functionally and visually. You went from a static toolbar that would just show and hide buttons to live categories which not only resize but change their options and layout as you customize or resize the window. There are now dropdowns, built-in input fields, live previews in the document as you hover over tools and options, and more.

Same for the Backstage UI introduced in Office 2010 for saving files, viewing recents, and other file and option operations. You have full screen animations and interactions.

Hell, Microsoft even made the text cursor fade in and out instead of blinking, which needs more processing power.

Could Microsoft have optimized it more? Yes. But they definitely have added tons to it since the 90s and even mid-00's to justify why it's slower.


But the original article was saying that the UI paradigms are the same yet the interface is slower. The UI paradigm on the web is as far removed from 90s Windows as modern Windows is, if not more so.

All these points are no different from how web tech is evolving UI, so they should be discounted the same way that web technology is.


Excel hasn't evolved at all since 2003. They added a couple of new chart types and changed some colors, but functionally they haven't made any significant change. In fact some grey controls have literally not been updated in 20 years (try clicking the fx button near the formula bar, with the same broken search feature it's had since the 90s).

There are lots of things they could do. Linking data between spreadsheets or between Excel and PowerPoint sucks (a significant part of the user base needs to prepare decks and reports that contain lots of charts and numeric tables).

They could learn from Apple's approach with numbers where a worksheet is a canvas on which you can place multiple tables or charts or diagrams, which makes a lot more sense than the single-grid-per-worksheet approach (think of having to display two tables one above the other: you are forced to align columns of different widths, and how does the top table overflow?).

Users who need to script or create UDFs are stuck with a VB6 editor that hasn't seen any update in 20 years and an antiquated language.

I could continue the list for a while. These are basic core features. There might be 1000 people in the world who use power BI, and only because their IT dept set it up for them. But there are millions of users whose lives would be made easier by the suggestions I made above.


> "They could learn from Apple's approach with numbers where a worksheet is a canvas on which you can place multiple tables or charts or diagrams"

You can do this with Excel also. When was the last time you used Excel?

> "There might be 1000 people in the world who use power BI, and only because their IT dept set it up for them."

The Power BI features in Excel come ready to use out of the box. Clearly you've never used them, but they're by far the best new features in modern Excel. Any power user of Excel that isn't exploring them is missing out.


> You can do this with Excel also. When was the last time you used Excel?

How do you do that then?


Mark separate areas on the same worksheet as tables, set chart location to be the same worksheet as the tables. If you're bothered by the gridlines those can be turned off. Not much to it really. You can also create dashboard-style content with PowerView (which is one of the PowerBI features built into Excel).


No need to be condescending, I am a heavy excel user, possibly more than you.

Tables may be fine in Excel for data but useless for any custom logic, which is what I use Excel for the most. I am not aware that tables overflow with a scrollbar like Apple's approach allows. If you need to add more rows to the top table, the bottom table goes off screen. If the top table contains a very wide column, the bottom table needs to have the same column width. These are all inconveniences that Apple's approach solves (and they wouldn't be very hard to implement in Excel while preserving backward compatibility). I don't see how Excel tables solve any of that.


> "No need to be condescending, I am a heavy excel user, possibly more than you."

Believe what you want.

> "I am not aware that tables overflow with a scrollbar like Apple's approach allows."

If scrollbars matter to you then you can use Power View, which is one of the Power BI features available in Excel. To get a better idea of how it works, take a look at this short video:

https://m.youtube.com/watch?v=f6QS13RtrmM


Visicalc? How about a web app - Google Sheets? It gains features every day and it's eminently accessible.

Numerous similar apps depending on what online platform you prefer.


> "Visicalc?"

VisiCalc is the first spreadsheet program:

https://en.m.wikipedia.org/wiki/VisiCalc

The point I'm making by bringing up VisiCalc is, if your needs are basic enough, any spreadsheet program will do the job, even the first one. You'll only understand why the more modern desktop spreadsheet programs are more advanced if you have a reason to use the newer features.


There's nothing wrong with VisiCalc. It's incredibly basic (even for the time), but I still have a copy on my computer - I admit, though, that I use Lotus 123 more often.


Power users are the vector to spread Microsoft-only spreadsheet viruses.

This is what gets lost on most people.

The power users create some "nifty" spreadsheet that runs some "important" piece of a business. That "nifty" spreadsheet now requires Microsoft Excel and forces everybody in the company to have a copy if they want access to it.


Those power users are covering for the lack of resources and/or knowledge in a company's IT department. Excel may not be the best tool for long tail apps, but there's no arguing with its ability to quickly build useful tools. The power user that you see as spreading a virus is essentially successful as they can innovate more quickly than anyone else in the company. If open source tools gave this power user the same ability to rapidly innovate, then they should be made available to them (along with training on how to use this software).


Just imagine how slow MS Office could be as a web app.


You don't have to imagine. MS Office is available as a web app.

https://www.office.com/


Imagine? With wasm, I'm afraid you won't have to imagine in a few years...


Kinda like Google docs you mean?


No, Google has far fewer features. They didn't omit anything I miss, but it's lighter than Office.


> Every negative thing said about the web is true of every other platform, so far. It just seems to ignore how bad software has always been (on average).
> "Web development is slowly reinventing the 1990's." The 90s were slowly reinventing UNIX and stuff invented at Bell Labs.
> "Web apps are impossible to secure." Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.

I don't see how this is an argument in favor of the web. If anything, it reinforces the accusation TFA made against it even more.

If "The 90s were slowly reinventing UNIX", then why would recreating the 90s today be a good thing?

If the 90s "slowly reinvented UNIX", then the correct thing to do would be for the web today to either be a fully modern 2017-worthy technology, or at least take its starting point from where the 90s ENDED, not re-invent the 90s.


"If the 90s "slowly reinvented UNIX", then the correct thing to do would be for the web today to either be a fully modern 2017-worthy technology, or at least take its starting point from where the 90s ENDED, not re-invent the 90s."

Since when has an inexperienced mob of people ever done the correct thing on the first try?

And, yet, the mob has continued the very fine legacy of those 90s (and 80s and 70s) software developers in pushing software into more places it's never been before. Somehow, it's working, despite the relative ignorance and stupidity of the average developer (myself included) in their understanding of history.

I think I'm being misinterpreted as saying the web is great because it has no flaws. Which is not my intention. The web has many ugly flaws. The web is great because of what it does despite those flaws. And, also, a lot of those flaws come down to inexperience, which we can't cure with technology. It seems likely it can only be cured by making the same dumb mistakes a few times until it becomes collective wisdom that it was a dumb mistake...the kind that gets beaten out of programmers very early during their learning process.

I guess I'm just more optimistic about the web-as-platform than most. I see all its flaws, I just don't think they should result in a death sentence.

But, if you show me something better, I'll gladly participate.


Better for what? The web is getting worse and worse for the users. Before this JavaScript craze it was predictable, bookmarkable, usable and reasonably performant.

Now it's slow, burns your battery, is full of ads/tracking and anti-patterns like infinite scroll or SPAs, and view source is useless.

For me, a site like HN or amazon (with some reservations) is the pinnacle of what the web is able to offer.


>Since when has an inexperienced mob of people ever done the correct thing on the first try?

Only web standards are not created by an "inexperienced mob of people" but by large multinationals, multiple CS PhDs, and seasoned developers.

And if we consider every generation of new developers an "inexperienced mob of people", then we have absolutely no claim to ever being called an industry and engineers.

>And, yet, the mob has continued the very fine legacy of those 90s (and 80s and 70s) software developers in pushing software into more places it's never been before. Somehow, it's working

Working in what? Mobile apps, counting in the millions, have actually "pushed software into more places it's never been before", and most of those are usually native, or done with non-web technologies (of course web stacks encroach there too). For most people, those mobile apps on their smartphones are how they interact with the internet most of the time, not the www, even if they have a laptop at home or at work. For younger people even more so.

>But, if you show me something better, I'll gladly participate.

Better things come from people feeling the need to create them. They don't appear on their own, and people migrate to them. Else people can be stuck with the same BS for decades, centuries or millennia (consider dynasties ruling for centuries before the people of some country attempt to bring them down in favor of democracy).


This is a WEB APP https://3d.delavega.us using three.js. It can run on most iOS and Android smartphones, most Windows and macOS machines, and Linux computers.

It is likely to run on over a billion devices, with no installation required. Can a non-web, native app be better than this?



My alarm siren went off when the commentary started critiquing the “complexity” of Google docs as compared to Windows explorer circa 1998.

Complex things are often complex because the work that we do as humans is, well, complicated.

A journey map painstakingly built by an epic designer and smart person at large may design the ultimate document template that addresses every need that you are aware of. Then I come along and want something else.

When the answer is that everything is wrong, the question is usually wrong.


Your alarm shouldn't go off, because the example is very apt. The article compared the UI offered by both, and they are indeed directly comparable.

As for the work Google Docs does, come on, it's a glorified Markdown editor; it loses in any kind of comparison with Windows 95-era Word.


Windows 95-era Word didn’t have to handle real-time collaboration over the Web between an arbitrary number of users.


Real time collaboration is an awesome feature and essentially what justifies Google Docs' existence, as it's behind Word in practically every other area (though I find Sheets more intuitive than Excel; that might just be familiarity).

The technology to do RTC is not particularly resource intensive on the client side. Nor is it web specific: the native Android versions of Google Docs don't use the web but they do support RTC.

RTC is enabled by an algorithm called "operational transform". It's a very clever algorithm that is rather tricky to implement properly, but it doesn't involve loading huge datasets or solving vast numbers of equations. It's ultimately still just about manipulating text. You could have implemented the client side part of it on Windows 95 without trouble, I'd think. At least I can't see any obvious problems with doing so, assuming a decent Windows 95 machine like one with 8 or 16mb of RAM.
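
To give a flavour of the core idea, a toy transform for two concurrent inserts might look like this (purely illustrative; real OT also handles deletes, tie-breaking and server reconciliation, and this is not how Docs itself is written):

    // Two users made concurrent inserts against the same document state.
    // transformInsert() adjusts B's operation so it can apply after A's.
    function transformInsert(opA, opB) {
      if (opA.pos <= opB.pos) {
        return { pos: opB.pos + opA.text.length, text: opB.text };
      }
      return opB;
    }
    // doc = "helo": A fixes the typo, B appends at the end, concurrently.
    const a = { pos: 3, text: 'l' };      // "helo" -> "hello"
    const b = { pos: 4, text: '!' };      // "helo" -> "helo!"
    const bPrime = transformInsert(a, b); // { pos: 5, text: '!' } -> "hello!"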

OT does, however, require the entire app to be built around the concept. You can't easily retrofit it to an existing editor.

The reason Word 95 didn't have Docs-style realtime editing is simply that back then networks were kind of rare, slow and crappy, and word processor designers didn't know about the OT algorithm because it was still being researched in academia.

The real question is - if we had a better client side platform on laptops and desktops, one that supported some of the best features of the web without the rest, would Docs RTC still be possible? Surely yes!


No, it didn't. But is it so complex it requires 10x+ the resource use? I don't think so.


You can say the same about Windows 2016. Recommended RAM has gone up more than 100-fold, from 16MB to 2000MB. Developers use the resources made available to them, that has nothing to do with the web.


No one is writing web apps using JavaScript because they're "using the resources available" to them in the form of powerful hardware. They're using the only TOOLS available (JavaScript). The problem is we just don't have a better choice, at least on the front-end.


How do two people edit a document in Windows-95-era Word?

LaunchPlan2017Q4Final4Draft1Beta.doc with Track Changes on.


Text editors are much more complex than you think.


Which is a point in favour of Word, not Google Docs.


Every generation of programmers _does_ learn from previous work, and every new platform starts from scratch learning the lessons, and incrementally evolves. A Hello World GUI on Windows 95 will require calling into a complex and undecipherable Win32 API; a Hello World on the web needs one simple line. Platforms do get frozen over time (like the Linux kernel), and people use it to build useful things with low effort. The Linux kernel is a result of incremental evolution: Linus proudly says that it's not designed.

There are severe shortcomings in all platforms that have aged. Why does power management in Linux suck so hard? Why can't we have networked filesystems by default (NFS is quite bad btw)? Until somewhat recently (~7 years), audio on Linux was a disaster: "Linux was never designed to do low-latency audio, or even handle multiple audio streams (anyone remember upmixing in PulseAudio?)". What the hell are UNIX sockets? Is there no modern way for desktop applications to talk to each other? (DBus was recently merged into the kernel). Why doesn't it have a native display engine? (X11?)

Today, it's more fashionable to criticize the web, since majority of the industry programmers endure it. Sure, there are some "simple" things that are just "not possible" with the web (everyone's pet peeve: centering). Yes, you lose functionality of a desktop application, but that's the whole point of a new platform: make what people really need easy, at the cost of other functionality. For an example, see how Emacs has been turned into a web app, in the form of Atom? You don't have to write hundreds of lines of arcane elisp, but you also don't get many features. Atom is a distillation of editor features that people really want.

I don't understand the criticism of transpiling everything to Js; you do, after all, compile all desktop applications to x86 assembly anyway. x86 assembly is another awful standard: it has evolved into ugliness (ARM offers some hope). Every platform was designed to start out with, and evolved into ugliness as it aged. We already have a rethink of part of the system: wasm looks quite promising, and you'll soon be able to write your Idris to run in a web browser.


Instead of doing

    alert("Hello World")
We would do (VB)

    MsgBox("Hello World")
Or maybe (Delphi)

    ShowMessage('Hello World');
Only hard core C devs bothered to use Win32 directly.


Look, if we start comparing today's way of writing end-user applications to Delphi we're just going to sit here crying all the time. It was a beauty and a blessing, and I've never seen any way to develop GUI applications surpass the Delphi Visual Component Library.

Once upon a time, this was a solved problem.


I agree - nothing new. Reason: next generation of developers has to make the same mistakes as the previous generation. I mean why wouldn't they? It's not like there is any institutional memory in this profession.


there's the opposite of an "institutional memory" - kind of a continuous revolution where we must forget, repeat and forget and repeat.


Yeah. Like "anti-memory".


> Most web apps are built in languages that don't have buffer overrun problems.

The author is using "buffer" in a different sense than you are. You're thinking of a malloc'd buffer. The author is using "buffer" more abstractly, to refer to a data segment, such as a JSON or HTML string, or a string of encoded form data. His point is that the latter type of "buffer" has no declared length and needs to be parsed in order to determine where it ends, and that as a result it is subject to problems that one can term "buffer overrun" by analogy with the traditional C scenario, in which one obtains a pointer to memory one should not have access to.


"Most web apps are built in languages that don't have buffer overrun problems."

You misunderstood the author's point. Things like SQL injection are really equivalent to buffer overflow attacks -- data creeping into the code because of poor bounds checking.


But SQL injection isn't a thing unique to the web, right? Like, SQL injection is totally a thing with C/C++ as well. Maybe focus on one problem at a time.


SQL injection is to do with SQL, a text based protocol for expressing commands to a server. Like all text based protocols, trying to combine it with user-provided data immediately takes you into a world of peculiar escaping rules, magic quotes and constant security failures.

The fix for SQL injection is to work with binary APIs and protocols more. Parameterised queries are the smallest step to that world, where the user-supplied data rides alongside the query itself in separated length-checked buffers (well, assuming you're not writing buggy C - let's presume modern bounds checking languages here). They aren't combined back into text; instead, the database engine itself knows how to combine them when it converts the SQL to its own internal binary in-memory representation, as IR objects.

Another fix is to move entirely to the world of type safe, bounds checked APIs via an ORM. But then you pay the cost of the impedance mismatch between the object and relational realms, which isn't great. I will provide a solution for this in part II.
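
For illustration, a parameterised query looks roughly like this in JavaScript, using node-postgres as one example driver (just a sketch; placeholder syntax varies by database, and this isn't a preview of part II):

    // The untrusted value travels alongside the SQL, never spliced into it,
    // so "Robert'); DROP TABLE Students; --" is just an odd-looking name.
    const { Pool } = require('pg');
    const pool = new Pool(); // connection settings come from the environment

    async function findStudent(name) {
      const result = await pool.query(
        'SELECT * FROM students WHERE name = $1', [name]);
      return result.rows;
    }

    // The vulnerable version builds the command out of text instead:
    //   pool.query("SELECT * FROM students WHERE name = '" + name + "'");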


"Buffers that don’t specify their length"

Most if not all webapp security problems come from attacks on servers, not clients...

It's just one of those assertions that throw a dark shadow over the whole article. But "Flux is Windows 1.0" is my favorite.


> Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.

Many programs in the 90s, especially of the simple CRUD type, were written in VisualBasic and other RAD tools, as they were known at the time, and later Java.

> Is this really a common problem in web apps? Most web apps are built in languages that don't have buffer overrun problems.

It's not buffer overrun in the "undefined behavior" sense, but rather problems relating to the need to parse text data, which can be tricky and susceptible to injection attacks.


"Many programs in the 90s, especially of the simple CRUD type, were written in VisualBasic and other RAD tools, as they were known at the time, and later Java."

And, we complained endlessly about how slow and bloated those programs were. So it goes.


I don't remember complaints about Delphi or VB being particularly slow.

Java apps, on the other hand, were slow. Ironically, today we have so many languages producing slow code that Java is considered fast.


As an iOS developer, I would say the state of web development does not hold true for iOS. Sure, it slowly evolved to its current state, but the frameworks are much more thought out than their web counterparts.


"As an iOS developer" is another way of saying "I can't see past the walls of Apple's walled garden".

Seriously, the reactive frameworks (any really: React/VueJS/Preact/...) used in tandem with a separate state container (Redux, Vuex...) are a much better "thought out" approach to application programming than anything in the Cocoa/Swift world.
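
For anyone who hasn't seen the pattern, here's a hand-rolled approximation of the "single state container plus pure reducers" idea (just the shape of it, not the real Redux API):

    // All state lives in one store; it changes only via dispatched actions
    // run through a pure reducer, so every change is explicit and traceable.
    function createStore(reducer, initialState) {
      let state = initialState;
      const listeners = [];
      return {
        getState: () => state,
        subscribe: (fn) => listeners.push(fn),
        dispatch: (action) => {
          state = reducer(state, action);
          listeners.forEach((fn) => fn(state));
        },
      };
    }
    const counter = (state, action) =>
      action.type === 'INCREMENT' ? state + 1 : state;
    const store = createStore(counter, 0);
    store.subscribe((s) => console.log('state:', s)); // a view would re-render here
    store.dispatch({ type: 'INCREMENT' });            // logs "state: 1"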


> Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.

Back then the compilers sucked. They would take complete crap code and it would still work. They were like browsers are today. (From my experience going through one old MUD codebase.)

Today the song is different. Not only will the compilers warn you of many things, there are even tools for static analysis (and dynamic). So the argument that C (and even the more complex C++) is inherently insecure holds much less weight (just go run old code through a static analyzer, or a normal compiler for that matter).

That said, there's only one way to write a "secure program", and that is formal verification.

People who talk with a serious tone should back up their claims; at least that's my opinion.


C and C++ are definitely not as secure as a language with automatic memory management. OOB reads/writes, type confusion, and UAF are all very real problems in C and C++.

Static analysis helps, but it can't catch everything. I work on a modern C++ codebase, and we still face all of these issues.

Formal verification is infeasible for most software projects, but they can get guaranteed type/memory safety by using a language proven to be safe. C/C++ can't give you that, but JavaScript might be able to.


Not as secure, but nowhere near the death traps that some (many?) describe them as.

Things that are written in C these days are usually written in C for performance reasons. FFMPEG would not have even close to the performance it has if it was written in a memory safe language instead of C and assembly. I doubt that a magical compiler (and/or language) will appear in my lifetime that can compile high-level code into performant machine code, especially when it comes to memory management. (Note that C also has advantages other than performance.)

JS doesn't even have a proper specification, let alone a bug-free interpreter/compiler.

EDIT: AFAIK verifying memory access is part of formal verification, where memory is also modeled mathematically.


C and C++ simply weren't designed with safety in mind. Even with a good compiler and static analysis, security-critical bugs will slip through the net that simply wouldn't happen in other languages. It's not so much a question of whether it's possible to write safe C, but whether it's natural or easy. C is unsafe by default.


People always shit on C for security, perhaps rightly so. But I would like to point out that 99% of everything out there has C or C++ at its base. CPython is C, Java is C++, Rust is based on LLVM which is C++. Yes, implementing your user-facing application in some non-C language may improve security, but you are still depending on C when you do so.

So is C the problem, or is it modern CPU architecture? C has stuck around for so long because of how close it is to assembly language. There will always be a need for a language that is one layer above assembly, and currently assembly is incredibly hard to secure.


Historical baggage. During the 90s, C and C++ were still two options among many, but as in every market, only a few products win out.

C is close to PDP-11 and 8/16-bit computer assembly; it has hardly any direct mapping to modern CPUs.


D


You are missing one of the main points of the author.

It is possible, in theory, to write a secure C/C++ application; however, it is not even possible in theory(!) to write a secure web application.


> Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.

You know that most of today's OSes are written in C or C++? Also, many higher-level languages are themselves written in C or C++.

Writing secure applications is hard and needs a lot of discipline and knowledge that most developers simply do not have. Better tools can and need to help here, as well as better languages. But it is still possible to write pretty secure and efficient software in modern C++. Yes, it is not easy, but it is possible.


Then prove it: list substantial codebases written in C++ that you deem secure. You'll find that it's not easy to do.


In another comment on this page, there is a developer who claims a web server was made more secure by writing it in Perl (which is written in C/C++). The original webserver was written in C.


>Every negative thing said about the web is true of every other platform, so far.

What are you basing this on? You can't put Ada, Erlang, Haskell, FORTRAN, etc in the same bucket as C or C++.


> Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.
> "Buffers that don’t specify their length"

And yet, we found good ways to eliminate the most common sources of these problems by using new languages. The web, on the other hand, is an amalgam of several different technologies, and creating a new language won't make it more secure.


C is not impossible to secure, actually. There are popular C programs which are more robust than your average high-level dynamic language program. It takes a deep commitment (hence a lack of good examples), but there is generally a clear path to a well-behaved program in C, and there's nothing about C itself which prevents you from writing secure code. On the web, you must actively mitigate pitfalls of the platform itself; in C you just have to make sure your program is itself well behaved.

You might argue either way, but a straightforward C program can be correct if it is well formulated, whereas a straightforward web app cannot be correct unless it is fully mitigated.


Nitroglycerin is a perfectly serviceable explosive for mining purposes but there is a really good reason it is called the Nobel prize and it isn't because the folks working with nitroglycerin "lacked a deep commitment to safety". Alfred Nobel invented dynamite to create a safer explosive and his work directly improved safety (and he made a fortune in the process).

>C is not impossible to secure

Expert compiler writers and computer scientists disagree with this assertion. History seems to be on their side.

Writing "secure" C requires meticulous attention to detail at every level, intimate knowledge of undefined behavior _and_ of compiler optimization, along with the exact options passed to the compiler. It requires comprehensive reasoning about signed integer behavior and massive amounts of boilerplate to check for potential overflow. It also requires extensive data-flow analysis to prove the provenance of all values (as Heartbleed taught us) because a single mistake in calculating a length leads to memory corruption.

To put it another way: No one can write fully secure C code. It has never been done to date. All non-trivial programs written in C contain exploitable security vulnerabilities. The combinatorial explosion of complexity makes it impossible both to formally verify and to permit human reasoning about the global behavior for all likely inputs, let alone unlikely ones.


How many really truly secure C programs have ever been released into the wild? Maybe qmail? But qmail did it by completely rewriting the C standard library.


Admittedly few, but generally in native land you have the ability to plaster over platform deficiencies with equally-well-performing code. On the web, you can never really compete with the execution speed or integration of the native code in the browser, so you have to accept whatever is there.

I'd say OpenSSH (since SSH2) has a better track record than most webapps, as unfair a comparison as that is. In terms of local robustness, there's SeL4, which is also a bit unfair (since it took about a decade for a team of geniuses to prove enough properties to make it probably not very buggy).


I wouldn't consider seL4 to be a "C project". Yes, their github repository is mostly C, but the process of writing seL4 was extremely involved: write the kernel in Haskell, then write it again in C, then prove that the C is equivalent to the Haskell. seL4 is ~9000 lines of C, ~600 lines of asm, and ~200,000 lines of Isabelle (theorem prover).

I don't disagree with your use of OpenSSH as an example.


And yet the code that handles spacecraft launch and control is written in C. I'll still agree with you that it's really hard to write good secure C code.


I'm sure you've seen this; it's posted here regularly:

http://www.flownet.com/gat/jpl-lisp.html


Totally agree, and would add that it's no coincidence that articles like these tend to conflate "web programming" with the current state of the JS ecosystem. Yes JS is kinda crazy if you don't know how to select the right tooling for the job (just like every other popular language), but the leap to the web in general - getting people to go along with the conflation - is not possible without a good deal of FUD.


Yeah, to me it sounds like a case of "grass is greener" syndrome.


"Buffers that don't specify their length"

Instead of thinking of it as buffers, you just have to encode/decode for the proper environment. Such repetitive stuff is easily implemented in stack layers.
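
For example, an encoder for the HTML text context (one of several such encoders you would stack, each for its own context) can be as simple as:

    // Encode untrusted text for an HTML text context; attributes, URLs and
    // JS strings each need their own context-specific encoder.
    function escapeHtml(s) {
      return s.replace(/&/g, '&amp;')
              .replace(/</g, '&lt;')
              .replace(/>/g, '&gt;')
              .replace(/"/g, '&quot;')
              .replace(/'/g, '&#39;');
    }
    console.log(escapeHtml('<script>alert(1)</script>'));
    // "&lt;script&gt;alert(1)&lt;/script&gt;"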


I just find the way DOM/CSS does layout and styling to be completely convoluted and crazy compared to any desktop toolkit since 1990. Centering anything either vertically or horizontally should not require me to Google it, and most importantly should not have multiple different solutions. Simple things should not just have simple solutions; they should have one simple solution.

Memory-unsafe programs on the desktop should go the same way as the HTML layout model.


As the other commenter said, flexbox and gridbox help alleviate many issues that used to be commonly raised a few years back.

Check out Yoga [0]. It's a small layout engine based on flexbox and the CSS box model. It doesn't cover all use-cases, but it's pretty powerful for its size.

It's important to remember that CSS and the DOM were initially created and developed with certain kinds of documents in mind. Both are certainly quirky and missing a lot of features, but I wouldn't say they're as bad as many people make them out to be. Based on my experience with native desktop toolkits, they're all quirky in one way or another. One of the biggest issues with modern CSS is that it doesn't have sensible defaults for web apps.

Could you provide an example of your preferred approach to handling layout and styles, and talk a bit about why you consider it superior?

What key features do you consider missing from CSS and the web?

[0] https://facebook.github.io/yoga/


What bothers me is that I can't make rules with the same expressive power as a regex.

Also CSS lacks properties for controlling wrapping limits and non-linear image scaling. And for some reason I always have to optimize on either width or height; I can't control both perfectly.


I don't understand the first point; could you clarify? Do you mean you wish to define your own CSS properties? You may find it exciting to learn that there's ongoing work to enable this functionality and more through Houdini [0]. You can check out a few examples at this houdini-samples [1] repo.

I'm unclear on what you mean by wrapping limits and non-linear image scaling. Could you provide an example of what you'd like to achieve?

As for having to optimize for width or height: have you looked into display: grid? I believe it may help enable the kind of layout you're interested in achieving.

[0] https://drafts.css-houdini.org

[1] https://github.com/GoogleChromeLabs/houdini-samples


That sounds awesome except for the fact that my customers refuse to use modern browsers that support modern CSS features. GRRR.


For those customers, I gave them an ‘application’ version that was just nw.js (basically Chrome). Worked reasonably well for the customers that had a somewhat recent OS on their desktops or terminal server.


Your customers sound like the smartest and most sensible people I've heard about in a while. The current way the Web is changing makes it almost impossible for in-house use in a large corporate environment. But at the same time, it's almost impossible to avoid it entirely; so instead do the smart thing and put a hard stop on it and refuse to hit the moving target.


I don't disagree. I feel like I'm caught in the middle. I personally prefer the old internet where websites could have simple design and typography and still be perceived to have value.


Gridbox and Flexbox largely make layouts sane again.


This to both. However, the problem with the web is there are old implementations that must be maintained in browsers for backwards compatibility. The issue with this is that it increases the barrier to entry for web development because it's much harder for a new person to even know what options to gravitate to.

Of course, there are books and guides to help people, but how would someone figure out which guides are worth it? There are a lot of highly rated books on the topic of web development and if you don't already know what you need, it can be daunting.

But yes, flexbox is great.


For the sake of creativity, suppose browsers were the wrong direction to take for web exploration. What do we do now?

The same idea may be applied to an operating system's ability to allow a user to operate on their machine.

Edit: It would be useful to consider why the need for a universal interface to the internet was originally sought out.


Douglas Crockford would say The Seif Project: http://seif.place/

https://www.youtube.com/watch?v=fQWRoLf7bns


Telnet? That's the only alternative I can remember.


Gopher!


The old implementations are going away a lot faster than any greenfield environment can totally replace the browser.


True but it's crazy that it took 20 years to get sane layout control.


Well _it is_ crazy. We can't trace an alternate history and work with that. We work with what we have.

I think we might be looking at it through the wrong lens here. I'm unable to find the right words to say this, but the statement feels ungrateful. The web is the largest and fastest-growing software ecosystem we have right now (see: community size, number of projects on GitHub in JavaScript, CSS, and other web technologies).

You're comparing what is with what should've been. By that measure, any human endeavor will fall short of not only your expectations but anybody's.

You'd think that getting sane layout control would be easy, but apparently it's not. Getting a lot of humans to agree on a fast-growing technology is hard, it seems.

PS: I'm not saying "nothing could've been better, be happy with what you have", not at all. I'm just saying this seems like complaining, and a better approach is to try to make it better.

I needn't have written such a long tirade for such a simple statement. I see that this is the same sentiment espoused by several others in this thread, and I thought I'd try to provide a different perspective to look at it from.

edit: formatting


Fair enough. I just get sad knowing what I'm missing. I've worked with a bunch of desktop GUI builder IDEs (Visual Basic, .NET WinForms, WPF/XAML, and Qt) and I've seen the immense power they have in terms of developer productivity and application performance. Something like XAML is especially interesting because it brings the styling and responsiveness of HTML/CSS to the GUI builder paradigm. I started off with them and then slowly transitioned to working entirely with web technologies (PHP, Rails, Angular, React, you name it). Not without reluctance, for sure! It's a Faustian bargain to me: trading off overall inferior technology and developer experience for the sheer reach and ease of deployment of the web.

It's nuts to me to design a visual thing like a UI by writing lines of code. The GUI builders of yore really nailed this by allowing you to design something visual using a visual modality (drag and drop, realtime layout designers, etc.). I try to explain this to folks who've only ever developed for the web, and usually their eyes glaze over. They can't seem to (or have an incentive not to?) appreciate the impedance mismatches and the fundamental trades being made with web user interfaces.


Your perspective is very interesting to me. It makes me think that as software gets better/easier to write, as web development has become lately, people want to question why it is becoming better/easier. I think this self-reflection we have all been doing about the web is what is causing people to post so many threads and articles on this topic.


Sane layout control for apps was a solved problem 15 years ago (well, for some definition of sane). Look at toolkits like Swing, GTK2, Qt, heck even Cocoa has better layout control for apps than HTML.

Flexbox is essentially an import of those concepts to CSS. There are no new ideas there.

But now flip it around and try to make a beautiful, responsive document in Swing or GTK. The layout managers that make them so great for laying out UIs won't help you much there. They can do it, they have layout managers that operate somewhat like a CSS box flow, but it won't be as natural or as easy.

So it's worth considering if it's easier to evolve HTML towards sane layout management for app-like things, or GUI toolkits towards sane layout management for document-like things.


I used VB6 20 years ago and have recently (quite unfortunately) had to learn HTML/JS/CSS basics. The web is hot garbage for displaying form data compared to Microsoft tools circa 1996.


The big problem is they are trying to solve different problems.

Microsoft stuff was going for fixed screen size/resolution, fixed layout, and using a quite limited set of controls.

Web browsers try to be accommodating by default - any screen size (including mobile), zoom built in, and significantly more powerful control primitives that allow enormous flexibility in the way to design things.

If you're building forms applications that only need to work on a PC, the old way was certainly easier, and in fact, Microsoft has WebForms (regular ASP.NET - not MVC or API) that is pretty similar (and doesn't horribly break down so long as you color within the lines, so to speak).

Try to imagine your VB6 app being able to scale down to a window the size of a phone screen, and how the WYSIWYG editor for that would even work - I imagine it would be fair to describe it as "hot garbage" also.


Very few web apps actually use the same HTML for desktop and mobile. It's more common for WordPress templates and other document-like things, but the UI constraints on a phone are so different that it's better to create a dedicated UI for them. So I'm not sure judging VB6 by that metric is valuable.


It's not just mobile, though - it's different DPI (4K screens are getting more popular), window sizes, and zoom levels. Mobile is probably not a target for the forms apps the OP is talking about, but tablets may be, and different generations of various laptops and PCs are.

The web works across everything with little to no extra effort, whereas a native app built with a WYSIWYG UI builder is going to be constrained to certain hardware and take extra effort to handle display variations.
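
The "little to no extra effort" is often literally one media query; a sketch, with an arbitrary selector and breakpoint:

    .sidebar { width: 240px; float: left; }    /* roomy two-column layout on desktops */

    @media (max-width: 600px) {
      .sidebar { float: none; width: 100%; }   /* stack full-width on narrow screens */
    }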


HiDPI was pioneered by Apple, whose UI toolkits aren't particularly responsive at all.

You can certainly handle different window sizes with traditional UI layout managers. The only thing they don't do much of is totally changing the entire UI layout based on window size, and that's only because it's so rare to have a single app that's actually identical between tiny and huge screens.


Super easy, VB.NET with WPF or UWP layouts.


WPF is much closer to web development than the drag-and-drop WYSIWYG UI development (VB6 / WinForms) the OP was talking about. I've never done UWP, but it sounds nearly the same as WPF.


WPF/UWP have exactly the same drag-and-drop WYSIWYG support as Windows Forms, especially when using Blend, with actual components and a healthy market of companies selling them.


MS really had rapid GUI development absolutely nailed in the late 90s. For some reason we forgot all that.


I was really surprised, when I went to build a GUI app, to find that everyone had abandoned the WYSIWYG model completely. You can't just drag your controls over, set their properties, then build the code to drive everything. You have to manually wrestle with containers and whatnot even for desktop things. I could potentially see it as an acceptable tradeoff for wide device compatibility (things with substantially different screen dimensions, etc.)... but I still have yet to figure out why the layout systems of the past couldn't simply be made a little bit smarter, to deduce the constraints necessary to produce the same development experience as before.


I've come to believe that some number of developers actually like it the hard way. It's the only explanation that makes sense. We have gone so far backward in GUI development tools.


WPF still works nicely for prototyping in a visual editor. Declarative is much more robust than the old imperative AWT/Swing/WinForms.


MS had nothing on Sun's Dev Guide, the easiest and best GUI dev tool I've ever seen.

I say this because it took me zero effort to use due to how intuitive it was to get started...


Do you have any pointers for learning about it? Looking up "Sun Dev Guide" didn't turn up anything related to GUI editors.


And what I don't understand is how the same people managed to get it completely wrong with WPF ten years later, with an extremely convoluted syntax, no autocomplete, poor tooling, etc.


The millennials never learned it because the Web is so cool!

MS tools are still here for those of us doing native Windows development.

Also the Apple and Google GUI tooling for their mobile OSes are quite good.


The problem is that if you present an average web user with the interface you can design (quickly and efficiently) in VB6, they'll spit in your face.

Much of the complexity of web design is not in the tools; it's in the fact that users don't expect any standard whatsoever - they just expect their UIs to be as slick and custom-designed as magazines. If every website were written using the same standard, predefined set of widgets and components, the complexity would disappear.


This is pure illusion. Otherwise Reddit, 4chan, HN, Google (until 2010), Craigslist, and even Amazon would suffocate and go away. The fact is that what makes a web app / web site / whatever liked by users is the content and the value; oftentimes a 2005 porn pop-under is better at that than today's chic, pedantically over-designed website with huge grey lettering, multi-MB graphics, and tonnes of wasted empty space. They are basically like coke: bland, useless stuff with lots of sugar.


Those communities are all very niche, and in fact part of their brand and image is in their design. Even though they are less flashy, that is the point. Try to convince the owner of a clothing e-commerce site that their store should look like a 4chan bulletin board while trying to sell high-priced garments to the public, or that the Coke website can't have a vibrant design in line with the rest of their branding.


So Google and Amazon are niche? And if the plainness of design is part of Reddit's identity, why do most subreddits use elaborate custom CSS? What I'm saying is different anyway: when you provide real value, your design is irrelevant. Otherwise you are employing the put-moar-sugar-in-it technique of marketing. Kudos if you make it work, but it's far-fetched to say that it's necessary.


They are wildly popular, but they are in a niche, yes. Before they became the Google we know today, there were not many providing the same service or value (and potentially still aren't), so that is why they could get away with bland design. It was never bad design, mind you.

Amazon is also king of providing value in their markets, and their markets are also apathetic toward flashy design. I don't need animations when I am provisioning an AWS instance nor when I am buying goods at the lowest possible price I can find.

However if I were not me, and I were shopping for luxury or boutique goods, a site that looked like Amazon would not instill me with confidence.

My point is just that the web has diverse design and UX needs, and the current toolset caters to that. If someone managed to build a platform with those benefits and more, plus the web's market penetration, then I would be on board.

I will still argue that it is necessary if you want an alternative to the web, as the alternative has to be a better value proposition for the end user, not the developer.


I'd like to pose a WhatsApp commerce group as a counterexample.


A good counter! However, it also supports the argument that the web is diverse enough that having a hyper-flexible UI system is beneficial.


I don't understand one point here: why wouldn't Coke be able to have vibrancy? Animated GIF images have worked very reliably for web layouts since... forever, and I don't think anyone's going to start calling for everything online to be pastels or a fixed color scheme. I don't feel that Reddit is by any means 'niche', either. And of course a clothing store (which is an e-commerce product) should look different from an image board; again, no one in their right mind is going to demand otherwise. But they should be able to be defined by the same set of tools. (Personally, when I design webpages, even under WordPress, I still use HTML tables and the center tag. Heck, I've used the marquee tag in the last year.)


I am a huge proponent of keeping it simple for websites. If you can achieve the same branding with less tooling then that is ace, and it is what I try to do. Less complexity means it's more maintainable and normally quicker to build.

But it will need to be the same experience that the client asked for. Coke is never going to ask you for a React website with webpack tooling and a Lambda backend; they are going to come to you with some grand vision of an application that their marketing team imagined in the shower months ago and that has been workshopped into a mess. You may or may not be able to deliver that with simple HTML and CSS.

I am also keen for the web to move toward some kind of stability in technology; the churn and wheel-reinvention factory that we currently have is creating a bit of a mess, but I don't think it's worth throwing the web away just yet.


My response to "grand vision" projects is to say, "Work up an actual spec of what's necessary, and I'll respond based on what's technically feasible." I find that when they finally get their heads out of their entrails, most needs are very simple. Animation can be done with GIFs; static images can be supported by image maps and tables.


I use Amazon just because it has great service, but its UI is atrocious. Flipkart blows it away when it comes to UI: it's so easy to search based on several sub-parts, and its mobile UI is also very well done.


This is a really observant point.

Users expect every website to have a unique identity (unlike anything built with WinForms); that is what creates the complexity.

If you actually use something like Bootstrap, your website will look unoriginal, but it will be dead easy to make.
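
For instance, a stock Bootstrap grid and button with zero custom CSS (these are standard Bootstrap class names); it looks like every other Bootstrap site, but it's done in minutes:

    <div class="container">
      <div class="row">
        <div class="col-md-8">Main content</div>
        <div class="col-md-4">
          <button class="btn btn-primary">Sign up</button>
        </div>
      </div>
    </div>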


Indeed. The lack of easy theming (i.e. difficulty of producing a unique visual brand) is one reason why desktop toolkits lost out to the web, amongst many others.


Most enterprise web apps use Bootstrap; that is quite standard. But they do it on top of React/Angular/Grunt/webpack and a zillion npm packages to choose from and keep updated. None of this was necessary to make VB6 applications.


This is true; the monstrosity I inherited at work is built on Bootstrap (2... and it was started after 3 came out), but modern tooling has radically improved.

Yarn/TypeScript and webpack (though some days I hate it, it has gotten better) largely make it feel sane(r).

That said, getting to the point where I was comfortable with all three was insanely more complex and time-consuming than picking up Delphi 6 was in the early 2000s.

Shrugs, the beast is what it is until someone does something better.


It's a pity that (classic) VB's excellent Form Designer is tied to such an ugly language.


VB6's form designer (and the UI framework that underlies it) has one crucial problem: it has basically zero understanding of flexible layouts. As a result, things break as soon as you try to make an easily resizable window, or the font size or family changes (even if it's something as simple as accommodating high DPI), or you localize the dialog and some strings become longer.

This lack of support for anything other than hardcoded absolute layout is exactly what made it so simple and easy to use. It's the equivalent of doing document layout by padding with spaces - it works for simple cases, and it's very easy to teach people, but it's a mess for anything even remotely complicated.


I don't think anyone is advocating VB6 forms for today's tasks; it is a decades-old technology that hasn't been updated in 20 years. But its simplicity and effectiveness were remarkable and should be considered a benchmark when designing new UI tools and technologies.


> it has basically zero understanding of flexible layouts.

That's largely a non-issue to me. If I need anything fancy, I'll draw it myself. The simple stuff ought to be simple.

> As a result, things break as soon as you try to make an easily resizable window

Au contraire! It is much easier to make a resizable window when you are in full control of how nested widgets are resized along with it. That being said, some automation is fine (e.g., how MFC resizes views in response to their parent frame being resized) as long as simplicity isn't lost in the process (I'm looking at you, CSS).


CSS is incredibly simple if all you care about is absolute positioning.

It's just that nobody wants to make a Win32-style app with absolute positioning on the Web. That's because responsive apps are superior to non-resizable, manually positioned UIs.
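
If you did want the VB6 model on the web, it really is only a few lines; a sketch with arbitrary class names and coordinates:

    .form      { position: relative; width: 400px; height: 300px; }             /* fixed-size "dialog" */
    .ok-button { position: absolute; left: 310px; top: 265px; width: 75px; }    /* pinned like a VB6 control */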


Are they? Most 'web apps' I use have a preferred browser size; if you use them at a smaller size they still work (they are responsive), but they're just unusable for anything sane. So superior... I made those layouts with Delphi too, in the early '90s, and the same consistent behavior was true then as it is now: 99% of consumer users of software click maximize the first moment they open anything, browser or non-browser. (To head off 'source?' questions: I have been writing consumer software for almost 30 years, and this is my experience plus the experience of peers I talk to.)

So sure, I use tiling window managers and like different windows, but most people don't - hence the success of tablets; they are simple because it's one app, maximized, at a time. And those apps sure are responsive, but they don't need to be; they look the same on all tablets at the resolution they were designed for. Simply scaling them would've worked fine for most people and use cases. You would have to write things twice - once for small screens (phones) and once for big screens (desktops) - but that's not really that uncommon now either.


Delphi was better in that regard, because you could anchor sides and corners of widgets to their containers. In many cases, it was sufficient to allow for a resizable layout.
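
For what it's worth, the rough CSS equivalent of that anchoring is pinning a widget's offsets to opposite edges of its container; a quick sketch with an invented class name:

    /* anchored left, right, and bottom: the bar stretches as its (relatively positioned) container resizes */
    .status-bar { position: absolute; left: 0; right: 0; bottom: 0; height: 24px; }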

But it doesn't solve the problem with high DPI, changing fonts, and localized strings being sometimes significantly longer, requiring widgets to be resized to accommodate them.


Agreed. And yes, that needs some attention, but most people doing responsive web design do not account for most of that either. What does changing fonts mean? You design something for one font and then change it afterwards?

When I switch to some languages (I am not a native English speaker, and my native language, Dutch, is not very high on the list of priorities for most companies) on the sites of some of the biggest companies in the world, you notice it just wasn't designed for that: text wraps and enlarges until it breaks the design, or simply sticks outside its box.

For some localizations (Chinese, for one) you will have to redesign anyway, because 'our' designs (not sure how else to describe them) simply do not work/sell over there.

Most global companies have a local presence doing their local sites; I know some very big companies, even inside the EU, that have a site per country where the HTML/CSS looks 'the same-ish' to the user but is completely different when you check the source, to accommodate local taste/language.

I like the dream of this working, as I am a programmer, but I don't see it in real life, and I find HTML/CSS just painful to work with - not difficult, but painful compared to most desktop GUI tech. Flexbox etc. is changing that a bit, but it still looks like people are shoehorning everything into this HTML5 stuff just because they desperately do not want to use/learn other things, instead of using the best tool for the job.

Disclaimer: I am old and have seen this before. I do create web apps and use React (the new license makes it workable outside hobby projects), but I will gripe about it like the author of the blog post.


> And yes, that needs some attention, but most people doing responsive web design do not account for most of that either. What does changing fonts mean? You design something for one font and then change it afterwards?

Think about the user changing the default UI font. OS X and Windows both make it difficult or impossible, and for this exact reason. On Linux, though, it's common and expected (which is probably why all UI frameworks that target it do have some decent dynamic layout support).

But aside from font family, there's also the issue of font size. That one can be cranked up on high-DPI displays, or for accessibility purposes.
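
On the web side, the usual mitigation is sizing things relative to the user's font setting instead of in pixels, so a bumped-up default font scales the layout with it; a sketch:

    /* rem units track the root font size, so larger accessibility fonts
       enlarge padding and line lengths instead of overflowing them */
    .card { font-size: 1rem; padding: 1rem; max-width: 40rem; }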

> I find HTML/CSS just painful to work with

Don't get me wrong, I'm certainly not praising HTML5 and CSS here. They're vastly overcomplicated for what they do, for app development. And layouts are a long-solved problem in desktop UI frameworks; Qt, Tk, Swing, and WPF are just a few examples. WPF in particular is a good example of an XML-based markup language specifically for UI, and it's light years ahead of HTML5 in terms of how easy it is to achieve common things, and how flexible things are overall.

If even half the time and energy invested in building "web apps" (including all the Electron-based stuff) went into an existing UI framework - let's say Qt and QML - we'd all be much better off: developers would have far more convenient tools, and users would get apps that look and feel native, work fast, and have smaller download sizes (because you aren't effectively shipping the whole damn browser with them).


> WPF in particular is a good example of an XML-based markup language specifically for UI, and it's light years ahead of HTML5 in terms of how easy it is to achieve common things, and how flexible things are overall.

This is why I had big hopes for XHTML and XML components, but then we got HTML5 instead - yet another pile of hacks.


I used to write Petzold-style Win32 apps. I've also written native Cocoa apps as recently as last month, and I've used Qt and GTK+. Having experience with all of these, my preference is still for Web apps, because of the ease of portability and the fact that TypeScript beats C++ for ergonomics, safety, and ecosystem (just having a package manager is huge, even if NPM leaves something to be desired).

I find it fun to write Cocoa apps too, and I do on occasion for throwaway stuff that only I am going to use. But too many people (including me, at home!) simply don't use Macs. When I have to write a portable app, the choices basically come down to GTK+ (doesn't look native anywhere but GNOME on Linux), Qt (requires C++ plus moc and doesn't always look native either, for example on GNOME), or writing everything from scratch for every platform. While the last choice may be the "right" one from a purist's point of view, the extreme amount of work necessary to make duplicate Windows/Mac/Linux (often plus Android and iOS) versions makes it all but out of reach for anyone but big companies.


When I started coding for Win16, my first option was Turbo Pascal with OWL; eventually I moved to Turbo C++ with OWL.

With the switch to Win32, the tools became VB, Delphi, Smalltalk and Visual C++ with MFC.

Like every Windows developer, I also own the Petzold book, bought for Windows 3.0 development, and another good one from Sybex, probably the only book that ever explained how to properly use STRICT and the message crackers introduced with the Windows 3.1 SDK.

However, I might have written only about five applications in the pure Win32 API instead of using one of the aforementioned languages/frameworks, and those as requirements for university projects.

In general, I think many developers only have the bare-bones native experience, without making use of proper RAD tooling, or only know the UNIX way, which has always had pretty bad tooling for native GUIs compared to Mac and Windows, or even OS/2.


Qt also isn't very good for accessibility. https://blind.guru/qta11y.html


When the VB/MFC layout model was replaced, it was with WPF. WPF is declarative, flexible, handles high resolutions well, etc.

It's what I imagine a reasonable HTML/CSS would look like.


Again, "anything fancy" here includes something as simple as a localized dialog. In most commercial apps, this means pretty much everything would require "drawing it yourself".

At which point you can basically throw the designer away, since you'll be writing code to manage layout for all widgets anyway.


> Again, "anything fancy" here includes something as simple as a localized dialog. In most commercial apps, this means pretty much everything would require "drawing it yourself".

My day job is to implement a commercial ERP system that has never been and probably will never be localized.

All software I use on a daily basis is English-only, even when localized versions to my native language exist, because:

(0) The translations are absolutely horrible. Who in their right mind would think that they are actually “helpful”?

(1) Even if the translations weren't horrible, the extra complexity simply isn't worth it. (Admittedly, my tolerance for system complexity is rather low compared to most other users.)

So, from my point of view, when you talk about localization, you might as well introduce yourself as a visitor from a parallel universe (where localization is presumably useful).


GUI toolkits have moved on since the 1990s.

Go download NetBeans and create a Swing UI in Matisse. You'll find these issues aren't an issue. You can drag/drop and end up with a flexible, responsive layout that can handle things like strings changing length due to localisation. You can do the same with Scene Builder for JavaFX, although it's not as slick as Matisse. Or even Glade, if you're more a UNIX person. The latter two tools require you to understand box packing but allow for a relatively responsive layout.

The thing they don't do is let you totally change the layout depending on window size. But that's a fairly easy trick to pull off by just swapping out between different UI designs at runtime. There are widgets that can do this for you.


I know that full well. But one thing you might note about these tools you've listed is that they're nowhere near as simple as the VB6 form designer that was exalted in the comment that started this whole thread. They're more complicated because they have to deal with dynamic layouts, and you are exposed to this overhead even in visual mode.


I guess technically you don't have to use dynamic layouts. All toolkits and designers I've seen do allow absolute positioning. It's just discouraged.

But yes, these days, people do expect windows to be always resizable and that does add some complexity.


Even Windows Forms has a layout manager with data binding, but devs have to explicitly take advantage of it; there is no need for "drawing it yourself".


WinForms layout managers are a pain to work with in the designer, though. It wasn't written with them in mind - they only showed up in .NET 2.0 - and it shows. Dragging and dropping things often doesn't do what you want, and sometimes things just disappear and you have to dig them out of the control tree.

Data binding is better in that regard, but once you start doing complicated nested data bindings, it's rather tedious to do it in the designer (because you can't just bind to "A.B.C" - you have to set up a hierarchy of data sources).

Worse yet, you start hitting obscure bugs in the frameworks. Here's an example that I ran into in a real-world production WinForms app ages ago (side note: I wasn't an MSFT employee back then, so this was an external bug report): https://connect.microsoft.com/VisualStudio/feedback/details/...

Having said all that, the aforementioned app was written entirely in WinForms, using the designer for all dialogs (of which it had several dozen - we used embedded controls heavily as well), with dynamic layouts and data binding throughout. And it did ship successfully. So it wasn't all that bad. Still, it's not the kind of experience I'd want to repeat when I can have WPF and hand-written XAML.


> That's largely a non-issue to me. If I need anything fancy, I'll draw it myself. The simple stuff ought to be simple.

Exactly. At least 90% of the functionality of my forms-based applications uses nothing more than the standard UI components Tk provided in the early '90s. Why the web of 2017 still cannot grasp this is unfathomable. To be perfectly honest, I've never seen any toolkit match the productivity of Tcl's Tk from more than two decades ago, and it's even better today:

http://www.tkdocs.com/tutorial/


There are alternatives - https://www.lazarus-ide.org/ or Delphi.


The web isn't the place to look for good tooling.

Proprietary low-code tools built on top of the web are a better starting point.


I prefer no-code to low-code solutions.

Good lord, I hate this buzzword bingo. How the hell are proprietary low-code tools better? In what world is that a sane response?


They are simpler.


I've found that, once you build up the right set of components for yourself, you can easily get nice layouts that work on a variety of screens without much work. There are sometimes edge cases, but overall it works well so long as you design things with the tooling in mind.

Meanwhile, I've struggled to get things looking good with GTK+ or Tcl/Tk. Especially when the UI I'm trying to make is dynamic. The tooling has never seemed very conducive to "fit content"-style UIs.


> Especially when the UI I'm trying to make is dynamic.

That's where I still run into problems with CSS too. However, at some point, and not because I started using flexbox / grid, CSS did click for me and now it's mostly second nature to get the layout that I'm going for.

My feeling on this whole topic is that while as a web developer I have often thought "there must be a simpler way", every time I actually start to imagine what that would look like I end up re-imagining something similar to the web stack as it is now. There is a lot of inherent complexity to GUI-based networked client-server applications that need to be responsive, continuously integrated, database-backed, real-time, etc.


The DOM is a tree-shaped UI data structure, like in any other UI system. CSS is certainly... unique, sure.


> Most web apps are built in languages that don't have buffer overrun problems.

This is a very dangerous assumption. The interpreters you use have not been built with security in mind.

Go take a look at PHP changelogs for example.

