Every negative thing said about the web is true of every other platform, so far. The article just seems to ignore how bad software has always been (on average).
"Web development is slowly reinventing the 1990's."
The 90s were slowly reinventing UNIX and stuff invented at Bell Labs.
"Web apps are impossible to secure."
Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.
"Buffers that don’t specify their length"
Is this really a common problem in web apps? Most web apps are built in languages that don't have buffer overrun problems. There are many classes of security bug to be found in web apps, some unique to web apps...I just don't think this is one of them. This was a common problem in those C/C++ programs from the 90s the author is seemingly pretty fond of. Not so much web apps built in PHP/JavaScript/Python/Ruby/Perl/whatever.
The security aspect was an interesting part of this piece, because one of the main reasons webapps took over from Windows apps is because they were perceived as more secure. I could disable ActiveX and Java and be reasonably confident that visiting a webpage would not pwn my computer, which I certainly couldn't do when downloading software from the Internet. And then a major reason mobile apps took over from webapps is because they were perceived as more secure, because they were immune to the type of XSRF and XSS vulnerabilities that webapps were vulnerable to.
Consumers don't think about security the way an IT professional does. A programmer thinks of all the ways that a program could fuck up your computer; it's a large part of our job description. The average person is terrible at envisioning things that don't exist or contemplating the consequences of hypotheticals that haven't happened. Their litmus test for whether a platform is secure is "Have I been burned by software on this platform in the past?" If they have been burned enough times by the current incumbent, they start looking around for alternatives that haven't screwed them over yet. If they find anything that does what they need it to do and whose authors promise that it's more secure, they'll switch. Extra bonus points if it has added functionality like fitting in your pocket or letting you instantly talk with anyone on earth.
The depressing corollary of this is that security is not selected for by the market. The key attribute that customers select for is "has it screwed me yet?", which all new systems without obvious vulnerabilities can claim because the bad guys don't have time or incentive to write exploits for them yet. Somebody who actually builds a secure system will be spending resources securing it that they won't be spending evangelizing it; they'll lose out to systems that promise security (and usually address a few specific attacks on the previous incumbent). And so the tech industry will naturally oscillate on a ~20-year cycle with new platforms replacing old ones, gaining adoption on better convenience & security, attracting bad actors who take advantage of their vulnerabilities, becoming unusable because of the bad actors, and then eventually being replaced by fresh new platforms.
On the plus side, this is a full-employment theorem for tech entrepreneurs.
> A programmer thinks of all the ways that a program could fuck up your computer; it's a large part of our job description. The average person is terrible at envisioning things that don't exist or contemplating the consequences of hypotheticals that haven't happened.
I'm not sure programmers are much better. There's a long history of security vulnerabilities being reinvented over and over. Like CSRF is simply an instance of an attack first named in the mid 80s ("confused deputies"). And why are buffer overflows still a thing? It's not like there's insufficient knowledge about how to mitigate them.
And blaming this on the market is a cheap attempt to dodge responsibility. If programmers paid more than lip service to responsibility, they'd push for safer languages.
> And blaming this on the market is a cheap attempt to dodge responsibility. If programmers paid more than lip service to responsibility, they'd push for safer languages.
If programmers paid more than lip service to responsibility, the whole dumb paradigm of "worse is better" would not exist in the first place. As it is, we let the market decide, and we even indoctrinate young engineers into thinking that business needs is what always matters the most, and everything else is a waste of time (er, "premature optimization").
> If programmers paid more than lip service to responsibility, the whole dumb paradigm of "worse is better" would not exist in the first place.
I used to think like this but I've come to realize that there are two underlying tensions at play:
- How you think the world should work;
- How the world really works.
It turns out that good technical people tend to dwell a lot on the first line of thinking.
Good sales/marketing types on the other hand (are trained to) dwell on the second line of thinking, and they exploit this understanding to sell stuff. Their contributions in a company are, in general, easier to measure relative to an engineer's, since revenue can be directly attributed to specific sales efforts.
"Worse is better" is really just a pithy quote on how the world works and it's acceptance is crucial to building a viable business. Make of that what you will.
The world doesn't always work that way though. There are plenty of areas where we've decided that the cost of worse is better is unacceptable, and legislated it into only being acceptable in specific situations. For example, many engineering disciplines.
The prime directive of code made for a company really is to increase profits or decrease costs, though. Most of the time just getting the job done is all that matters. Critical services and spacecraft code are exceptions.
Yes. Which is precisely the root of the problem. Increasing profits and decreasing costs are goals of a company, not of the people who will eventually use the software (internal tools are an exception). The goals of companies and users are only partially aligned (the better your sales & marketing team is, the less they need to be aligned).
> And blaming this on the market is a cheap attempt to dodge responsibility.
How many hacks, data breaches, and privacy violations does it take for consumers to start giving a shit?
Also, any programmer will tell you that just because an issue is tagged "security" doesn't mean it will make it into the sprint. Programmers rarely get to set priorities.
> How many hacks, data breaches, and privacy violations does it take for consumers to start giving a shit?
There's a quote by Douglas Adams that pops up in my mind whenever the subject comes up:
> Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.
This is the only explanation there can be for this. Every time there's a breach somewhere (of which there obviously are plenty), there's a big outrage. But those who should go "oh, could that happen to us, too?" choose to ignore it, usually with hand-waving explanations of how the other guys were obvious idiots and why the whole thing doesn't apply to them.
Exactly this.
The last company I was in had a freelance sysadmin and a couple of full-time devs. The sysadmin had been banging on for ages that we needed a proper firewall set up. It was only after we thought we had been hacked (it ended up being a valid ssh key on a machine that we didn't recognize) that we checked and found at least half of the Windows machines were infected with crap. Only then did they get the firewall. We decided not to admit our mistake about the ssh key, as it seemed like it was the only way to get things done.
In other words, it takes a better alternative to exist. Better can mean cheaper or faster or easier - a lot of things. That can be accelerated by the economic concept of "war" (i.e. any situation that makes alternatives a necessity).
I don't think it's about "dodging responsibility" but just an examination of the tradeoffs involved in development. The code we're developing is becoming more transitory, not less over time. How secure does a system that is going to be replaced by the Next Cool Thing in 4-5 years need to be? It really depends on what you are protecting as much as anything.
The incentives for someone to break into a major retailer, credit card company, or credit bureau are much different from Widget Co.'s internal customer service web database. What I think the article is missing, even though it makes a lot of good points, is that if there's a huge paycheck at the end of it, there will always be someone trying to exploit your system no matter how well designed it is. And if they can't hack the code quickly, they'll learn to "hack" the people operating the code.
> And blaming this on the market is a cheap attempt to dodge responsibility.
You are oversimplifying. Dunno in what programming area you work (or if it's software at all) but "we work with languages X and Y" is something you'll find in 100% of all job adverts.
Tech decisions are pushed as political decisions from people who can't discern a Lumia phone from an average Android. That's the real problem in many cases.
It's also a fact that there exist a lot of irresponsible programmers.
I disagree with one premise... web apps weren't ever seen as a more secure alternative to Windows apps. They were seen as easier to deploy. That was Netscape's big threat to MS: you could deploy an app to a large audience easily. It's hard to get across how hard things were back in the day. Citrix came out as an option as well... same deal, easier to deploy.
People really thought ActiveX was brilliant... until security became an issue. I can remember when the tide changed.
Agreed. They are easier to deploy, even multiple times per day. This is one of their selling points even today compared to native mobile applications, which have other advantages.
Another advantage is that they are inherently available across OSes, and usually across different browsers (but we know what it takes).
Finally, they used to be much easier to develop.
I agree. Web apps were easier to deploy, centrally manage, and deliver than desktop apps, assuming you had a stable connection. In fact it was often hard to get people to run apps on the web because the internet was either slow or ADSL was unstable. SaaS was considered risky.
The true definition of a full stack developer in those days would make today's full stack developers faint.
You had to know how to setup hardware with an os with your software and databases, often having to run your gear in a datacentre yourself that you had to figure out your own redundancy for, all for the opportunity to code something to try out. Being equally competent in hardware, networking, administration, scaling and developing a web app was kind of fun. Now those jobs are cut into many jobs.
ActiveX was what Flash tried to be: the promise of Java, one codebase running everywhere.
> they'll lose out to systems that promise security (and usually address a few specific attacks on the previous incumbent)
This happens in other areas besides applications as well. Programming languages, operating systems. This leads to an eternal re-invention of the wheel in different forms without ever really moving on.
Yep. Databases, web frameworks, GUI frameworks, editors, concurrency models, social networks, photo-sharing sites, and consumer reviews as well. Outside of computers, it applies to traffic, airlines, politics, publicly-traded companies, education & testing, and any industry related to "coolness" (fashion, entertainment, and all the niche fads that hipsters love).
I refer to these as "unstable industries" - they all exhibit the dynamic that the consequences of success undermine the reasons for that success in the first place. So for example, the key factor that makes an editor or new devtool popular is that it lets you accomplish your task and then gets out of the way, but when you've developed a successful editor or devtool, lots of programmers want to help work on it, they all want to make their mark, and suddenly it gets in your way instead of out of your way. For a social network, the primary driver of success is that all the cool kids who you want to be like are on it, which makes everyone want to get on it, and suddenly the majority of people on it aren't cool. For a review site, the primary driver of success is that people are honest and sharing their experiences out of the goodness of their heart, which brings in readers, which makes the products being reviewed really want to game the reviews, which destroys the trustworthiness of the reviews.
All of these industries are cyclical, and you can make a lot of money - tens of billions of dollars - if you time your entry & exit at the right parts of the cycle. The problem is that actually figuring out that timing is non-trivial (and left as an exercise for the reader), and then you have to contend with a large amount of work and similarly hungry competitors.
We started out with OS threads (I guess processes came first, but whatever) and now we're trying to figure out what the next paradigm should be. It looks to me like it's Hoare's CSP (channels, etc.) for systems programming and actors for distributed systems, both really, really old ideas. To be fair there are other ideas (STM, futures, etc.) that fill their own niches, but they either specialize on a smaller problem (futures) or they're still not quite ready for popular adoption (STM). If this is cyclical then I think we're pretty early in the first cycle.
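For a feel of the difference, here's a minimal sketch of the Hoare-style channel model in Python (a thread-safe queue standing in for a channel; all names here are illustrative):

    import threading, queue

    ch = queue.Queue()  # a stand-in for a CSP channel

    def producer():
        # communicate by passing messages, not by sharing state
        for i in range(3):
            ch.put(i)
        ch.put(None)  # sentinel: "channel closed"

    threading.Thread(target=producer).start()
    while (item := ch.get()) is not None:  # blocking receive
        print("got", item)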
Sure, the spotlight moves from one model to the other and back, but that's because the hype train cannot focus on many things at the same time, not because the ideas go out of style.
> So for example, the key factor that makes an editor or new devtool popular is that it lets you accomplish your task and then gets out of the way, but when you've developed a successful editor or devtool, lots of programmers want to help work on it, they all want to make their mark, and suddenly it gets in your way instead of out of your way.
Only if it is open source. Seems like Sublime Text (just an example) has avoided this effect... perhaps evidence that open source is not the best model for every kind of software?
There's a flip side to everything. In this case, if you "fixed" this problem, it would imply a steady-state world where nothing ever changed, nothing was ever replaced, and nobody could ever take action to fix the things bugging them. To me, this is the ultimate in dystopias. It's like the world in The Giver or Tuck Everlasting, far more oppressive than the knowledge that everything we'll ever build will eventually turn to dust.
Or we could get rid of humans and let machines rule the earth? Actually, that wouldn't work either; these dynamics are inherent in any system with multiple independent actors and a drive toward making things better. If robots did manage to replace humans (ignoring the fact that this is already most people's worst nightmare), then the robots would simply find that all their institutions were impermanent and subject to collapse as well.
Is there no possibility of steady progress without having to continually discard good solutions and reinvent things (e.g. web development catching up with the 90s)? Someone on this thread said that our field has no institutional memory. Can we at least fix that?
You run up against Gall's Law [1]. The root cause is that many of our desires are actually contradictory, but because human attention is a tiny sliver of human experience, whenever we focus our attention on some aspect of the system we can always find something that, taken in isolation, can be improved. (I'd be really disappointed if we couldn't, actually; it'd mean we could never make progress). However, the "taken in isolation" clause is key: very often, the reason the system as a whole works is often because we compromised on the very things that annoy us.
Remember that in some areas, the web is far, far more advanced than software development was in the 90s. It's not unheard of for web companies to push a new version every day, without their customers even noticing. At my very first job in 2000, I did InstallShield packaging and final integration testing. InstallShield had a very high likelihood of screwing up other programs on the system (when was the last time Google stopped working because Hacker News screwed up the latest update?), because all it does is write to various file paths, most of which were shared amongst programs and had no ACLs. So I'd go and stick the final binary on one of a dozen VMs (virtualization was itself a huge leap forward) where we could test that everything still worked in a given configuration, and try installing over a few other applications that did similar things to make sure we weren't breaking anything else.
We never did ship - we ran out of money first - but typical release cycles in that era were around 6 months (you still see this in Ubuntu releases, and that was a huge improvement on programs that came before it).
And this was still post-Internet, where you could distribute stuff on a webserver. Go back another decade and you'd be working with a publisher, cutting a master floppy disk, printing up manuals, and distributing to retail stores. You'd have one chance to get it right, and if you didn't, you went out of business.
The thing is, many of the things that made the web such a win in distribution & ubiquity are exactly the same things that this article is complaining about. Move to a binary protocol and you can't do "view source" or open a saved HTML file in a text editor to learn what the author did; programming becomes a high priesthood again. Length-prefix all elements instead of using closing tags and you can't paste in a snippet of HTML without the aid of a compiler; no more formatted text on forums, no more analytics or tracking, no more like buttons, no more ad networks (actually, I can see the appeal now ;-)). Require a compiler to author & distribute a web page and you can't get the critical mass of long-tail content that made the web popular in the first place.
You can see the appeal of all of these suggestions now, in a world where things have gotten complicated enough that only the high priesthood of JS developers can understand it anyway, and we're overrun with ads and trackers and like buttons that everyone has gotten tired of anyway, and a few big companies control most of the web anyway. But we wouldn't have gotten to that point without the content & apps created by people who got started by "view source" on a webpage.
My concern, as readers who have seen some of my other HN comments may guess, is that the next time someone starts over, they'll neglect accessibility (in the sense of working with screen readers and the like), and people with disabilities will be barred from accessing some important things. "How hard can it be?", the brave new platform developer might think. "I just have to render some widgets on the screen. No bloat!" It's hard enough to make consistent progress in this area; it would help if there were less churn.
Edit: I guess what I (very selfishly) wish for is steady state on UI design and implementation so accessibility can be perfected. I know that's not fair to everyone else though. Other things need improving too.
As someone who had to help "teach" JAWS about UI elements on a friend's computer back in '05-'07, accessibility should be the first concern. If anything, that's one upside to Google - the spider "sees" like a blind person. The better-crawled a page is, the more likely it is you won't lose massive page elements.
Selfish that, in my heart of hearts, I want what benefits me and my friends (some of them), to the exclusion of what the rest of the industry seems to pursue (churn in UI design and implementation, pursuing the latest fashion in visual design).
> Move to a binary protocol and you can't do "view source" or open a saved HTML file in a text editor to learn what the author did
I disagree with that. Using binary formats to exchange data between programs doesn't preclude using textual formats at the human/machine boundary. Yes, "view source" needs to be more intelligent than just displaying raw bytes, but that is already the case with today's textual formats. Everything is minified and obfuscated, so the browser dev tools already have to include a "prettify" option. Moving to a binary protocol would turn that into "decompile" and make it mandatory, but it effectively already is.
Requiring a compiler to author and distribute a web page is no different than requiring a web server or a CGI framework or the JS-to-JS transpiler du jour. It adds another step in the pipeline that needs to be automated away for casual users, but that's manageable. Even if the web world moves to binary formats (as WebAssembly seems to indicate), your one-click hosting provider can still let you work with plain HTML/CSS/JS and abstract the rest; just like it abstracts DNS/HTTP/caching/whatever.
> the browser dev tools already have to include a "prettify" option. Moving to a binary protocol would turn that into "decompile" and make it mandatory, but it effectively already is.
This will be a legal problem. At least in my jurisdiction, transforming source code (which is what prettifying is) is not subject to legal restrictions, but decompiling binary machine code into readable source code is forbidden by copyright law. (For the same reason, I'm concerned about WASM.)
That's not a million dollar question but one worth several tens or even hundreds of billions. If you can find the answer to it you'll push us across the hump and away from this local oscillating maximum.
> The security aspect was an interesting part of this piece, because one of the main reasons webapps took over from Windows apps is because they were perceived as more secure. I could disable ActiveX and Java and be reasonably confident that visiting a webpage would not pwn my computer, which I certainly couldn't do when downloading software from the Internet.
Indeed. And then we made sure all interesting data (email, business data, code (github/gerrit etc)) was made available to the Web browser - so pwning the computer became irrelevant.
It's indeed like the 90s - from object-oriented office formats, via macros, to executable documents - to macro viruses - and total security failure. Now we have networked executable documents with no uniform address-level acl/auth/authz framework (as one in theory could have on an intranet-wide filesystem).
So, yeah, I kind of agree with the author - we're in a bad place. I used to worry about this 10 years ago; by now I've sort of gotten used to the idea that we run the world on duct tape and hand-written signs that say: "Keep out - private property. Beware of the leopard."
> I could disable ActiveX and Java and be reasonably confident that visiting a webpage would not pwn my computer
Unfortunately, this is not entirely true. There were bugs in image processing, PDF processing (some browsers would load it without user prompting), Flash, video decoders, etc. IIRC even in JS engines, though those are more rare. Of course, you could go text-only, but then you couldn't properly access about 99% of modern websites.
When there was a bug in PDF processing, you'd end up with an RCE, right?
But downloading an EXE is basically allowing arbitrary code execution on your machine no matter what. So _even with the security bugs_, webapps are basically safer than installing a native app on desktop, at least in its current state.
I see your point though. There are still a lot of entry points we need to be careful about
The Javascript security model breaks down in the case of file:///, no overflows are required. The security you get today is more flimsy than you probably think. And it used to be far worse.
> "Web development is slowly reinventing the 1990's."
> The 90s were slowly reinventing UNIX and stuff invented at Bell Labs.
Yes, this reminds me of: "Wasn't all this done years ago at Xerox PARC? (No one remembers what was really done at PARC, but everyone else will assume you remember something they don't.)" [1]
> "Buffers that don’t specify their length"
> Is this really a common problem in web apps? Most web apps are built in languages that don't have buffer overrun problems. There are many classes of security bug to be found in web apps, some unique to web apps...I just don't think this is one of them. This was a common problem in those C/C++ programs from the 90s the author is seemingly pretty fond of. Not so much web apps built in PHP/JavaScript/Python/Ruby/Perl/whatever.
Most injection attacks are due to this; if html used length-prefixed tags rather than open/close tags most injection attacks would go away immediately.
> if html used length-prefixed tags rather than open/close tags most injection attacks would go away immediately.
That's not really the problem. The problem is that there is no distinction between data and control, leading to everything coming to you in one binary stream. If the control aspect were out-of-band, then the problem would really go away.
Length prefixes will just turn into one more thing to overwrite or intercept and change. That's much harder to do when you can't get at the control channel but just at the data channel. Many old school protocols worked like this.
This is the important takeaway here. Changing the encoding simply swaps out one set of vulnerabilities and attacks for another. Separating control flow and data is the actual silver bullet for this category of attacks.
Unfortunately, there’s rarely ever a totally clear logical separation between the two. Anything you want to bucket into “control”, someone else is going to want the client to be able to manipulate as data.
I'm having a hard time seeing how having separate control and data streams would have an effect here. Using FTP to retrieve a document isn't more secure than HTTP... the problem is in how the document itself is parsed. If you added a separate side channel for requesting data (a la FTP), you'd still have the issue of parsing the HTML on the other side.
Granted, if you made that control channel stateful, you'd make a lot of problems go away. But you could do that with a combined control/data stream too.
What am I missing? How would an out-of-band control channel make things easier?
That said, I think many issues with the web could be solved by implementing new protocols as opposed to shoehorning everything into HTTP just to avoid a firewall...
It makes sure that all your code is yours and that no matter what stuff makes it into the data stream it will never be able to do anything because it is just meant to be rendered.
So <html>abc</html> would go as
<html><datum 1></html>, where datum 1 would refer to the first datum in the data stream, being 'abc', and no matter what trickery you'd pull to try to put another tag or executable bit or other such nonsense in the datum, it would never be interpreted. This blocks any and all attacks based on tricking the server, or the browser that eventually receives the two streams, into doing something active with the datum; it can only be passive data by definition.
For comparison take DTMF, which is inband signalling and so easily spoofed (and with the 'bluebox' additional tones may be generated that unlock interesting capabilities in systems on the line) and compare with GSM which does all its signaling out-of-band, and so is much harder to spoof.
The web is basically like DTMF: if you can enter data into a form and that data is spit back out again in some web page to be rendered by the browser later on, you have a vector to inject something malicious, and it will take a very well thought out sanitization process to get rid of all the possibilities in which you might do that.
If the web were more like GSM you could sit there and inject data in to the data channel until the cows came home but it would never ever lead to a security issue.
No amount of extra encoding and checks will ever close these holes completely as long as the data stays 'in band' with the control information.
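A toy sketch of the two-stream idea in Python (the <datum n> placeholder syntax is invented purely for illustration):

    import re
    from html import escape

    # control stream: markup written by the developer, placeholders only
    control = "<html><p>Hello, <datum 1>!</p></html>"
    # data stream: user-supplied values, shipped separately
    data = {1: "<script>somethingBad()</script>"}

    def render(control, data):
        # each placeholder is replaced by its datum as inert text;
        # nothing in the data stream can ever introduce new markup
        return re.sub(r"<datum (\d+)>",
                      lambda m: escape(data[int(m.group(1))]),
                      control)

    print(render(control, data))
    # <html><p>Hello, &lt;script&gt;somethingBad()&lt;/script&gt;!</p></html>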
I guess what I'm getting at is that it isn't HTTP that's the issue -- it's HTML. I'm all for a control channel in HTTP. But you're still stuck parsing <html><datum_1></html>, and it is difficult to think about reorganizing each tag as a separate datum. At what level do you stop converting the data into separately requestable bits? How would you even code it? And making the tags themselves length-prefixed (like csexp's) wouldn't entirely solve the problem.
I could easily see making <script> and <link> resources required to be separately requested (like images are now -- ignoring data/base64 resources), but we're back to redefining HTML.
I'm not arguing against that...
It's really hard to have these types of debates though, because everyone focuses on different problems of the HTTP/HTML webapp request/response cycle. Like you said, adding separate control/data channels would help, but that doesn't solve SQL injection attacks (which is a whole other class, but that's not really an HTTP/HTML issue, it's a backend issue and I don't see how you'd avoid that with a simple protocol change). Simply making HTTP stateful could potentially solve a different class of session highjacking, etc...
There are so many attack vectors that I think it does make sense to think about what a replacement for HTTP/HTML would look like. Most of these problems arise from trying to re-engineer a document format (HTML) to support interactive webapps. We should think about how to do this better... (without recreating ActiveX -- shudder).
> I could easily see making <script> and <link> resources required to be separately requested (like images are now -- ignoring data/base64 resources), but we're back to redefining HTML.
This has been implemented in HTTP (not HTML); you can enable the requirement right now by serving your pages with an appropriate Content-Security-Policy header.
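A minimal sketch of serving such a policy (the policy string is just one possibility; real deployments need tuning):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # under this policy, inline <script> blocks are refused by
            # the browser; scripts/styles must come from same-origin files
            body = b"<html><script src='/app.js'></script></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Security-Policy", "default-src 'self'")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8000), Handler).serve_forever()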
SQL injection attacks are an excellent example where code and data are mixed. One solution is to do a lot of clever escaping of 'attackable' characters that instruct the DBMS to stop treating a character string as data and start executing things [1]. Escaping attackable characters attempts to partition data from code. This usually works but not perfectly.
Or, run your data through stored procedures instead. It took me a while to figure out why stored procedures were so much more secure than regular queries. I finally figured out it was because a stored procedure does exactly what the grandparent post says: It treats all inputs as data with no possibility to run as code.
Hmm. I'm going to have to disagree about Stored Procedures providing security. You can do all sorts of bad things using stored procedures that may result in unintended code execution!
I think they're more useful for organization and abstraction than security. Then again, a well organized and smartly abstracted system can lead to better security!
But I think bind parameters are probably a better example of security.
Binding effectively separates the data from the logic. So you define two separate types of things, and then safely join those things together by binding them. It doesn't matter too much whether that happens in the application making a call to the database or in the database in a stored procedure. Obviously this same concept can be applied at many different points along the application stack. The analogous concept in the UI is templating. You define a template and then safely inject data into that template.
> I finally figured out it was because a stored procedure does exactly what the grandparent post says: It treats all inputs as data with no possibility to run as code.
This isn't well defined. Take this pseudocode stored procedure (OK, it's a python function; something along these lines):
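    def stored_procedure(opcode, argument):
        # dispatch on two "opcodes"; the argument is only ever data
        # (illustrative body - any two-way dispatcher fits the argument)
        if opcode == 1:
            return "user record for %r" % argument
        elif opcode == 2:
            return "deleted session %r" % argument
        raise ValueError("unknown opcode")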
You can provide any input to that. You could think of this as a function which "treats all input as data with no possibility to run as code" (it never calls eval!). But you could also usefully think of this as defining a tiny virtual machine with opcodes 1 and 2. If you think of it that way, you'll be forced to conclude that it does run user input as code, but the difference is in how you're labeling the function, not in what the function does.
The security gain from a stored procedure, on this analysis, is not that it won't run user input as code. It will! The security gain comes from replacing the full capability of the database ("run code on your local machine") with the smaller, whitelisted set of capabilities defined in the stored procedure.
> The security gain comes from replacing the full capability of the database ("run code on your local machine") with the smaller, whitelisted set of capabilities defined in the stored procedure.
The security gain is that you are only able to run queries that the DBA allows you to. If you can't write arbitrary queries, you won't get arbitrary results. If you can only run a stored procedure, you are abstracted away from those side effects. Another way of saying this: the security risk is shifted from the app developer to the DBA. Someone is still writing a query (or procedure code), so there will always be some risk.
> The security gain is that you are only able to run queries that the DBA allows you to. If you can't write arbitrary queries, you won't get arbitrary results. If you can only run a stored procedure, you are abstracted away from those side effects. Another way of saying this: the security risk is shifted from the app developer to the DBA. Someone is still writing a query (or procedure code), so there will always be some risk.
This could also be achieved with a well written microservice/package that developers go through without depending on dba.
The philosophy and semantics are an interesting side issue, but I'd say the default meaning of those words is that your data, in the SQL system, is not treated as SQL code.
Stored procedures are bad in so many ways - they're harder to deploy and revert than code, harder to unit test*, harder to refactor, and every implementation I have ever seen that has business logic in stored procedures instead of microservices/packages/modules has been a nightmare to maintain.
* At least with .Net/Entity Framework/Linq you mock out your dbcontext and test your queries with an in memory List<>
Disagree. I've implemented unit tests that connect to the normal staging instance of our database, clone the relevant parts of the schema into a throw-away namespace as temporary tables, and run the tests in that fresh namespace. About 100 lines of Perl.
That was five years ago. These days, it's even easier to do this correctly since containers allow you to quickly spin up a fresh Postgres etc. in the unit test runner.
It’s even easier and faster when you don’t have to use a database at all and mock out all of your tables with in memory lists. No code at all except your data in your lists.
It also need not be correct. If you're only ever doing "SELECT * FROM $table WHERE id = ?", you're fine, but a lot of real-world queries will use RDBMS-specific syntax. For example, off the top of my head, the function "greatest()" in Postgres is called "max()" in SQLite. What is it called in your mock?
Mocking out tables with in-memory lists adds a huge amount of extra code that's specific to the test (the part that parses and executes SQL on the lists). C# has this part built in via LINQ, but most other languages don't.
By the way, I see no practical difference between "in-memory lists" and SQLite, which is what I'm currently using for tests of RDBMS-using components, except for the fact that SQLite is much more well tested than $random_SQL_mocking_library (except, maybe, LINQ).
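A minimal sketch of that setup (the table and query are hypothetical):

    import sqlite3, unittest

    class UserQueryTest(unittest.TestCase):
        def setUp(self):
            # a fresh in-memory database per test, no mocking library
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
            self.db.execute("INSERT INTO users VALUES (1, 'alice')")

        def test_lookup(self):
            row = self.db.execute(
                "SELECT name FROM users WHERE id = ?", (1,)).fetchone()
            self.assertEqual(row[0], "alice")

    if __name__ == "__main__":
        unittest.main()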
You are correct, if I were doing unit testing with any other language besides C#, my entire argument with respect to not using a DB would be moot. But I would still rather have a module/service to enforce some type of sanity on database access.
The way that Linq works and the fact that it’s actually compiled to expression trees at compile time and that the provider translates that to the destination at runtime whether it be database specific SQL, MongoQueries or C#/IL, does make this type of testing possible.
Yeah, I thought the same thing until I found a colleague who was very fond of calling exec_sql in stored procedures, with the argument being a concatenation of the sp arguments.
> if html used length-prefixed tags rather than open/close tags most injection attacks would go away immediately.
If this was the case, it would be near-impossible to write HTML by hand. And if you're writing HTML with a tool (React, HAML etc.), the tool could be doing HTML escaping correctly instead. This isn't an issue with HTML, it's an issue with human error.
> This isn't an issue with HTML, it's an issue with human error.
All security issues are due to human error. Those are solved by building better tools.
> If this was the case, it would be near-impossible to write HTML by hand.
If, besides the text form, there were a well-defined length-prefixed binary representation, we could simply compile HTML to binary-HTML, which would immediately make the web not only safer but also much more efficient (it's scary if you think just how much parsing and reparsing goes on when displaying a web page).
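As a toy sketch of what such a length-prefixed binary form could look like (the framing here is entirely made up):

    import struct

    TAGS = {"text": 0, "html": 1, "p": 2}

    def node(tag, payload: bytes) -> bytes:
        # one node = [1-byte tag id][4-byte length][exactly that many bytes]
        return struct.pack(">BI", TAGS[tag], len(payload)) + payload

    # angle brackets inside the text are just payload bytes, never markup
    doc = node("html", node("p", node("text", b"hello & <world>")))

    def parse(buf, pos=0):
        tag, length = struct.unpack_from(">BI", buf, pos)
        # the parser counts bytes; it never scans for a closing tag
        return tag, buf[pos + 5 : pos + 5 + length], pos + 5 + length

    print(parse(doc))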
If you have an issue with human error and don't design your tool to keep those errors from getting out into the world, then it is the fault of the tool.
I'm not sure what the argument you're putting forth is. All of the HTML-generating tools I'm aware of (barring dumb string templating tools) work sufficiently well and prevent human error.
My point is that there's nothing wrong with HTML. HTML isn't a tool, it's a format for storing and transmitting hypertext. If you're using React or HAML or any of the other HTML-generating tools, you're effectively immune from XSS. I'm putting forth that developers aren't using effective tools (shame on every templating engine that doesn't escape by default), and that calling the web as a platform bad is a bit nonsensical. It's like saying "folks are writing asm by hand and their code has security issues, therefore x86_64 is insecure".
The prevalence of XSS suggests that the web ecosystem has failed to produce the sort of tools you suggest. If such tools actually existed and were good, people would use them, and web app exploits would be a curiosity rather than an expectation.
However, no such tool exists. I think there's a deeper issue here: the sheer number of ways you can generate XSS alone, even ignoring the other exploit types, is far beyond what any tool is capable of stopping. Look at one of the XSS holes found by Homakov that I linked to from my article:
The XSS occurs on this line of JavaScript, not HTML:
$.get(location.pathname+'?something')
That's a simple line of jQuery that does an XmlHttpRequest to the same page that was loaded, with an additional parameter. By itself, it is not an XSS. But if the backend is/was running Ruby on Rails (presumably some old version by now) then it could turn into an XSS due to a combination of features that all look superficially harmless.
Show me the tool that would have avoided that type of exploit, without already knowing about it and having some incredibly specific hardcoded static analysis rule.
When I argue that the web is unsafe by design, it's because cases like that aren't rare, they're common. To paraphrase Veekun, scratch the surface of web security and you'll find yourself in a bottomless downward spiral, uncovering more and more horrifying trivia.
> If such tools actually existed and were good, people would use them and web app exploits would be a curiosity rather than an expectation.
I think you're missing another two obvious explanations:
1. Lack of education when picking a tool (copy paste from bad SO answers is a frequent source of bad code).
2. Developers don't care. If it works, why bother wrapping your head the rest of the way around to understand why it works or whether it's secure?
> By itself, it is not an XSS. But if the backend is/was running Ruby on Rails (presumably some old version by now) then it could turn into an XSS due to a combination of features that all look superficially harmless.
Sure, ERB before RoR essentially had security turned off by default (as I noted). And this issue could happen with any other non-web system, turning into any other kind of vulnerability. This isn't a web problem, it's a system security problem. Bad inputs in a "native" app could lead to security issues in the output of apps on other devices. Badly implemented binary data decoders in a desktop application could do far worse than a XSS in the browser.
This problem is misattributed as a "web problem" because there are far more complete systems on the web than there are on nearly any other platform. It's like the tired argument that Mac is more secure than Windows, but Windows has historically had an overwhelmingly outsized market share, making OS X issues far less valuable to attackers.
> When I argue that the web is unsafe by design, it's because cases like that aren't rare, they're common.
I don't disagree that these issues are common, but I disagree that the web is unsafe by design. The web is a platform. If everyone wrote their Python APIs without a framework, I can guarantee they would be littered with security holes. If everyone wrote their own text renderer in C++, just displaying strings on the screen would be a dangerous task.
There are good tools that make it really hard to fuck up on the web. Seriously, try to accidentally have a XSS vulnerability in an isorendered React app with Apollo. The problem is folks that want to jQuery-jockey their way across the finish line and don't understand that they are making terrible mistakes.
I think it's easy to blame developers for the failings of their tools and just say, well, they should be more educated or more serious. That'd be great, but there are too many problems with the web to educate users on how to avoid them. Even skilled developers can't reliably avoid every minefield. Look at the attacks by Homakov that I linked to, or read up on HEIST, or cross-site tracing, or SSRF attacks.
How many developers do you think might have written a web server in their time, or will do in the next 10 years? And how many of them will pass URL components straight through to glibc for resolution, as is the obvious way to do it, and create an exploitable SSRF vuln on their network? How many developers will have even heard of this type of problem?
New ways to exploit weird edge cases and obscure frameworks crop up constantly - it is a full time job even to keep up with it all. At some point you can't blame people walking through a minefield because they keep getting blown up. The problem is the mines.
> this issue could happen with any other non-web system, turning into any other kind of vulnerability. This isn't a web problem, it's a system security problem.
That's just not the case, sorry. Have you ever actually written desktop apps that use binary protocols? It's a web problem:
• It relies on the over-complex and loose parsing rules for URLs
• It relies on unexpected behaviour in one of the most popular web libraries
• It relies on bizarre and unexpected behaviour in XmlHttpRequests
• It relies on the fact that web apps routinely import code from third party servers to run in their own security context.
I have been programming for 25 years and I have never seen an exploit like that before in managed desktop apps using binary protocols to a backend.
> Seriously, try to accidentally have a XSS vulnerability in an isorendered React app with Apollo.
An isorendered React app with Apollo? I think that may be the most web thing I've heard all week ;)
"Most injection attacks are due to this; if html used length-prefixed tags rather than open/close tags most injection attacks would go away immediately."
How so? If you allow the user to send arbitrary data, and your handling of that data is where the problem lies, it isn't going to matter whether the client sends a length-prefixed piece of data. You still have to sanitize that data.
HTML, and whether it uses closing tags or not, is pretty much irrelevant to the way injection attacks work, as far as I can tell. Maybe I'm missing something...do you have an example or a reference to how this could solve injection attacks?
If the length is not pre-defined, the input has to be parsed to look for the closing tag. That makes your code vulnerable if the input tricks it into finding the wrong closing tag. But if the length is fixed, you don't have to parse it at all. That would avoid a whole class of vulnerabilities.
A simple example could be the Twitter API's handling for references (URLs/hashtags/at-user mentions) in a tweet [0]. The tweet text is returned in one field, and all references are listed in a different field together with first/last character index within the tweet where that reference was found. You don't need to parse the tweet text yourself, just display it as plain text and insert links where the references say you should.
This isn't some theoretical design. Any native application that uses a binary protocol framework like protobufs over TCP to communicate with the backend will benefit from this approach.
If you can say, “the next 450 characters are plain text and should be rendered as such”, then even if the text includes script tags (or whatever), they won’t be parsed or executed.
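A sketch of rendering with such out-of-band indices (the field layout is simplified from the real Twitter API):

    import html

    def render_tweet(text, entities):
        # entities: (start, end, url) ranges supplied by the API as
        # metadata, instead of markup embedded in the text itself
        out, pos = [], 0
        for start, end, url in sorted(entities):
            out.append(html.escape(text[pos:start]))  # plain text: escaped
            out.append('<a href="%s">%s</a>' % (
                html.escape(url, quote=True),
                html.escape(text[start:end])))
            pos = end
        out.append(html.escape(text[pos:]))
        return "".join(out)

    # a <script> in the tweet body can only ever render as text
    print(render_tweet("evil <script> https://t.co/x",
                       [(14, 28, "https://t.co/x")]))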
This seems like an argument for strong types. Which is reasonable. But, one could do that with closing tags, too. We already know that relying on a programmer to specify the length of data is prone to bugs (C/C++). And, you can't trust the client to specify the length of data.
I feel like this is conflating two different problems and potential solutions.
I'm not saying injection attacks aren't real. I'm saying that whether HTML uses closing tags or not is orthogonal to the solution. But, again, maybe I'm missing something obvious here. I just don't see how what you're suggesting can be done without types and I don't see how types require prefixing data size in order to work.
> Most injection attacks are due to this; if html used length-prefixed tags rather than open/close tags most injection attacks would go away immediately
No, it wouldn't. It wouldn't fix SQL injection, and it also wouldn't fix the path bug the OP linked.
The problem is not length; it is context-unaware strings. The problem is our obsession with the primitive types that pervade our codebases.
SQL injection is not a web problem. If you create SQL queries based on any untrusted (e.g. user) input on any platform, you have to escape/explicitly type your input.
Injection in general is simply a trust problem. If you can trust all inputs fully (hint: you can't, because nobody can), then you will never have an injection attack.
SQL injection is a problem with SQL, which has problems similar to HTML's. SQL was created as a human-friendly query language; it wasn't created to be built from strings in a programming language. A proper database API would be just a bunch of query builder calls, and with such an API SQL injection is not possible.
SQL injection is a problem with incompetent developers. Most languages have simple constructs to make them immune to injections, like parameterized queries.
If you are exposing code to an untrusted, hostile environment (which is pretty much the web), no language that does anything useful will protect you against not caring about security.
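For example, a minimal parameterized-query sketch (sqlite3 standing in for the DBMS):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT)")

    name = "Robert'); DROP TABLE users;--"  # hostile input

    # the driver sends the value as data, so it can never terminate
    # the statement early and smuggle in new SQL
    db.execute("INSERT INTO users (name) VALUES (?)", (name,))

    # string concatenation would splice the payload into the SQL text:
    # db.execute("INSERT INTO users (name) VALUES ('%s')" % name)  # injectable

    print(db.execute("SELECT name FROM users").fetchall())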
Not all queries can be parameterised - I'm not aware of any DBMS that allows for the parameterisation of identifiers (e.g. table and column names) or variadic operators and clauses (e.g. IN() and optional predicate clauses). This is why "Dynamic SQL" is a thing - which comes with the inherent risk of SQL injection.
There are many reasons to create SQL dynamically, but I can't think of a good reason for the table name to come from the client.
Even if you absolutely need to inject a string in a sql query, sanitizing it is trivial. In .net / MS SQL, a simple x = x.Replace("'","''") does the trick. For any other common data type, strong typing should be sufficient to prevent any injection.
The point is that if you know the length of some data up-front before starting to parse it, you don't have to inspect the data in any way to see when it ends. This means that you don't need to know what the SQL injection looks like and protect against it, or what JS looks like to sanitise your inputs – the problem does go away to a large extent.
Obviously nobody is going to be typing length prefixes manually, so our tools are going to do it for us.
Now we're back where we started where you accidentally inline user content as HTML, except now HTML has the added cruft of someone's HN comment solution.
This doesn't do anything for Bobby DROP TABLE injections, right? The whole thing is a user-supplied slug, there's no source of truth on how long a user's name is. Or am I missing something?
Bobby tables would be considered data. Or should be. And hopefully it would be obvious that it doesn't belong in the code section.
But like you, I'm not totally convinced. I think this idea would make it easier for people trying to do the right thing to get it right; but for the blissfully ignorant? Might not help at all. Either way it needs a more fleshed out spec.
If the query is built through an API that only ever binds user_name as data, then there is no value you can put in user_name that will let you escape the function call.
Length prefixes are one way of working this, but only scratch the surface of the issue. As others have pointed out, it's also the fact that the control elements are inline with the data.
<p:25><script:14>somethingBad()
Will still run somethingBad(). You are at least sandboxed to the containing element, though, so restricting certain elements to only appear in parts of the HTML tree could prevent this (e.g. if all scripts were disallowed in BODY, then merely constraining user-generated content to the BODY would work; right now you could still get hit by someone including </body> in their content).
Even when the sender tells you the length of the data to expect, the receiver still needs to read everything that is sent?
Or were senders always going to send true values for length and data?
Really, you can't trust any sender, so the data should be validated anyway.
There have been known attacks where a sender says "here's 400 bytes", the receiver stupidly trusted that length specifier, and the sender sent more (or fewer) crafted bytes and BOOM!
Known good data start and end specifiers, which HTML has, seem a good answer when dealing with untrusted senders (read: everyone).
This might be the biggest dichotomy I've yet seen on HN. An opinion piece voted all the way to the top of the front page (with a clickbaity title, might I add), yet the top comment soundly debunks the article's arguments.
Yeah, this is why everybody clicks on the comments link first.
Mike Hearn (the author of the original article) is a bright dude, who is well-known in several tech circles, which may explain the high ranking for the post here on HN.
I'm not intending to dismiss him outright; he may have an interesting follow-up. I guess I'm just much more optimistic about the web than he seems to be, and more critical of everything that's come before than he seems to be. I think Mike is about the same age as me, and probably has a similarly long history in tech, so I can't really pull the "hard-earned wisdom and experience" card in this conversation. I think I just disagree with him on this, and that's not a big deal.
One of us might be right. (But, I think betting against the web is crazy.)
We can assume that many HN readers are closely related to Web programming. Either they do it themselves or their wage gets paid because their employers' business depends on Web apps.
If the article is right that it is close to impossible to hire a Web developer who understands all Web security issues and knows how to mitigate them, it does not come as a surprise that there is fierce criticism of the article. It basically says you are doing a hopeless job and your employer's business model is flawed.
I'm not a Web developer, but I find the article very convincing. From the headlines I follow, Web programming changes very quickly and the frameworks change all the time - meaning that smart people are not happy with what is available and keep writing new stuff. Yet I don't think security has been the primary driver for any new framework. They are still parsing text. So let's see whether the author has any fundamentally different approach in his next post (if anybody remembers to read it).
Disclaimer: I work in embedded and our company advertises to be very secure. I know that our security sucks.
The author definitely has valid arguments about the web's security, but I think the rest of his arguments are all lazy, anecdotal, and not accurate. Comparing Google Docs to an old version of Office, for example: they are incomparable, firstly because they run on completely different platforms - Office would take a long time to install, while Google Docs is available almost instantly and can be updated almost instantly - and secondly because Google Docs includes many more benefits that come with being part of the web.
I have myself developed GUI applications using the author's beloved C++ and Qt, and I can admit it's a far better-designed and more convenient experience compared to the web, but it's hardly possible to achieve the same amount of flexibility in UI/UX design that is available on the Web. I think the fact that things are changing so fast, that standards are badly designed (at least initially), and that there are so many inconsistencies is all only because the web is a fast-moving platform that requires the consensus of many players to move forward. Also, the amount of commercial interest and the number of developers working on the web are incomparable to other platforms, hence the fast-moving nature.
> flexibility in UI/UX design that is available on the Web
If you take advantage of that flexibility to create a UX that's very different from the standard widgets, it's likely to be inaccessible to blind users with screen readers. Check out this rant on HN from a blind friend of mine (a few paragraphs in for the part that's most relevant to this thread):
As far as I know, the most accessible cross-platform UI toolkit for the desktop is SWT. It uses native widgets for most things, and actually *gasp* implements the host platforms' accessibility APIs for the custom widgets. But, I can hear it now, somebody will say they hate SWT-based applications because they reek of Windows 95. Oh well, fashion trumps all, I guess.
> The author definitely has valid arguments about the web's security, but I think the rest of his arguments are all lazy, anecdotal, and not accurate. Comparing Google Docs to an old version of Office, for example: they are incomparable, firstly because they run on completely different platforms - Office would take a long time to install, while Google Docs is available almost instantly and can be updated almost instantly - and secondly because Google Docs includes many more benefits that come with being part of the web.
But even Google knew not to depend on the universality of web apps on mobile - they have native apps for both Android and iOS. Aren’t we already at a tipping point where most web access is done on mobile devices?
To somewhat counter all the negative comments here - I read this article and agree pretty much 100% with every sentence in it. There are probably more people who agree with the post - hence the upvotes.
Yes, I agreed with the entire article as well. I didn't see anything controversial or exaggerated about it.
Edit: Ok, maybe I could have predicted that lines like "HTML 5 is a plague on our industry" would ruffle some feathers. I guess I like a little snark in my criticism.
This kind of well thought out constructive criticism leads to interesting discussion and eventually improvements, even if I don't necessarily agree with it - hence the upvotes. Dissent should be welcomed, especially when it's in a well-meaning tone.
Being the top comment means only that it has more recent upvotes than other top-level comments, not that it should be taken as more meaningful than the article. Back when they were still displaying points, you could see that the time of the upvote mattered almost as much as the upvote itself - meaning the comment with the most upvotes was not always the top comment.
FWIW, I'd take tight control if it was in pursuit of humanitarian values, such as accessibility for people with disabilities, rather than a company's bottom line. The chaotic freedom of the Web isn't very good for accessibility. Yes, yes, accessibility is possible, but in practice, very often it doesn't happen. See this rant on HN from a blind friend of mine (yes, the same one I posted elsewhere on the thread, but it drives the point of this comment home):
Yep, also on speed: it seems to me that the Microsoft Office suite, for instance, slows down every generation despite only having minor improvements and not actually being that different now from the '95 version. The nature of developers is that they will use whatever resources they have. Faster computers don't necessarily mean faster applications, but rather faster software development cycles from bigger teams, with less need for the discipline and rigor that was required before.
Software has become increasingly complicated over time. Aside from adding new features, many companies have stepped up their efforts to provide accessible applications to international audiences.
Let's not forget we've drastically increased security by writing applications in safer languages.
Oh, and newer applications tend to support a far wider variety of device types, displays, inputs, etc.
Developers should definitely be investing a lot more effort into improving the status quo, but it's unfair to claim stuff got slower without improvements.
> "the microsoft office suite for instance slows down every generation despite only having minor improvements and not actually being that different now than from 95"
I can't comment on most of the Office suite, but Excel evolved quite a bit since 95. Tables, PowerBI, Apps for Office, etc... If your needs are basic enough then even VisiCalc will do the job, but new features do make an impact for more demanding users.
That's not the point though. The example given in the article was Google Docs, which has the same UI paradigm as Word. Under the hood it's obviously massively different, with real-time collaboration and constantly up-to-date syncing.
So, the reasoning is that the UI is fundamentally the same as (or worse than, if not done right) native UI from the 90's, yet it hasn't had a massive speed increase, which seems wasteful.
But modern UI in Office is only an evolution of what was there in the 90s and hasn't changed fundamentally either yet it doesn't feel any faster.
UI is only a small part of an app. A well-designed app will have most of the work performed outside of the UI thread, and it shouldn't feel any slower than a native implementation. My thought is that rendering speed isn't the issue; application design is.
> But modern UI in Office is only an evolution of what was there in the 90s and hasn't changed fundamentally either yet it doesn't feel any faster.
Sure, and Office in the 90s didn't feel any faster than the word processing I was doing on an Apple II+ in middle school. This is because the people buying (and building) software care about other things than processor efficiency. If it's generally fast enough for their normal use, they won't switch to a competitor.
The notion of "wasteful" here is in terms of something like RAM usage or processor instructions. But the correct measure is user time, including the number of user hours of labor needed to buy the device. The original Apple II cost 564 hours of minimum wage labor, and you were up over 1000 hours if you wanted a floppy drive and a decent amount of RAM. Today, a low-end netbook costs 28 hours of minimum wage labor.
Suppose you managed to put on that netbook something with the efficiency of Apple Writer or Office 4.0. Would anything be better? No, because the spare cycles and RAM would go unused. They would be just as wasted. No significant number of user hours would be saved. Or, alternatively, the in-theory cheaper computer they could buy would save them very few working hours.
As long as the user experience is as good, then the hardware notion of "wasteful" is a theoretical, aesthetic value, not a practical one.
You are ignoring battery life, which is a useful consideration on laptops, which appear to be the majority of PCs.
You are also ignoring that a user may want to run a variety of apps, and not want to close any of them or have any of the lot swapped out, while you pretend the hit on performance, resources, and battery life isn't cumulative.
I'm not ignoring them. I just didn't mention them in this comment. They fit in the same rubric.
A user can run a few things even on the low-end netbook. Tabs are cheap. And if they hit the limits of their machine, they can either pay a reasonable number of user-minutes to actively manage resources or a modest number of labor-hours to get something beefier.
I personally would like to see things better optimized. After all, I started programming on a computer with 4K of RAM. But I recognize that there is very little economic incentive to do so.
Isn't it kind of offensive to suppose that billions of users should pay more money so that hundreds of developers can use less efficient tools to build apps?
If those are the only factors and the numbers fall in particular ranges, sure. Otherwise, no.
Try doing the math here. How much cheaper would a netbook get if every single developer coordinated to reduce RAM and CPU usage? $5? Maybe $10? Looking at market prices, old RAM and CPUs are cheap. They consume basically the same physical resources as new RAM and CPUs, so price competition for not-the-best hardware is fierce.
Now ask those people if they'd pay $5 or $10 more for assorted new software features. Any features they can think of. And keep in mind that in that price range, people are paying $10 more to pick the color of their computer.
So sure, it offends me a little, because I like optimizing the things I pay attention to, like RAM usage. But if instead I optimize for the sorts of things users care about, especially as reflected by what they'll actually pay for, it becomes pretty clear: users don't care about the things I do.
So then the moral question becomes for me: who am I to impose my aesthetic choices on the people I'm trying to serve?
Trivializing bad software that is slower on devices orders of magnitude faster, by trying to equate the cost to netbook prices, is a particularly bad methodology of comparison.
This is especially true as people are promoting everyone moving to a platform that is substantially worse.
How about getting more performance and battery life out of the same machine, which affects more than netbook users?
I am not trivializing anything. I don't like bad software any more than you. However.
You may have noticed that we are in the technology industry. That means the final measure of our work is economic. The final judges of our work are our customers.
If you believe that X is better in our industry, you must be able to demonstrate that betterness in terms of user economics, in terms of user experience. You haven't yet, and you seem unwilling to even grapple with my argument in those terms. Are you planning on trying?
I dunno. When I was overseas I had a Kindle which lasted for something like two weeks between charges; that was awesome. Much better than my laptop which I had to charge every day for hours.
I wouldn't mind a true low-power laptop which only needed a charge twice a month.
E-ink displays only use however much battery the screen inherently loses when not changing pages. If you only read 500 screens of text that month, then it only consumed a trickle of battery x 500. A laptop screen consumes power every second it's on, and you also ask much more of it than rendering text.
What you propose is interesting nonetheless: what is the most battery life that can reasonably be packed into a device that is modest but still useful?
Sure, but that won't come from people programming differently. The laptop backlight alone is a few watts. If your battery is 40 watt-hours, you're not going to get two weeks of usage no matter how little the CPU gets used: at even 3 watts for the backlight, 40 Wh / 3 W is roughly 13 hours, versus the ~336 hours in two weeks.
Yes, so it's pointless for the author to say that a problem with web apps is that they're slower than native apps. It's moot nowadays: a well-designed web app using modern techniques should not feel any slower to an end user than a desktop app. In fact, with the advanced rendering engines in modern web browsers, it can feel more responsive and more usable than native.
> "But modern UI in Office is only an evolution of what was there in the 90s and hasn't changed fundamentally either yet it doesn't feel any faster."
Evolution of a UI isn't as important as evolution of the features the UI exposes. As for whether it feels any faster, that depends on what you're doing. To give an example, Excel formulas can be calculated using multiple CPU cores, which AFAIK wasn't a feature of Excel in the 1990s. You'll only see that speedup if you're working with a large enough volume of formulas. Measuring speed by UI speed alone doesn't get you very far.
All that being said, you won't find me disagreeing with the fact that desktop apps are bloated (web apps even more so). I've experienced responsive desktop apps running on a 7.14MHz CPU. The fact that we've thrown away most of the hardware improvements since the 1980s should be clear to anyone paying attention.
That's precisely the point. The author of the article was complaining that web applications are slow and compared it to Windows 95.
And my point is that web apps have a lot of features that didn't exist back then, and because of feature additions Office and other native applications don't exactly feel snappy either.
That was the general point, but I was responding to a side comment that I disagreed with.
> "because of feature additions"
Adding features does not require slowing an application down. The reason modern apps (desktop and web) are slow is to do with inefficient use of computing resources, which has very little to do with available features.
> UI is only a small part of an app. A well-designed app will have most of the work performed outside of the UI thread, and it shouldn't feel any slower than a native implementation. My thought is that rendering speed isn't the issue; application design is.
Can you run web apps in a multithreaded environment? UI remains the largest overhead in a web app in my opinion..
Or, how much speedup would you estimate if we ported all Google Docs functionality into Word 97? I'd estimate 1000 times. :) Or perhaps the computation power for drawing the cursor alone will far exceed the whole of Word 97.
> Can you run web apps in a multithreaded environment? UI remains the largest overhead in a web app in my opinion..
Yes, you have web workers for multithreaded development. They're basically independent applications which run on different threads, and you pass messages (which are simply objects) between them. The browsers themselves are also moving their layout and rendering engines to be multithreaded.
A well-designed app would do very little on the UI thread; it would pass messages between the UI thread and the web workers, and spin up web workers on demand to offload work. It's not as easy to develop in as some environments, but it's also fairly straightforward once you make the effort.
If I were designing React, for instance, I'd have all the virtual DOM / diffing work handled by a web worker and would only pass the updates through to the UI thread when computation is completed.
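To make the pattern concrete, here's a minimal sketch; the worker file name and message shape are made up for illustration:

    // main.js -- spawn a worker and hand it a chunk of work
    const worker = new Worker('sum-worker.js');
    worker.onmessage = (e) => console.log('result from worker:', e.data);
    worker.postMessage({ numbers: [1, 2, 3, 4] });

    // sum-worker.js -- runs on its own thread, never blocks the UI
    self.onmessage = (e) => {
      const total = e.data.numbers.reduce((a, b) => a + b, 0);
      self.postMessage(total);
    };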
> Or, how much speedup would you estimate if we ported all Google Docs functionality into Word 97? I'd estimate 1000 times. :) Or perhaps the computation power for drawing the cursor alone will far exceed the whole of Word 97.
Whatever the speedup would be, users would likely not notice it, or would only notice a slight improvement.
And yes, drawing the cursor as a 1px-wide div is computationally intensive. I guess you're referring to that article posted on HN a while back, that VS Code used 13% of the CPU just to render the cursor? :) Doing stuff outside of contenteditable is not ideal for writing applications, as you lose a lot of system settings (like keyboard mappings, cursor blink speed, etc.) that the browser automatically translates to the built-in cursor.
> Yes, you have web workers for multithreaded development. They're basically independent applications which run on different threads, and you pass messages (which are simply objects) between them. The browsers themselves are also moving their layout and rendering engines to be multithreaded.
Yes, I'm actually referring to this: the programming model. Workers are great if you can divide and conquer the problem and offload (exactly what you mentioned). But the messaging payload can be high under some circumstances, when you have to repeatedly copy a lot of duplicate data to start a worker. I don't have hands-on experience with web workers, but I think the messaging overhead is unlikely to be solved without introducing channels/threads. Workers are more like processes, and currently they don't have copy-on-write. Of course we may see improvements over time, but this amounts to gradually reinventing all the possible wheels from an operating system in order to be as performant as an OS.
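For what it's worth, transferable objects already avoid the copy for binary payloads; it's a move, not a clone. A sketch, reusing the worker from above (variable names are illustrative):

    // Transfer ownership of a large buffer to a worker with no copy.
    const buf = new Float64Array(1000000).buffer; // ~8 MB
    worker.postMessage(buf, [buf]); // second argument is the transfer list
    // buf is now detached on this thread: buf.byteLength === 0

That doesn't give you shared memory or copy-on-write, but it removes the duplication cost for the big-payload case.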
> A well-designed app would do very little on the UI thread
I partially agree. It may do little, but in turn the consequence may be huge. This is because the DOM is not a zero-cost abstraction of a UI. It does not understand what the programmer really wants to do if, say, he/she is constantly ramping the transparency of a 1px div. Too much happens before the cursor blink is reflected onto a framebuffer, compared to a "native" GUI program. I think it would be very helpful if the DOM could be declarative as in XAML, where you can really say <ComplexGUIElement ... /> without translating it eventually into barebones bits. Developers are paying too much (the consequence) to customize this representation.
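As a small mitigation, the blink can at least be declared once and handed to the browser instead of being driven from script; with the Web Animations API the compositor can often run an opacity animation off the main thread. A sketch, where caretEl is a hypothetical element standing in for a custom caret:

    // Declare the blink once; no per-frame JavaScript needed afterwards.
    caretEl.animate(
      [{ opacity: 1 }, { opacity: 0 }],
      { duration: 1000, iterations: Infinity }
    );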
> Whatever the speedup would be, users would likely not notice it, or would only notice a slight improvement.
There won't be a faster-than-light word processor but I really want it to:
1. Start immediately (say 10ms instead of 1000ms) when I call it up
2. Respond immediately when I interact (say 1ms instead of 100ms)
3. Reduce visual distractions until we get full 120fps. Don't do animations if we don't have 120fps.
4. Have the above requirements always be satisfiable by upgrading to a better computer.
The speedup will guarantee 4) and make the performance scalable. But currently web apps lag no matter whether I use a cellphone or a flagship workstation. This clearly indicates that the performance of a web app does not scale linearly with computation power, and this is not about how much JavaScript is executed (that part will scale, I believe).
> But modern UI in Office is only an evolution of what was there in the 90s and hasn't changed fundamentally either yet it doesn't feel any faster.
Sorry, but this is absolutely untrue. The Ribbon UI introduced in Office 2007 was a massive change, functionally and visually. You went from a static toolbar that would just show and hide buttons to live categories which not only resize but change their options and layout as you customize or resize the window. There are now dropdowns and built-in input fields, live previews in the document as you hover over tools and options, and more.
Same for the Backstage UI introduced in Office 2010 for saving files, viewing recents, and other file and option operations. You have full-screen animations and interactions.
Hell, Microsoft even made the text cursor fade in and out instead of blinking, which needs more processing power.
Could Microsoft have optimized it more? Yes. But they definitely have added tons to it since the 90s and even mid-00's to justify why it's slower.
But the original article was saying that the UI paradigms are the same while the interface is slower. The UI paradigm on the web is as far removed from 90s Windows as modern Windows is, if not more.
All these points are no different from how web tech is evolving UI, so they should be discounted the same way the web technology points were.
Excel hasn't evolved at all since 2003. They added a couple of new chart types and changed some colors, but functionally they haven't made any significant change. In fact some grey controls have literally not been updated in 20 years (try clicking the fx button near the formula bar, with the same broken search feature since the 90s).
There are lots of things they could do. Linking data between spreadsheets or between excel and powerpoint sucks (a significant part of the user base needs to prepare decks and reports that contain lots of charts and numeric tables).
They could learn from Apple's approach with Numbers, where a worksheet is a canvas on which you can place multiple tables or charts or diagrams, which makes a lot more sense than the single-grid-per-worksheet approach (think of having to display two tables one above the other: you are forced to align columns of different widths, and how does the top table overflow?).
Users who need to script or create UDFs are stuck with a VB6 editor that hasn't seen any update in 20 years, and an antiquated language.
I could continue the list for a while. These are basic core features. There might be 1000 people in the world who use Power BI, and only because their IT dept set it up for them. But there are millions of users whose lives would be made easier with the suggestions I made above.
> "They could learn from Apple's approach with numbers where a worksheet is a canvas on which you can place multiple tables or charts or diagrams"
You can do this with Excel also. When was the last time you used Excel?
> "There might be 1000 people in the world who use power BI, and only because their IT dept set it up for them."
The Power BI features in Excel come ready to use out of the box. Clearly you've never used them, but they're by far the best new features in modern Excel. Any power user of Excel that isn't exploring them is missing out.
Mark separate areas on the same worksheet as tables, and set the chart location to be the same worksheet as the tables. If you're bothered by the gridlines, those can be turned off. Not much to it, really. You can also create dashboard-style content with Power View (which is one of the Power BI features built into Excel).
No need to be condescending; I am a heavy Excel user, possibly more than you.
Tables may be fine in Excel for data but useless for any custom logic, which is what I use Excel for the most. I am not aware that tables overflow with a scrollbar like Apple's approach allows. If you need to add more rows to the top table, the bottom table goes off screen. If the top table contains a very wide column, the bottom table needs to have the same column width. These are all inconveniences that Apple's approach solves (and it wouldn't be very hard to implement in Excel while preserving backward compatibility). I don't see how Excel tables solve any of that.
> "No need to be condescending, I am a heavy excel user, possibly more than you."
Believe what you want.
> "I am not aware that tables overflow with a scrollbar like Apple's approach allows."
If scrollbars matter to you then you can use Power View, which is one of the Power BI features available in Excel. To get a better idea of how it works, take a look at this short video:
The point I'm making by bringing up VisiCalc is, if your needs are basic enough, any spreadsheet program will do the job, even the first one. You'll only understand why the more modern desktop spreadsheet programs are more advanced if you have a reason to use the newer features.
There's nothing wrong with VisiCalc. It's incredibly basic (even for the time), but I still have a copy on my computer - I admit, though, that I use Lotus 123 more often.
Power users are the vector to spread Microsoft-only spreadsheet viruses.
This is what gets lost on most people.
The power users create some "nifty" spreadsheet that runs some "important" piece of a business. That "nifty" spreadsheet now requires Microsoft Excel and forces everybody in the company to have a copy if they want access to it.
Those power users are covering for the lack of resources and/or knowledge in a company's IT department. Excel may not be the best tool for long-tail apps, but there's no arguing with its ability to quickly build useful tools. The power user that you see as spreading a virus is essentially successful because they can innovate more quickly than anyone else in the company. If open-source tools gave this power user the same ability to rapidly innovate, then those tools should be made available to them (along with training on how to use the software).
>Every negative thing said about the web is true of every other platform, so far. It just seems to ignore how bad software has always been (on average). "Web development is slowly reinventing the 1990's." The 90s were slowly reinventing UNIX and stuff invented at Bell Labs. "Web apps are impossible to secure." Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.
I don't see how this is an argument in favor of the web. If anything, it reinforces the accusation TFA made against it even more.
If "The 90s were slowly reinventing UNIX" then why would be recreating the 90s today a good thing?
If the 90s "slowly reinvented UNIX", then the correct thing to do would be for the web today to either be a fully modern 2017-worthy technology, or at least take its starting point from where the 90s ENDED, not re-invent the 90s.
"If the 90s "slowly reinvented UNIX", then the correct thing to do would be for the web today to either be a fully modern 2017-worthy technology, or at least take its starting point from where the 90s ENDED, not re-invent the 90s."
Since when has an inexperienced mob of people ever done the correct thing on the first try?
And, yet, the mob has continued the very fine legacy of those 90s (and 80s and 70s) software developers in pushing software into more places it's never been before. Somehow, it's working, despite the relative ignorance and stupidity of the average developer (myself included) in their understanding of history.
I think I'm being misinterpreted as saying the web is great because it has no flaws. Which is not my intention. The web has many ugly flaws. The web is great because of what it does despite those flaws. And, also, a lot of those flaws come down to inexperience, which we can't cure with technology. It seems likely it can only be cured by making the same dumb mistakes a few times until it becomes collective wisdom that it was a dumb mistake...the kind that gets beaten out of programmers very early during their learning process.
I guess I'm just more optimistic about the web-as-platform than most. I see all its flaws, I just don't think they should result in a death sentence.
But, if you show me something better, I'll gladly participate.
Better for what? The web is getting worse and worse for the users. Before this JavaScript craze it was predictable, bookmarkable, usable and reasonably performant.
Now it's slow, burns your battery, it's full of ads/tracking and anti-patterns like infinite scroll or SPAs and view source is useless.
For me, a site like HN or amazon (with some reservations) is the pinnacle of what the web is able to offer.
>Since when has an inexperienced mob of people ever done the correct thing on the first try?
Only web standards are not created by an "inexperienced mob of people" but by large multinationals, multiple CS PhDs, and seasoned developers.
And if we consider every generation of new developers an "inexperienced mob of people", then we have absolutely no claim to ever being called an industry, let alone engineers.
>And, yet, the mob has continued the very fine legacy of those 90s (and 80s and 70s) software developers in pushing software into more places it's never been before. Somehow, it's working
Working in what sense? Mobile apps, counting in the millions, have actually "pushed software into more places it's never been before", and most of those are native, or done with non-web technologies (though web stacks encroach there too). For most people, those mobile apps on their smartphones are how they interact with the internet most of the time, not the www, even if they have a laptop at home or at work. For younger people, even more so.
>But, if you show me something better, I'll gladly participate.
Better things come from people feeling the need to create them. They don't just appear on their own so that people can migrate to them. Otherwise people can be stuck with the same BS for decades, centuries, or millennia (consider dynasties ruling for centuries before the people of some country attempt to bring them down in favor of democracy).
This is a WEB APP: https://3d.delavega.us, built with three.js. It can run on most iOS and Android smartphones, most Windows and macOS machines, and Linux computers.
It is likely to run on over a billion devices, with no installation required. Can a non-web, native app do better than that?
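For a sense of the effort involved, a minimal three.js scene really is about this short (a sketch, assuming three.js is loaded as the global THREE):

    // A spinning cube: runs unchanged on any WebGL-capable device.
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 1000);
    camera.position.z = 3;
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(innerWidth, innerHeight);
    document.body.appendChild(renderer.domElement);
    const cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
    scene.add(cube);
    (function animate() {
      requestAnimationFrame(animate);
      cube.rotation.x += 0.01;
      cube.rotation.y += 0.01;
      renderer.render(scene, camera);
    })();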
My alarm siren went off when the commentary started critiquing the “complexity” of Google docs as compared to Windows explorer circa 1998.
Complex things are often complex because the work that we do as humans is, well, complicated.
An epic designer and all-around smart person may painstakingly build a journey map and produce the ultimate document template, one that addresses every need you are aware of. Then I come along and want something else.
When the answer is that everything is wrong, the question is usually wrong.
Your alarm shouldn't go off, because the example is very much apt. The article compared the UI offered by both, and they are indeed directly comparable.
As for the work Google Docs does: come on, it's a glorified Markdown editor; it loses in any kind of comparison with Windows 95-era Word.
Real time collaboration is an awesome feature and essentially what justifies Google Docs' existence, as it's behind Word in practically every other area (though I find Sheets more intuitive than Excel, that might just be familiarity).
The technology to do RTC is not particularly resource intensive on the client side. Nor is it web specific: the native Android versions of Google Docs don't use the web but they do support RTC.
RTC is enabled by an algorithm called "operational transform". It's a very clever algorithm that is rather tricky to implement properly, but it doesn't involve loading huge datasets or solving vast numbers of equations. It's ultimately still just about manipulating text. You could have implemented the client-side part of it on Windows 95 without trouble, I'd think. At least I can't see any obvious problems with doing so, assuming a decent Windows 95 machine, like one with 8 or 16 MB of RAM.
OT does, however, require the entire app to be built around the concept. You can't easily retrofit it to an existing editor.
The reason Word 95 didn't have Docs-style realtime editing is simply that networks back then were kind of rare, slow, and crappy, and word processor designers didn't know about the OT algorithm, because it was still being researched in academia.
The real question is - if we had a better client side platform on laptops and desktops, one that supported some of the best features of the web without the rest, would Docs RTC still be possible? Surely yes!
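For flavor, the core of OT is small; here's the classic insert-vs-insert transform as a sketch (real implementations also handle deletes, and tie-breaking by site ID is just one convention):

    // Transform concurrent insert `a` against insert `b` so that both
    // replicas converge on the same document.
    function transformInsert(a, b) {
      if (a.pos < b.pos || (a.pos === b.pos && a.site < b.site)) {
        return a; // a lands before b; its position is unaffected
      }
      return { pos: a.pos + b.text.length, text: a.text, site: a.site };
    }

The hard part isn't the arithmetic; it's structuring the whole editor around operations instead of document state, as noted above.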
You can say the same about Windows Server 2016. Recommended RAM has gone up more than 100-fold, from 16 MB to 2 GB. Developers use the resources made available to them; that has nothing to do with the web.
No one is writing web apps in JavaScript because they're "using the resources available" to them in the form of powerful hardware. They're using the only TOOLS available (JavaScript). The problem is we just don't have a better choice, at least on the front end.
Every generation of programmers _does_ learn from previous work, and every new platform starts from scratch learning the lessons and incrementally evolves. A Hello World GUI on Windows 95 required calling into a complex and undecipherable Win32 API; a Hello World on the web needs one simple line. Platforms do get frozen over time (like the Linux kernel), and people use them to build useful things with low effort. The Linux kernel is a result of incremental evolution: Linus proudly says that it's not designed.
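(That one line can literally be a file containing just

    <script>document.write('Hello, world!')</script>

saved as hello.html; no SDK, no build step.)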
There are severe shortcomings in all platforms that have aged. Why does power management in Linux suck so hard? Why can't we have networked filesystems by default (NFS is quite bad, btw)? Until somewhat recently (~7 years ago), audio on Linux was a disaster: "Linux was never designed to do low-latency audio, or even handle multiple audio streams (anyone remember upmixing in PulseAudio?)". What the hell are UNIX sockets? Is there no modern way for desktop applications to talk to each other? (Attempts like kdbus to move DBus into the kernel have so far failed.) Why doesn't it have a native display engine? (X11?)
Today it's fashionable to criticize the web, since the majority of industry programmers endure it. Sure, there are some "simple" things that are just "not possible" with the web (everyone's pet peeve: centering). Yes, you lose functionality of a desktop application, but that's the whole point of a new platform: make what people really need easy, at the cost of other functionality. For an example, see how Emacs has been reinvented as a web app in the form of Atom. You don't have to write hundreds of lines of arcane elisp, but you also don't get many features. Atom is a distillation of the editor features that people really want.
I don't understand the criticism of transpiling everything to JS; you do, after all, compile all desktop applications to x86 assembly anyway. x86 assembly is another awful standard: it has evolved into ugliness (ARM offers some hope). Every platform was designed cleanly to start with and evolved into ugliness as it aged. We already have a rethink of part of the system: wasm looks quite promising, and you'll soon be able to write your Idris to run in a web browser.
Look, if we start comparing today's way of writing end-user applications to Delphi we're just going to sit here crying all the time. It was a beauty and a blessing, and I've never seen any way to develop GUI applications surpass the Delphi Visual Component Library.
I agree - nothing new.
Reason: the next generation of developers has to make the same mistakes as the previous generation. I mean, why wouldn't they? It's not like there is any institutional memory in this profession.
> Most web apps are built in languages that don't have buffer overrun problems.
The author is using "buffer" in a different sense than you are. You're thinking of a malloc'd buffer. The author is using "buffer" more abstractly, to refer to a data segment, such as a JSON or HTML string, or a string of encoded form data. His point is that the latter type of "buffer" has no declared length and needs to be parsed in order to determine where it ends, and that as a result it is subject to problems one can call "buffer overruns" by analogy with the traditional C scenario, in which you obtain a pointer to memory you should not have access to.
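A sketch of the distinction, with an invented framing scheme for illustration: a length-prefixed frame can be sliced out without scanning, while delimiter-terminated text has to be parsed, and the parser is where injection-style bugs live.

    // Length-prefixed framing: 4-byte big-endian length, then payload.
    function encodeFrame(bytes) {
      const out = new Uint8Array(4 + bytes.length);
      new DataView(out.buffer).setUint32(0, bytes.length);
      out.set(bytes, 4);
      return out;
    }
    function decodeFrame(frame) {
      const len = new DataView(frame.buffer, frame.byteOffset).getUint32(0);
      return frame.subarray(4, 4 + len); // no scanning, no escaping rules
    }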
"Most web apps are built in languages that don't have buffer overrun problems."
You misunderstood the author's point. Things like SQL injection are really equivalent to buffer overflow attacks -- data creeping into the code because of poor bounds checking.
But SQL injection isn't a thing unique to the web, right? Like, SQL injection is totally a thing with C/C++ as well. Maybe focus on one problem at a time.
SQL injection is to do with SQL, a text based protocol for expressing commands to a server. Like all text based protocols trying to combine it with user-provided data immediately takes you into a world of peculiar escaping rules, magic quotes and constant security failures.
The fix for SQL injection is to work with binary APIs and protocols more. Parameterised queries are the smallest step to that world, where the user-supplied data rides alongside the query itself in separated length-checked buffers (well, assuming you're not writing buggy C - let's presume modern bounds checking languages here). They aren't combined back into text, instead the database engine itself knows how to combine them when it converts the SQL to its own internal binary in-memory representation, as IR objects.
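A minimal sketch of the difference, assuming a node-postgres-style db.query (the table and variable names are invented):

    // Unsafe: user input is spliced into the SQL text itself.
    //   db.query("SELECT * FROM users WHERE email = '" + email + "'");

    // Parameterised: the query text and the data travel separately,
    // so the database never re-parses user input as SQL.
    db.query('SELECT * FROM users WHERE email = $1', [email]);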
Another fix is to move entirely to the world of type-safe, bounds-checked APIs via an ORM. But then you pay the cost of the impedance mismatch between the object and relational realms, which isn't great.
> Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.
Many programs in the 90s, especially of the simple CRUD type, were written in Visual Basic and other RAD tools, as they were known at the time, and later Java.
> Is this really a common problem in web apps? Most web apps are built in languages that don't have buffer overrun problems.
It's not buffer overrun in the "undefined behavior" sense, but rather problems relating to the need to parse text data, which can be tricky and susceptible to injection attacks.
"Many programs in the 90s, especially of the simple CRUD type, were written in VisualBasic and other RAD tools, as they were known at the time, and later Java."
And, we complained endlessly about how slow and bloated those programs were. So it goes.
As an iOS developer, I would say this description of web development does not hold true for iOS. Sure, it slowly evolved to its current state, but the frameworks are much more thought out than their web counterparts.
"As an iOS developer" is another way of saying "I can't see past the walls of Apple's walled garden".
Seriously, the reactive frameworks (any, really: React/VueJS/Preact/...) used in tandem with a separate state container (Redux, Vuex...) are a much better thought-out approach to application programming than anything in the Cocoa/Swift world.
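The core of the pattern is tiny. A hand-rolled toy version (mirroring, but not being, the real Redux API):

    // One store, one pure reducer, subscribers notified on each dispatch.
    function createStore(reducer) {
      let state = reducer(undefined, { type: '@@init' });
      const listeners = [];
      return {
        getState: () => state,
        dispatch(action) {
          state = reducer(state, action);
          listeners.forEach((fn) => fn());
        },
        subscribe: (fn) => listeners.push(fn),
      };
    }

    const store = createStore((count = 0, action) =>
      action.type === 'increment' ? count + 1 : count);
    store.subscribe(() => console.log('count is now', store.getState()));
    store.dispatch({ type: 'increment' }); // logs: count is now 1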
> Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.
Back then the compilers sucked. They would take complete crap code and it would still work. They were like browsers are today. (This is from my experience going through one old MUD codebase.)
Today the song is different. Not only will compilers warn you about many things, there are even tools for static (and dynamic) analysis. So the argument that C (and even the more complex C++) is inherently insecure holds much less weight (just run old code through a static analyzer, or a normal compiler for that matter).
That said there's only one way to write a "secure program", and that is formal verification.
People that talk with a serious tone should back up their claims, at least that's my opinion.
C and C++ are definitely not as secure as a language with automatic memory management. OOB reads/writes, type confusion, and UAF are all very real problems in C and C++.
Static analysis helps, but it can't catch everything. I work on a modern C++ codebase, and we still face all of these issues.
Formal verification is infeasible for most software projects, but they can get guaranteed type/memory safety by using a language proven to be safe. C/C++ can't give you that, but JavaScript might be able to.
Not as secure, but nowhere near the death traps some (many?) describe them as.
Things that are written in C these days are usually written in C for performance reasons. FFmpeg would not have anywhere near the performance it has if it were written in a memory-safe language instead of C and assembly. I doubt that a magical compiler (and/or language) will appear in my lifetime that can compile high-level code into machine code this performant, especially when it comes to memory management. (Note that C also has advantages other than performance.)
JS doesn't even have a proper specification, let alone a bug-free interpreter/compiler.
EDIT: AFAIK verifying memory access is part of a formal verification, where memory is also modeled mathematically.
C and C++ simply weren't designed with safety in mind. Even with a good compiler and static analysis, security-critical bugs will slip through the net that simply wouldn't happen in other languages. It's not so much a question of whether it's possible to write safe C, but whether it's natural or easy. C is unsafe by default.
People always shit on C for security, perhaps rightly so. But I would like to point out that 99% of everything out there has C or C++ at its base: CPython is C, the Java VM is C++, Rust is built on LLVM, which is C++. Yes, implementing your user-facing application in some non-C language may improve security, but you are still depending on C when you do so.
So is C the problem, or is it modern CPU architecture? C has stuck around for so long because of how close it is to assembly language. There will always be a need for a language that is one layer above assembly, and currently assembly is incredibly hard to secure.
> Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.
You know that most of today's OSes are written in C or C++?
Also, many higher-level languages are themselves written in C or C++.
Writing secure applications is hard and needs a lot of discipline and knowledge that most developers simply do not have.
Better tools can and should help here, as well as better languages. But it is still possible to write pretty secure and efficient software in modern C++. It is not easy, but it is possible.
In another comment on this page, there is a developer who claims a web server was made more secure by writing it in Perl (which is written in C/C++). The original webserver was written in C.
> Programs in the 90s were written in C and C++. C is impossible to secure. C++ is impossible to secure.
> "Buffers that don’t specify their length"
And yet, we found good ways to eliminate the most common sources of these problems by using new languages. The web, on the other hand, is an amalgam of several different technologies, and creating a new language won't make it more secure.
C is not impossible to secure, actually. There are popular C programs which are more robust than your average high-level dynamic language program. It takes a deep commitment (hence a lack of good examples), but there is generally a clear path to a well-behaved program in C, and there's nothing about C itself which prevents you from writing secure code. On the web, you must actively mitigate pitfalls of the platform itself, in C you just have to make sure your program is itself well behaved.
You might argue either way, but a straightforward C program can be correct if it is well formulated, but a straightforward web app can not be correct unless it is fully mitigated.
Nitroglycerin is a perfectly serviceable explosive for mining purposes but there is a really good reason it is called the Nobel prize and it isn't because the folks working with nitroglycerin "lacked a deep commitment to safety". Alfred Nobel invented dynamite to create a safer explosive and his work directly improved safety (and he made a fortune in the process).
>C is not impossible to secure
Expert compiler writers and computer scientists disagree with this assertion. History seems to be on their side.
Writing "secure" C requires meticulous attention to detail at every level, intimate knowledge of undefined behavior _and_ of compiler optimization, along with the exact options passed to the compiler. It requires comprehensive reasoning about signed integer behavior and massive amounts of boilerplate to check for potential overflow. It also requires extensive data-flow analysis to prove the provenance of all values (as Heartbleed taught us) because a single mistake in calculating a length leads to memory corruption.
To put it another way: No one can write fully secure C code. It has never been done to date. All non-trivial programs written in C contain exploitable security vulnerabilities. The combinatorial explosion of complexity makes it impossible both to formally verify and to permit human reasoning about the global behavior for all likely inputs, let alone unlikely ones.
How many really truly secure C programs have ever been released into the wild? Maybe qmail? But qmail did it by completely rewriting the C standard library.
Admittedly few, but generally in native land you have the ability to plaster over platform deficiencies with equally-well-performing code. On the web, you can never really compete with the execution speed or integration of the native code in the browser, so you have to accept whatever is there.
I'd say OpenSSH (since SSH2) has a better track record than most webapps, as unfair a comparison as that is. In terms of local robustness, there's SeL4, which is also a bit unfair (since it took about a decade for a team of geniuses to prove enough properties to make it probably not very buggy).
I wouldn't consider seL4 to be a "C project". Yes, their github repository is mostly C, but the process of writing seL4 was extremely involved: write the kernel in Haskell, then write it again in C, then prove that the C is equivalent to the Haskell. seL4 is ~9000 lines of C, ~600 lines of asm, and ~200,000 lines of Isabelle (theorem prover).
I don't disagree with your use of OpenSSH as an example.
And yet the code that controls spacecraft launches is written in C. I'll still agree with you that it's really hard to write good, secure C code.
Totally agree, and would add that it's no coincidence that articles like these tend to conflate "web programming" with the current state of the JS ecosystem. Yes JS is kinda crazy if you don't know how to select the right tooling for the job (just like every other popular language), but the leap to the web in general - getting people to go along with the conflation - is not possible without a good deal of FUD.
Instead of thinking of it as buffers, you just have to encode/decode for the proper environment. Such repetitive stuff is easily implemented as layers in the stack.
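For example, one such layer for the HTML element context might look like this (a minimal sketch; in practice you'd lean on your framework's auto-escaping):

    // Encode untrusted text so the browser treats it as text, not markup.
    function escapeHtml(s) {
      return s.replace(/[&<>"']/g, (c) => ({
        '&': '&amp;', '<': '&lt;', '>': '&gt;',
        '"': '&quot;', "'": '&#39;',
      }[c]));
    }
    escapeHtml('<img onerror=alert(1)>'); // "&lt;img onerror=alert(1)&gt;"

Forget that layer once and you have XSS, which is the web flavor of the unbounded-buffer problem discussed above.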
I just find the way DOM/CSS does layout and styling to be completely convoluted and crazy compared to any desktop toolkit since 1990. Centering anything, either vertically or horizontally, should not require me to google, and most importantly should not have multiple different solutions. Simple things should not just have simple solutions; they should have one simple solution.
Memory-unsafe programs on the desktop should go the same way as the HTML layout model.
As the other commenter said, flexbox and CSS Grid alleviate many of the issues that were commonly raised a few years back.
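For the record, the once-infamous centering problem is now a couple of declarations (CSS here, since that's where the fix lives; class names are illustrative):

    /* Flexbox: center a child both horizontally and vertically. */
    .parent {
      display: flex;
      justify-content: center; /* horizontal */
      align-items: center;     /* vertical */
    }
    /* Or with Grid, one line: */
    .parent { display: grid; place-items: center; }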
Check out Yoga [0]. It's a small layout engine based on flexbox and the CSS box model. It doesn't cover all use cases, but it's pretty powerful for its size.
It's important to remember that CSS and the DOM were initially created and developed with certain kinds of documents in mind. Both are certainly quirky and missing a lot of features, but I wouldn't say they're as bad as many people make them out to be. Based on my experience with native desktop toolkits, they're all quirky in one way or another. One of the biggest issues with modern CSS is that it doesn't have sensible defaults for web apps.
Could you provide an example of your preferred approach to handling layout and styles, and talk a bit about why you consider it superior?
What key features do you consider missing from CSS and the web?
What bothers me is that I can't make rules with the same expressive power as a regex.
Also, CSS lacks properties for controlling wrapping limits and non-linear image scaling. And for some reason I always have to optimize for either width or height; I can't control both perfectly.
I don't understand the first point; could you clarify? Do you mean you wish to define your own CSS properties? You may find it exciting to learn that there's ongoing work to enable this functionality and more through Houdini [0]. You can check out a few examples in the houdini-samples [1] repo.
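For instance, the CSS Properties and Values API (one of the Houdini specs) lets you register a typed custom property from script; a sketch, with an invented property name (browser support varies, so check before relying on it):

    CSS.registerProperty({
      name: '--accent',
      syntax: '<color>',
      inherits: false,
      initialValue: 'rebeccapurple',
    });
    // '--accent' is now a real <color> to the engine, so it can be
    // animated and validated instead of being treated as an opaque string.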
I'm unclear on what you mean by wrapping limits and non-linear image scaling. Could you provide an example of what you'd like to achieve?
As for having to optimize for width or height: have you looked into display: grid? I believe it may help enable the kind of layout you're interested in achieving.
For those customers, I gave them an ‘application’ version that was just nw.js (basically chrome). Worked reasonably well for the customers that had somewhat recent OS on their desktops or terminal server.
Your customers sound like the smartest and most sensible people I've heard about in a while. The way the Web is currently changing makes it almost impossible to use in-house in a large corporate environment. But at the same time, it's almost impossible to avoid entirely; so they do the smart thing: put a hard stop on it and refuse to chase the moving target.
I don't disagree. I feel like I'm caught in the middle. I personally prefer the old internet where websites could have simple design and typography and still be perceived to have value.
This, to both. However, the problem with the web is that there are old implementations that must be maintained in browsers for backwards compatibility. The issue is that this raises the barrier to entry for web development, because it's much harder for a newcomer to even know which options to gravitate to.
Of course, there are books and guides to help people, but how would someone figure out which guides are worth it? There are a lot of highly rated books on the topic of web development and if you don't already know what you need, it can be daunting.
Well _it is_ crazy. We can't trace an alternate history and work with that. We work with what we have.
I think, here, we might be looking at it through the wrong lens. I'm unable to find the right words to say this; let me just say the statement feels ungrateful. The web is the largest and fastest-growing ecosystem of software we have right now (refer: community size, number of projects on GitHub in, say, JavaScript, CSS, and other web technologies).
You're comparing what is with what should have been. By that measure, any human activity will fall short of not only your expectations but anybody's.
You'd think that getting sane layout control would be easy, but apparently it's not. Getting a lot of humans to agree on a fast-growing technology is hard, it seems.
PS: I'm not saying "nothing could've been better, be happy with what you have", not at all. I'm just saying this seems like complaining and a better approach is to try and make it better
I needn't have written up a long tirade for such a simple statement. I see that this is the same sentiment that's espoused by several others in this thread, and thought I'd try and provide a different perspective to look at this with
Fair enough. I just get sad knowing what I'm missing. I've worked with a bunch of desktop GUI-builder IDEs (Visual Basic, .NET WinForms, WPF/XAML, and Qt) and I've seen the immense power they have in terms of developer productivity and application performance. Something like XAML is especially interesting because it brings the styling and responsiveness of HTML/CSS to the GUI-builder paradigm. I started off with them and then slowly transitioned to working entirely with web technologies (PHP, Rails, Angular, React, you name it). Not without reluctance, for sure! It's a Faustian bargain to me: trading off overall inferior technology and developer experience for the sheer reach and ease of deployment of the web. It's nuts to me to design a visual thing like a UI by writing lines of code. The GUI builders of yore really nailed this by allowing you to design something visual using a visual modality (drag-n-drop, realtime layout designers, etc.). I try to explain this to folks who've only ever developed for the web, and usually their eyes glaze over. They can't seem to (or have an incentive not to?) appreciate the impedance mismatches and the fundamental trades being made with web user interfaces.
Your perspective is very interesting to me. It makes me think that as software gets better/easier to write, as web development has become lately, people want to question why it is becoming better/easier. I think this self-reflection we have all been doing on the web is what is causing people to post so many threads and articles with this being the topic.
Sane layout control for apps was a solved problem 15 years ago (well, for some definition of sane). Look at toolkits like Swing, GTK2, Qt, heck even Cocoa has better layout control for apps than HTML.
Flexbox is essentially an import of those concepts to CSS. There are no new ideas there.
But now flip it around and try to make a beautiful, responsive document in Swing or GTK. The layout managers that make them so great for laying out UIs won't help you much there. They can do it, they have layout managers that operate somewhat like a CSS box flow, but it won't be as natural or as easy.
So it's worth considering if it's easier to evolve HTML towards sane layout management for app-like things, or GUI toolkits towards sane layout management for document-like things.
I used VB6 20 years ago and have recently (quite unfortunately) had to learn HTML/JS/CSS basics. The web is hot garbage for displaying form data compared to Microsoft tools circa 1996.
The big problem is they are trying to solve different problems.
Microsoft stuff was going for fixed screen size/resolution, fixed layout, and using a quite limited set of controls.
Web browsers try to be accommodating by default - any screen size (including mobile), zoom built in, and significantly more powerful control primitives that allow enormous flexibility in the way to design things.
If you're building forms applications that only need to work on a PC, the old way was certainly easier, and in fact, Microsoft has WebForms (regular ASP.NET - not MVC or API) that is pretty similar (and doesn't horribly break down so long as you color within the lines, so to speak).
Try to imagine your VB6 app being able to scale down to a window the size of a phone screen, and how the WYSIWYG editor for that would even work; I imagine it would be fair to describe it as "hot garbage" also.
Very few web apps actually use the same HTML for desktop and mobile. It's more common for WordPress templates and other document-like things, but the UI constraints on a phone are so different that it's better to create a dedicated UI for them. So I'm not sure judging VB6 by that metric is valuable.
It's not just mobile though - it's different dpi (4k screens are getting more popular), window sizes and zoom levels. Mobile is probably not a target for the forms apps the OP is talking about, but tablets may be, and different generations of various laptops and PCs are.
The web works across everything with little to no extra effort, whereas a native app built with a WYSIWYG UI builder is going to be constrained to certain hardware and take extra effort to handle display variations.
HiDPI was pioneered by Apple whose UI toolkits aren't particularly responsive at all.
You can certainly handle different window sizes with traditional UI layout managers. The only thing they don't do much of is totally changing the entire UI layout based on window size, and that's only because it's so rare to have a single app that's actually identical between tiny and huge screens.
WPF is much closer to web development than the drag and drop WYSIWYG UI development (VB6 / WinForms) the OP was taking about. I've never done UWP but it sounds nearly the same as WPF.
WPF/UWP have exactly the same drag-and-drop WYSIWYG support as Windows Forms, especially when using Blend, with actual components and a healthy market of companies selling them.
I was really surprised, when I went to build a GUI app, to find that everyone had abandoned the WYSIWYG model completely. You can't just drag your controls over, set their properties, then build the code to drive everything. You have to manually wrestle with containers and whatnot even for desktop things. I could potentially see it as an acceptable tradeoff for wide device compatibility (things with substantially different screen dimensions, etc.)... but I still have yet to figure out why the layout systems of the past couldn't simply be made a little bit smarter, deducing the constraints necessary to produce the same development experience as before.
I've come to believe that some number of developers actually like it the hard way. It's the only explanation that makes sense. We have gone so far backward in GUI development tools.
And what I don't understand is how the same people managed to get it completely wrong with WPF 10y later, with an extremely convoluted syntax, no autocomplete, poor tooling, etc
The problem is that if you present an average web user with the interface you can design (quickly and efficiently) in VB6, they'll spit in your face.
Much of the complexity of web design is not in the tools; it's in the fact that users don't expect a standard whatsoever, they just expect their UIs to be as slick and customly designed as magazines. If every website was written using the same standard, predefined set of widgets and components, the complexity would disappear.
This is pure illusion. Otherwise reddit, 4chan, HN, Google (until 2010), craigslist, and even Amazon would suffocate and go away. The fact is that what makes a web app / web site / whatever liked by its users is the content and the value; oftentimes a 2005 porn pop-under is better at that than today's chic, pedantically over-designed website with huge grey lettering, multi-MB graphics, and tonnes of wasted empty space. They are basically like coke: blunt, useless stuff with lots of sugar.
Those communities are all very niche, and in fact part of their brand and image is in their design. Even though they are less flashy, that is the point. Try to convince the owner of a clothing ecommerce site that their store should look like a 4chan bulletin board while trying to sell high priced garments to the public, or that the Coke website can't have a vibrant design in line with the rest of their branding.
So Google and Amazon are niche? And if the plainness of design is part of Reddit's identity, why do most subreddits use elaborate custom CSS? What I'm saying is different anyway: when you provide some real value, your design is irrelevant. Otherwise you are employing the put-more-sugar-in-it technique of marketing. Kudos if you make it work, but it's far-fetched to say that it's necessary.
They are wildly popular, but yes, they are in a niche. When they were not the Google we know today, there were not many providing the same service or value (and potentially still aren't), so that is why they could get away with bland design. It was never bad design, mind you.
Amazon is also king of providing value in their markets, and their markets are also apathetic toward flashy design. I don't need animations when I am provisioning an AWS instance nor when I am buying goods at the lowest possible price I can find.
However if I were not me, and I were shopping for luxury or boutique goods, a site that looked like Amazon would not instill me with confidence.
My point is just that the web has diverse design and UX needs, and the current toolset caters to that. If someone managed to build a platform with those benefits and more, plus the web's market penetration, then I would be on board.
I will argue still that it is necessary if you want an alternative to the web, as the alternative has to be a better value proposition for the end user not the developer.
I don't understand one point here: why wouldn't Coke be able to have vibrancy? Animated .GIF images have worked very reliably for web layouts since... forever, and I don't think anyone's going to start calling for everything online to be pastels or a fixed color scheme. I don't feel that Reddit is by any means 'niche', either. And of course a clothing store (which is an e-commerce product) should look different from an image board; again, no one in their right mind is going to demand otherwise. But they should be able to be defined by the same set of tools. (Personally, when I design webpages, even under WordPress, I still use HTML tables and the center tag. Heck, I've used the marquee tag in the last year.)
I am a huge proponent of keeping it simple for websites. If you can achieve the same branding with less tooling then that is ace, and it is what I try to do. Less complexity means it's more maintainable and normally quicker to build.
But it will need to be the same experience that the client asked for. Coke is never going to ask you for a react website with webpack tooling and a lambda backend, they are going to come to you with some grand vision of an application that their marketing team imagined in the shower months ago and has been workshopped into a mess. You may or may not be able to deliver that with simple HTML and CSS.
I am also keen for the web to move toward some kind of stability in technology as well; the churn and wheel-reinvention factory that we currently have is creating a bit of a mess, but I don't think it's worth throwing the web away just yet.
My response to "grand vision" projects is to say, "work up an actual spec of what's necessary, and I'll respond based on what's technically feasible." I find that when they finally get their heads out of their entrails, most needs are very simple. Animation can be done in .gif, static images can be supported by imagemaps and tables.
I use Amazon just because it has great service, but its UI is atrocious. Flipkart blows it away when it comes to UI: it is so easy to search based on several sub-parts, and its mobile UI is also very well done.
Indeed. The lack of easy theming (i.e. difficulty of producing a unique visual brand) is one reason why desktop toolkits lost out to the web, amongst many others.
Most enterprise web apps use Bootstrap, which is quite standard. But they do it on top of React/Angular/Grunt/webpack and a zillion npm packages to choose and keep updated. None of this was necessary to make VB6 applications.
This is true; the monstrosity I inherited at work is built on Bootstrap (2... and it was started after 3 came out..), but modern tooling has radically improved.
yarn/TypeScript and (though some days I hate it... it has gotten better) webpack largely make it feel sane(r).
That said, getting to a point where I was comfortable with all three was insanely more complex and time-consuming than picking up Delphi 6 was in the early 2000s.
Shrugs, the beast is what it is until someone does something better.
VB6's form designer (and UI framework that underlies it) has one crucial problem: it has basically zero understanding of flexible layouts. As a result, things break as soon as you try to make an easily resizable window, or font size or family changes (even if it's something as simple as accommodating high DPI), or you localize the dialog and some strings become longer.
This lack of support for anything other than hardcoded absolute layout is exactly what made it so simple and easy to use. It's the equivalent of doing document layout by padding with spaces - it works for simple cases, and it's very easy to teach people, but it's a mess for anything even remotely complicated.
I don't think anyone is advocating VB6 forms for today's tasks; it is a 30-year-old technology that hasn't been updated in 20 years. But its simplicity and effectiveness were remarkable and should be considered a benchmark when designing new UI tools and technologies.
> it has basically zero understanding of flexible layouts.
That's largely a non-issue to me. If I need anything fancy, I'll draw it myself. The simple stuff ought to be simple.
> As a result, things break as soon as you try to make an easily resizable window
Au contraire! It is much easier to make a resizable window when you are in full control of how nested widgets are resized along with it. That being said, some automation is fine (e.g., how MFC resizes views in response to their parent frame being resized) as long as simplicity isn't lost in the process (I'm looking at you, CSS).
CSS is incredibly simple if all you care about is absolute positioning.
It's just that nobody wants to make a Win32 style app with absolute positioning on the Web. That's because responsive apps are superior to nonresizable, manually positioned UIs.
Are they? Most 'web apps' I use have a preferred browser size; if you use them at a smaller size they still work (they are responsive) but are just unusable for anything sane. So much for superior. I made those layouts with Delphi in the early '90s too, and the same consistent behavior was true then as it is now: 99% of consumer users of software click maximize the first time they open anything, browser or not (to head off 'source?' questions: I have been writing consumer software for almost 30 years, and this is my experience plus the experience of peers I talk to). Sure, I use tiling window managers and like having different windows, but most people don't; hence the success of tablets, which are simple because they show one app, maximized, at a time. And those apps sure are responsive, but they don't need to be; they look the same on all tablets at the resolution they were designed for. Simply scaling them would have worked fine for most people and use cases. You would have to write things twice, once for small screens (phones) and once for big screens (desktops), but that's not really that uncommon now either.
Delphi was better in that regard, because you could anchor sides and corners of widgets to their containers. In many cases, it was sufficient to allow for a resizable layout.
But it doesn't solve the problem with high DPI, changing fonts, and localized strings being sometimes significantly longer, requiring widgets to be resized to accommodate them.
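For what it's worth, the anchoring model is small enough to sketch. Below is a rough TypeScript approximation of Delphi-style anchors in the browser; all the names and the ResizeObserver wiring are mine, not from Delphi or any framework. A widget records which container edges it is pinned to, and its box is recomputed whenever the container resizes.

```typescript
// Hypothetical sketch of Delphi-style anchors for absolutely
// positioned elements inside a container.
type Anchors = { left?: boolean; top?: boolean; right?: boolean; bottom?: boolean };

function anchor(el: HTMLElement, container: HTMLElement, a: Anchors): void {
  // Capture the widget's initial distance to the right/bottom edges.
  const right = container.clientWidth - (el.offsetLeft + el.offsetWidth);
  const bottom = container.clientHeight - (el.offsetTop + el.offsetHeight);

  new ResizeObserver(() => {
    const w = container.clientWidth;
    const h = container.clientHeight;
    if (a.left && a.right) {
      el.style.width = `${w - el.offsetLeft - right}px`;   // stretch horizontally
    } else if (a.right) {
      el.style.left = `${w - right - el.offsetWidth}px`;   // follow the right edge
    }
    if (a.top && a.bottom) {
      el.style.height = `${h - el.offsetTop - bottom}px`;  // stretch vertically
    } else if (a.bottom) {
      el.style.top = `${h - bottom - el.offsetHeight}px`;  // follow the bottom edge
    }
  }).observe(container);
}
```

Which also illustrates the caveat above: nothing here measures text, so a longer localized string or a bigger font still overflows its widget.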
Agreed. And yes, that needs some attention, but most people doing responsive web design do not account for most of that either. What does changing fonts mean? You design something for one font and then it gets changed afterwards, or?
When I pick some languages (I am not a native English speaker, and my native language, Dutch, is not very high on the priority list for most companies) on the sites of some of the biggest companies in the world, you notice the design just wasn't made for them: everything from text wrapping and enlarging in ways that break the design, to text simply sticking outside its box.
For some localizations (Chinese, for one) you will have to redesign anyway, because 'our' (not sure how to describe it) designs simply do not work/sell over there.
Most global companies have a local presence doing their local sites; I know some very big companies, even inside the EU, that have a site per country and make the HTML/CSS look 'the same-ish' to the user but completely different when you check the source, to accommodate local taste and language.
I like the dream of this working, as I am a programmer, but I don't see it in real life, and I find HTML/CSS just painful to work with; not difficult, but painful compared to most desktop GUI tech. Flexbox etc. is changing that a bit, but it still looks like people are shoehorning everything into this HTML5 stuff just because they desperately don't want to use/learn other things, instead of using the best tool for the job.
Disclaimer: I am old and have seen this before. I do create webapps and use React (the new license makes it workable outside hobby projects), but I will gripe about it like the author of the blog post.
> And yes, that needs some attention, but most people doing responsive web design do not account for most of that either. What does changing fonts mean? You design something for one font and then it gets changed afterwards, or?
Think about the user changing the default UI font. OS X and Windows both make it difficult to impossible, and for this exact reason. On Linux, though, it's common and expected (which is probably why all UI frameworks that target it do have some decent dynamic layout support).
But aside from font family, there's also the issue of font size. That one can be cranked up on high-DPI displays, or for accessibility purposes.
> I find HTML/CSS just painful to work with
Don't get me wrong, I'm certainly not praising HTML5 and CSS here. They're vastly overcomplicated for what they do, for app development. And layouts are a long solved problem in desktop UI frameworks - Qt, Tk, Swing, WPF are just a few examples. WPF in particular is a good example of an XML-based markup language specifically for UI, and it's light years ahead of HTML5 in terms of how easy it is to achieve common things, and how flexible things are overall.
If even half the time and energy invested into building "web apps" (including all the Electron-based stuff) went into an existing UI framework - let's say Qt and QML - we'd all be much better off; developers with far more convenient tools, and users with apps that look and feel native, work fast, and with smaller download sizes (because you aren't effectively shipping the whole damn browser with them).
> WPF in particular is a good example of an XML-based markup language specifically for UI, and it's light years ahead of HTML5 in terms of how easy it is to achieve common things, and how flexible things are overall.
This is why I had big hopes for XHTML and XML components, but then we got HTML5 instead: yet another pile of hacks.
I used to write Petzold-style Win32 apps. I've also written native Cocoa apps as recently as last month, and I've used Qt and GTK+. Having experience with all of these, my preference is still for Web apps, because of the ease of portability and the fact that TypeScript beats C++ for ergonomics, safety, and ecosystem (just having a package manager is huge, even if NPM leaves something to be desired).
I find it fun to write Cocoa apps too, and I do on occasion for throwaway stuff that only I am going to use. But too many people (including me, at home!) simply don't use Macs. When I have to write a portable app, the choices basically come down to GTK+ (doesn't look native anywhere but GNOME on Linux), Qt (requires C++ plus moc and doesn't always look native either, for example on GNOME), or writing everything from scratch for every platform. While the last choice may be the "right" one from a purist's point of view, the extreme amount of work necessary to make duplicate Windows/Mac/Linux (often plus Android and iOS) versions makes it all but out of reach for anyone but big companies.
When I started coding for Win16, my first option was Turbo Pascal with OWL, eventually I started to use Turbo C++ with OWL.
With the switch to Win32, the tools became VB, Delphi, Smalltalk and Visual C++ with MFC.
Like every Windows developer I also own the Petzold book, bought for Windows 3.0 development, and another good one from Sybex, probably the one book that ever explained how to properly use STRICT and the message crackers introduced with the Windows 3.1 SDK.
However, I might have written about five applications in the pure Win32 API instead of using one of the aforementioned languages/frameworks, as a requirement for university projects.
In general, I think many developers only have the bare-bones native experience, without making use of proper RAD tooling; or they know the UNIX way, which has always had pretty bad tooling for native GUIs compared to Mac and Windows, or even OS/2.
Again, "anything fancy" here includes something as simple as a localized dialog. In most commercial apps, this means pretty much everything would require "drawing it yourself".
At which point you can basically throw the designer away, since you'll be writing code to manage layout for all widgets anyway.
> Again, "anything fancy" here includes something as simple as a localized dialog. In most commercial apps, this means pretty much everything would require "drawing it yourself".
My day job is to implement a commercial ERP system that has never been and probably will never be localized.
All software I use on a daily basis is English-only, even when localized versions to my native language exist, because:
(0) The translations are absolutely horrible. Who in their right mind would think that they are actually “helpful”?
(1) Even if the translations weren't horrible, the extra complexity simply isn't worth it. (Admittedly, my tolerance for system complexity is rather low compared to most other users.)
So, from my point of view, when you talk about localization, you might as well introduce yourself as a visitor from a parallel universe (where localization is presumably useful).
Go download NetBeans and create a Swing UI in Matisse. You'll find these issues aren't a problem. You can drag and drop and end up with a flexible, responsive layout that can handle things like strings changing length due to localisation. You can do the same with Scene Builder for JavaFX, although it's not as slick as Matisse. Or even Glade, if you're more of a UNIX person. The latter two tools require you to understand box packing, but allow for a relatively responsive layout.
The thing they don't do is let you totally change the layout depending on window size. But that's a fairly easy trick to pull off by just swapping out between different UI designs at runtime. There are widgets that can do this for you.
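The runtime-swap trick translates almost verbatim to the browser. A minimal sketch, with the element ids invented for illustration, using matchMedia:

```typescript
// Swap between two prepared layouts at a breakpoint, rather than
// relying on one fluid layout. "#wide-layout" and "#narrow-layout"
// are hypothetical containers that both already exist in the DOM.
const wide = window.matchMedia("(min-width: 800px)");

function applyLayout(isWide: boolean): void {
  document.getElementById("wide-layout")!.hidden = !isWide;
  document.getElementById("narrow-layout")!.hidden = isWide;
}

applyLayout(wide.matches);
wide.addEventListener("change", (e) => applyLayout(e.matches));
```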
I know that full well. But one thing that you might note about these tools that you've listed, is that they're nowhere near as simple as the VB6 form designer, that was exalted in the comment that started this whole thread. They're more complicated, because they have to deal with dynamic layouts, and you are exposed to this overhead even in visual mode.
WinForms layout managers are a pain to work with in the designer, though. It wasn't written with them in mind - they only showed up in .NET 2.0 - and it shows. Dragging and dropping things often doesn't do what you want, and sometimes things just disappear and you have to dig them out of the control tree.
Data binding is better in that regard, but once you start doing complicated nested data bindings, it's rather tedious to do it in the designer (because you can't just bind to "A.B.C" - you have to set up a hierarchy of data sources).
Worse yet, you start hitting obscure bugs in the frameworks. Here's an example that I ran into in a real-world production WinForms app ages ago (side note: I wasn't an MSFT employee back then, so this was an external bug report): https://connect.microsoft.com/VisualStudio/feedback/details/...
Having said all that, the aforementioned app was written entirely in WinForms, using designer for all dialogs (of which it had several dozen - we used embedded controls heavily as well), with dynamic layouts and data binding throughout. And it did ship successfully. So it wasn't all that bad. Still, not the kind of experience I'd want to repeat, when I can have WPF and hand-written XAML.
>That's largely a non-issue to me. If I need anything fancy, I'll draw it myself. The simple stuff ought to be simple.
Exactly. At least 90% of the functionality of my forms-based applications uses nothing more than the standard UI components Tk provided in the early '90s. Why the web of 2017 still cannot grasp this is unfathomable. To be perfectly honest, I've never seen any toolkit match the productivity of Tcl's Tk from more than two decades ago, and it's even better today.
I've found that, once you build up the right set of components for yourself, you can easily get nice layouts that work on a variety of screens without much work. There are sometimes edge cases, but overall it works well as long as you design things with the tooling in mind.
Meanwhile, I've struggled to get things looking good with GTK+ or Tcl/Tk. Especially when the UI I'm trying to make is dynamic. The tooling has never seemed very conducive to "fit content"-style UIs.
> Especially when the UI I'm trying to make is dynamic.
That's where I still run into problems with CSS too. However, at some point, and not because I started using flexbox / grid, CSS did click for me and now it's mostly second nature to get the layout that I'm going for.
My feeling on this whole topic is that while as a web developer I have often thought "there must be a simpler way", every time I actually start to imagine what that would look like I end up re-imagining something similar to the web stack as it is now. There is a lot of inherent complexity to GUI-based networked client-server applications that need to be responsive, continuously integrated, database-backed, real-time, etc.
I rather think it's time to completely ignore sensationalist rants like this one.
First off, killing a technology does not solve anything. It just means fewer options.
So do propose your better solution (and build it) - then we can talk about killing the current thing.
But the way it is today, the web works.
Definitely not flawless, and in large parts really ugly (just browsing with dev tools open is horrifying, when you see all the errors and warnings thrown at you) - but it is big and redundant enough that you can mostly choose only the nice parts.
XMLHttpRequest is ugly? (I always thought so)
Well, there are WebSockets now.
JavaScript lacks type support etc.? Use TypeScript.
The whole DOM and *script languages in general are ugly?
Skip it all and use only WebGL and Wasm.
And your app will still run almost everywhere.
That's the power of the web - that's why it became so important.
It just works.
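To make the "choose the nice parts" point concrete, here is roughly what that looks like in practice: TypeScript for types, fetch instead of raw XMLHttpRequest, and a WebSocket for live updates. The Message shape and both endpoints are invented for illustration, not taken from any real service.

```typescript
interface Message {
  user: string;
  text: string;
}

// Plain request/response over HTTP (hypothetical endpoint).
async function loadHistory(): Promise<Message[]> {
  const res = await fetch("/api/messages");
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// Live updates over a WebSocket (hypothetical endpoint).
const socket = new WebSocket("wss://example.com/live");
socket.onmessage = (ev) => {
  const msg: Message = JSON.parse(ev.data);
  console.log(`${msg.user}: ${msg.text}`);
};
```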
And it is very easy to get started... so many people who do not have a CS background did. And obviously they made horrible things from an academic point of view. But things still kind of worked for them.
And security... well, so far I have not yet heard of a safe language/OS/platform where people can work productively without years of studying the theoretical background.
So in general, yes, I am very open to better-designed alternatives. In fact I have been looking for one since I started web development, but not so much for angry hyperbolic rants like this one. They are not helpful.
What's odd is that this article is so well written that it really clicks with bitter developers who are just confused by the vast sea of technologies alternative to the web. At least that's my guess as to why there are so many upvotes here.
I feel that anyone who rants like this comes from a low-level micromanagement world where they have extreme control over everything, and they can't have that with web technologies. Fine, but why shit on other developers' carpet?
I think the web platform is one of the best innovations the software industry has experienced since the 80s. Can it be better? Of course, yes, definitely, and it's getting better.
I have to agree that posting bitter, "everything is wrong" type of articles is really not constructive.
And for god's sake, learn to use your effin' tools.
>What's odd is that this article is so well written that it really clicks with bitter developers who are just confused by the vast sea of technologies alternative to the web
Hearn has a penchant for penning very well-written articles complaining about systemic issues rather than directly solving them.
He was the kernel of the block-size split that occurred in Bitcoin starting in 2015, the vitriol of which still bears fruit today. Rather than face up to the fact that his fix proposals were either technically and/or politically dead on arrival, he claimed a vast conspiracy.
There are some valid claims in this post but again it’s too heavy on complaining and light on the solutions.
Firstly, I did try to solve the block size problem. Gavin and I did Bitcoin XT. It resulted in large DDoS attacks that took out entire regional areas because they contained a single XT node, any mention of XT being banned from the Bitcoin forums, large companies like Coinbase being banned too for simply experimenting with it, and so on. Miners also refused to run it because they were told that this would be democracy, and democracy was dangerous (they were almost all in China, so no surprises there). There was a large, organised and extremely hostile effort to ensure that the solutions we proposed could not be adopted even by those who wanted them. I did a lot more than just write articles.
As for this article, it says there's a second part coming where I propose concrete solutions.
I kind of feel like the web and all that's gone into it is similar to all the work that went into the JVM to make it what it is. It's kind of ridiculous: the sheer amount of intelligence and hard work needed to make the platform as efficient as it is is astounding. Maybe constraints breed the creativity to do something with the garbage you have. So I honestly think it's better than it's ever been, but at the same time I understand the author's gripe of "this should have been designed with X in mind first and foremost".
" but at the same time I understand the author's gripes of "This should have been designed with X in mind first and foremost""
Sure thing. If you know exactly what you need and design for that, then the outcome is always better than just randomly adding things here and there.
So yes, a newly designed web would be something awesome.
The thing is just, that it is somewhat complicated to design such a thing in the first place. And then build it.
And while you build it, you discover many new things that also absolutely have to go into the design, so you redesign, ...
So in other words, I am curious about his thoughts on a new design, and I would welcome it if it leads to anything. I am just very sceptical that there is something really awesome and concrete behind it. Vague ideas about how to improve things have already been floating around since the beginning.
> The whole DOM and *script languages in general are ugly? Skip it all and use only WebGL and Wasm.
Congratulations, now your app is inaccessible to screenreaders and doesn’t lay out properly on half the devices of the internet.
On every platform you have to use the native controls to build a good experience, but the web's problem is that its native controls are the worst of every app UI platform ever made, so we get layer after layer of framework crud trying to hide the fact that HTML and CSS are almost completely unsuited to building app UI. I wish they had been a little more broken, because then someone would have had the sense to replace them.
"Congratulations, now your app is inaccessible to screenreaders and doesn’t lay out properly on half the devices of the internet."
For most cases, it is quite easy to provide a simple HTML fallback for that. But yes, it is an issue.
But if you do program it right, then it lays out exactly as you intended on EVERY device (as long as it supports WebGL).
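The fallback itself can be as small as a feature check. A hedged sketch, where the element id and the startGlApp entry point are made up for illustration:

```typescript
// Feature-detect WebGL; if it's missing, reveal a plain HTML view.
function hasWebGL(): boolean {
  return document.createElement("canvas").getContext("webgl") !== null;
}

function startGlApp(): void {
  // Hypothetical entry point for the WebGL renderer.
}

if (hasWebGL()) {
  startGlApp();
} else {
  document.getElementById("html-fallback")!.hidden = false;
}
```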
Does it really? What would you hold up as an example of a powerful web app with significant usage?
All the major players these days (like Facebook) get the overwhelming majority of their traffic through their native mobile app, not through their web app. The web's primary stronghold is in publishing platforms (news, articles, etc...), which are not really what you'd call "apps" and indeed they are the one thing the web was actually built for and good at - static(-ish) content.
> What would you hold up as an example of a powerful web app with significant usage?
The web as a whole is one gigantic web app that works wonderfully. I am using the web all day and for pretty much everything: HN and the linked articles for news; Trello, GitHub, Upwork for collaboration; FB and Meetup to find and save events I want to attend; Booking.com to find and book hotels; my local railway site for train travel; my local cab site for cab rides; my local public transport site for bus/subway rides; Google Flights for flight planning... I could continue forever. Whenever I need something, I find it on Google and just use it.
"All the major players these days (like Facebook) get the overwhelming majority of their traffic through their native mobile app"
Source?
And I do not think there is a sharp line between web app and website.
You could say we are using one right now (even though a very primitive one) to have this discussion - more than "static(-ish) content".
Also, most other parts of the web are not really static anymore - highly dynamic, live interaction across the whole world! That's pretty awesome if you think about it. So the web works, and the web is faaaar away from static documents linking to each other, which was its origin.
And as for real webapps - well, I believe the big ones are about to come. The underlying technology to really make them useful is just about to become stable.
Over 50% of Facebook users only access it from their mobile devices.
> And I do not think there is a sharp line between web app and website. You could say we are using one right now (even though a very primitive one) to have this discussion - more than "static(-ish) content"
You could make that claim but it'd be a tough sell. Sites like this are clearly not what anyone talks about when they say "web app" or extol the virtues of HTML5. This site is static(-ish) content with hyperlinks to go to new static pages. It's not dynamic or highly interactive. Which is part of what makes it great, don't get me wrong. But it's clearly not a candidate for a web app, it's not clamoring for webasm, webgl, or any of those shiny new toys, etc...
> the web is faaaar away from static documents linking to each other, which was its origin.
How do you figure? Most people google something (either via a dumb web form or more commonly via a native app), then click a link to a static page.
Or this site, which is basically just an index page of yesteryear, and is literally just a bunch of static links to other usually static content.
> And as for real webapps - well, I believe the big ones are about to come. The underlying technology to really make them useful is just about to become stable.
I disagree since the underlying technology really truly doesn't exist. The fundamental basics to making a responsive app still don't exist at all (no cheap concurrency, for example, to say nothing of cheap parallelism).
Maybe it is hidden somewhere, but I only read "mobile users only", not "native app only". So users who access it by browser would be in that category as well.
Maybe? Hard to tell since Google Docs on Android at least is a native app not a wrapper around the website and I'd assume that's true on iOS as well.
Similarly Microsoft Office on mobile is a native app, and I can't seem to find good stats for how Office on Windows/Mac compares to the Docs suite on desktop but my guess is that it's not a pretty comparison for Google Docs...
It's probably not realistic, but I would love to see the web be completely thrown out and replaced with something reasonable.
I write a decent amount of native code. I write Rust, C, and x64 assembly. I think I'm pretty good at this stuff. But the web is too much for me. Any time I think I'd like to do something with the web and sit down to learn, it's completely overwhelming. I've never been able to put together a coherent mental model of the architecture of a web application or figure out what the best practices are for web development. The amount of complexity you have to wade through to get anything done is just silly.
There's the idea floating around that web developers are less competent than programmers on other platforms. I'm only half serious when I say this, but I sometimes wonder if, to the extent that this is true, it's because web developers have so much incidental complexity to deal with that there's just not much brainspace left over for classical CS or software development concepts.
The thing is, though, the web is reasonable - for its intended purpose, which is displaying and interlinking documents. And despite the complaints, faults and shortcomings, it's even remarkably not as bad as it could be at being an application platform.
Throwing that out because it's less than optimal at a use case it was never intended to serve would be monumentally short-sighted. Just build something else and leave the web be.
It's truly mind-blowing how much energy has been wasted on trying to shoehorn the web into an app delivery platform over the last decade. To what end? To make the browser a general purpose platform? We have that already, it's called an "operating system".
Edit: that said, I disagree with many points and the general negativity in TFA
It's a great example of worse is better in action: a technically inferior platform winning out because it's better at one or two things that enable virality, which is the only thing that matters when all the money is looking for high growth.
In this case, it's that webapps require zero effort and time from the user to get started with, and allow developers to get the closest to the "write once, run anywhere" dream than anything else (if you're doing a decent responsive design, you can even get a good experience on both desktops and phones with much less effort and no gatekeeping), so the development effort is a lot lower.
These two attributes make it really hard for a native app to compete on growth terms with a webapp, since it has a higher hurdle for users, higher initial development costs to target the same amount of users, and higher iteration costs to ship (and get users to install) a new version. It doesn't matter that it's hilariously inefficient; as long as it's just below the threshold where the user tears their hair out, they're not going to jump ship.
Also, the web is open. I don't need Tim Cook's permission to run something. And despite this lack of a walled garden, I'm much less likely to get something bad from the web than from an app, because the web has a much better sandbox than apps ever will.
Because your data is being held hostage by the service provider. No more grandmas showing photo albums to their grandchildren when Facebook is long gone 30 years from now :'(
But with Dropbox and/or alternatives you can always access the raw files via a file browser. With FB, OTOH, there was a story some time ago where you could only download photos at downscaled resolutions, or where downloading native-resolution files was hidden in obscure option menus.
Are you comfortable running arbitrary binaries built by arbitrary people? If not, then I fail to see how an operating system is a sensible general purpose computing platform.
It worked OK when we just wanted to run software written by a handful of trusted parties... Microsoft, Adobe, id Software. But as soon as there were 1000s of companies writing software that we wanted to try, running binaries ceased to be a good idea. I don't really trust any binary software on my machine that isn't written by Apple. But I will open basically anything in a web browser because I don't have to trust it.
Even now, with all the sandboxing, Microsoft and Apple still have to manually review software in their stores. And truthfully, app stores are basically a naive Web of Trust system. It's not safe at all. Applications constantly open up holes and then say "oops! Security bug!" and what... you're owned now? But Apple doesn't believe it was a maliciously placed hole, so it's all good? Hell of a security model!
Web apps are hardly a solution to this problem, those being always connected. If regular apps need permissions (e.g. for opening photos on your device) then so do web apps. Are you comfortable with web sites sending your location home, or recording audio/video? If anything, web apps are too sandboxed to do anything useful.
The web isn't a solution to this problem. Drive by attacks are real and easy to place via malicious ads.
The only real "fix" to this situation would be to make software vendors actually liable for the correct functionality of their product. Imagine no more warranty disclaimers or other bullshit in the licenses for "final" products (compiled binaries, executable JS in websites...). All other engineering professions are legally held to their respective standards. It's time we start raising that bar for software as well.
I'd like to see some software company execs soil their pants because they know that their products are lousy crap.
It's the largest malware vector because it's the largest software vector. It still remains true that running a random website is a million miles away from running a random exe.
I still haven't found a reliable way to save webpages. Firefox and wget won't download stylesheets or script tags generated by javascript. The only working approach is to print webpages as PDFs. It's a pretty terrible platform for documents.
That's a good point. How do you create a "local working archive" of a website? It's a really good idea. Want to get started on working on some project like this?
The Scrapbook extension for Firefox (one of the winners of either the first or second extension contest, IIRC) used to do this in a fantastic way.
You could download:
- a page on its own
- a page and all pages it links to, recursively, up to 3 levels deep
- optionally filtered by domain, or by path within a domain
- optionally including javascript (IIRC)
Sadly this is now broken in the new extension model and fixing it doesn't seem to be a priority.
Firefox is still my favourite browser by far but my enthusiasm isn't as strong as it used to be.
On the bright side, even if it doesn't seem to be a priority, work seems to be progressing on bringing the new extension APIs to a point where several of the old extensions can be recreated.
"Any time I think I'd like to do something with the web and sit down to learn, it's completely overwhelming. I've never been able to put together a coherent mental model of the architecture of a web application or figure out what the best practices are for web development."
If it makes you feel any better, that's because there isn't a coherent mental model. If you've ever heard of the ORM/Relational impedance mismatch, it's got nothing on the set of impedance mismatches between the way servers like to work, the HTTP protocol (and its still very page-based orientation in a world of streams), the browser's DOM model, and how Javascript works, especially if you want to get excellent performance out of it.
It is my opinion that this is why you see so much churn in the web world; the continuous iterations on client-side frameworks, server-side frameworks, this Javascript DOM library, that Javascript DOM library, now an integrated framework, now recommending assembling your own from bits and pieces... it's all a reflection of the fact that none of these pieces particularly work all that well together in the way we'd really like them to. There's a ton of possibilities, all of them frankly pretty bad in most ways but good for this one use case, but a different use case for each tech, and that's a recipe for a lot of churn.
My recommendation to anyone getting into this world is A: learn the basics of HTTP B: learn the basics of HTML C: clock some time with Javascript's basic DOM interface and maybe jQuery and then D: relax about the whole thing, unless you really think you're going to build an app that scales up to the tens of thousands of simultaneous users. The truth is that when it comes down to it there are still plenty of applications you can successfully build and deploy using completely 2005 technologies... and the dirty secret truth is that you may well beat someone to market who is over-invested in staying Up To Date and constantly throwing away all their skills.
(You will not beat to market someone who is judiciously staying up to date, and carefully picking and choosing what modern tech to learn and deploy. But you still probably won't be that far behind them, either. And that is not the person who is actually freaking everyone out about the web; it's the guy vigorously selling Vue.js or whatever modern thing as the hot new thing and that all previous JS libraries are now trash that should be used by nobody, when six months ago they were saying the same thing about something else.)
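For the record, the "completely 2005" baseline from step C really is only a few lines. A minimal sketch, with the form id and endpoint invented for illustration, of plain DOM plus a background POST, no framework or build step:

```typescript
// Submit a form in the background and update the page in place.
const form = document.querySelector<HTMLFormElement>("#comment-form")!;
const list = document.querySelector<HTMLUListElement>("#comments")!;

form.addEventListener("submit", async (ev) => {
  ev.preventDefault();
  const body = new FormData(form);
  await fetch("/comments", { method: "POST", body }); // hypothetical endpoint
  const li = document.createElement("li");
  li.textContent = String(body.get("text") ?? "");
  list.appendChild(li);
  form.reset();
});
```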
>If you've ever heard of the ORM/Relational impedance mismatch, it's got nothing on the set of impedance mismatches between the way servers like to work, the HTTP protocol (and its still very page-based orientation in a world of streams), the browser's DOM model, and how Javascript works, especially if you want to get excellent performance out of it.
Mapping objects to tables is a solved problem. The only noteworthy challenge is subtyping, but it has easy solutions. I'm surprised how well it works, even if you have an old crusty database with an archaic table structure. Compared to serialising objects to JSON or other formats that have no concept of identity, it's downright trivial. Mapping objects to tables is the primary thing an ORM really does. Usually it also implements laziness for correctness, so that your code, while inefficient, still works as intended.
What an ORM however does not do is write queries for you. Databases are remote devices, you can't just treat remote objects as if they were local (like CORBA did) if you care about performance.
Remember: you still have to write your queries, but usually the ORM helps you write them by providing a query builder or its own query language. The point of the ORM is that you don't have to manually marshal rows into objects; it's not a tool to avoid queries. It's right there in the name: Object-Relational Mapping. It does not say AutomaticQueryGenerator or anything else query-related.
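A toy sketch of the point in TypeScript, where queryOne is a stand-in for whatever your SQL driver provides and the schema is invented: the mapper marshals rows into objects and can defer related fetches, but the SQL is still yours.

```typescript
interface UserRow {
  id: number;
  name: string;
  manager_id: number | null;
}

// Stand-in for a real SQL driver call (hypothetical signature).
async function queryOne<T>(sql: string, params: unknown[]): Promise<T> {
  throw new Error("wire up a real database driver here");
}

class User {
  constructor(
    public id: number,
    public name: string,
    private managerId: number | null,
  ) {}

  // Laziness for correctness: the related row is fetched on first use.
  async manager(): Promise<User | null> {
    if (this.managerId === null) return null;
    const row = await queryOne<UserRow>(
      "SELECT id, name, manager_id FROM users WHERE id = ?",
      [this.managerId],
    );
    return new User(row.id, row.name, row.manager_id);
  }
}
```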
Sure, but that's just the standard solution that hides the mismatch. The mismatch is still there. Which is why when performance becomes an issue, there's a whole bag of tricks for tuning the ORM or just sidestepping it and writing a sane query. It's also part of why the SQL database went from the only thing people could conceive of using to one tool among many for data persistence.
As 1970s paradigms go, SQL has had a good run. But the main problem it solves, easily finding and changing your data somewhere on a small number of spinning metal disks, is just not the central problem of computing that it was for a few decades.
A lot of the churn is because the web is so young as an application platform. It's been less than 20 years since Gmail, which was probably the first thing that even approximated an application on the web. Chrome was released in 2008, less than 10 years ago, and it was the first time the web had a runtime engine performant enough to even build an app.
There were plenty of webapps before Gmail. There were even plenty of other webmail services before Gmail, and Gmail wasn't much different from the status quo. I've been using webmail since 1997, and even my university had a web interface (SquirrelMail) for those who preferred using the web interface (almost everyone). This was pre-Gmail.
Really? My recollection is that Gmail was substantially different in the extent to which it was an in-browser Javascript app built around server-side data requests. That's in contrast to something where applications were a series of mostly-static pages.
My recollection was that Outlook Web Access was the first. IIRC, it used what would become XMLHttpRequest while it was still internal to Microsoft, so it's the first of what we might consider modern web applications. GP's 1997 seems right on, though, since that was the first release.
Microsoft invented AJAX specifically for Outlook Web Access so yes that is in fact the first "modern" web application.
As I recall there was a fight about getting it in and the creators called it XMLHttpRequest because XML was the hotness at the time and that got it past the project managers and PHBs.
Sure webmail had been out for a decade before GMail. But I believe the point was that Gmail was special in that it was a pioneering user of AJAX-style interactivity.
Pre-AJAX, web-based services were just a series of forms, and the interactivity patterns harkened back to "smart terminal" [1] form-based use for mainframes in the 1970s. I believe the user you were responding to uses "application" to mean something approximating the desktop application experience of the 80s or the current mobile app experience.
So the distinction they're drawing is about the kind of interactivity. Squirrelmail, et al, were a series of forms and pages. GMail didn't do a new page or frame load every time you looked at a new message. UI rendering and interactivity became client-only activities, with the server providing an API.
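For readers who weren't there: the whole AJAX trick was just this pattern. A sketch using the raw XMLHttpRequest API, with the URL and element id made up for illustration:

```typescript
// Fetch data in the background and patch the page, with no page
// or frame reload - the Gmail-era interaction model.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/inbox/message/42"); // hypothetical message endpoint
xhr.onreadystatechange = () => {
  if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
    document.getElementById("message-pane")!.innerHTML = xhr.responseText;
  }
};
xhr.send();
```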
GMail was different because it offered an outlandish 1GB of mail space, when everywhere else you got 1-10MB mailboxes. Since it was launched on April 1st, most people did not believe it at first. And (at the time) huge space allowed users to keep and search all old email, instead of deleting old emails being a weekly chore.
Everybody here is too young to remember the era of the Microsoft monopoly and how much it sucked. The Web app era is much nicer. Besides Web Apps aren't bad compared to the 90s win32 api which was closed source and riddled with bugs and undocumented behavior.
I remember how impressed I was when Google Maps was released. Instead of having to click on arrows to reload the page with a new map, you could just drag and it would automatically load new map tiles!
Google Earth (the native application) is really old and abandoned by now, but it's still better than the new "web" version that only works in Chrome.
Gmail was the first massively used single-page web application. Somewhat ironically, it was built on top of technology meant for Outlook Web Access (i.e. XMLHttpRequest, as an ActiveX component in IE and as a DOM extension in Gecko). AJAX is the name that was coined to describe that approach to webapps, with the somewhat funny fact that Google at the time described it as JavaScript (from time to time with the addendum: "done right").
Funny thing about AJAX: the acronym means nothing. Asynchronous JavaScript? Well, JavaScript was always asynchronous. And XML? What does it have to do with XML?
Any chance most of the software you're writing is headless?
My experience has been the opposite. I first learned C in college, and loved making command line interfaces but never understood how to make GUI applications. When I was introduced to browser hosted front-ends, laying out interfaces for GUI apps seemed a lot simpler and made a lot more sense. Python was the first language where I was able to figure out how to write a GUI for an application running fully contained on a device. It might just be me, but it doesn't seem like a lot of programming language training paths emphasize the human user interface (command line, GUI, or otherwise). The web platform definitely does.
So weird. I also learned C in college, but Python I love and use exclusively from the command line; I'd choose to write GUI code in almost anything else.
But your central argument is right: UI stuff is ignored in non-web languages in favor of "core" concepts, and that in turn probably does lead to insecurity.
The reality is, it's historically been hard enough to find decent coders without worrying about myriad security concerns too.
Probably you spent several years doing low-level programming and you're fluid and productive in that world. On the other hand, I spent my last 10 years dealing with websites, and I feel fluent and productive with the web, while I feel frustrated and overwhelmed by low-level programming. The web is 20 years of quirks. Low-level stuff is 40 years of quirks. If you want, send me a message and maybe I can help you with learning web and you can help me with low-level stuff :)
For me (not your parent commenter) the problem is not that I can't understand things or need help to grasp something new. I have two decades of desktop, low-level, server, and business programming in a bunch of languages and frameworks behind me (didn't touch games and 3D, though). The problem is that every time I start to read yet another HTML/CSS/JS tutorial or advanced guide, I get almost physically sick of it. It is like learning [al]chemistry before the analytical method appeared. You're presented with fragmented facts, none of them covering the entire picture, none of them showing any design thought. The first few times I assumed it was just a bad tutorial, but with time I realized that it's the nature of the web. You can't make a right guess there. You can't metaprogram it, because there is no common basis between all the "technologies". You can add a new unnecessary flavor, though; millions of failed frameworks are the supporting evidence for that.

It is so detached from programmers' reality that it even gives "powerful" names to reinvented things: services, routing, reactivity, to name a few. These are simply modules, callbacks, and two-way bindings: insignificant nomenclature under a programmer's feet. Most web devs don't even know what real reactivity is, or that circular, heavily threaded formula references were a regular evaluation model in a 1984 SuperCalc running in 96KB of RAM. Do you know why that isn't widely used in today's programming? Because you normally have only one place to set your data and only one way to propagate it. You DO NOT need reactivity in a sane design. You have to be aware of your data flow and be able to analyze it.

Native programming overwhelms you because it is saturated with disciplines you were never convinced to follow (or allowed to break for local benefit), not because it is twice as old. Just pick up a few classic programming books as an introduction.
The web is long done; you'll never see it being any better than now, or yesterday, or a year ago. The article may be ranty, but it is right that the web is still reinventing the '90s with 100x the processing power at hand. It simply goes nowhere. I don't hope, I know it will be dead some day, because the bubble is becoming too heavy not to pop itself.
That's because the web is full of hype. Try Python Flask. A three-line Python function and you are going. No magic, just request and response. It is easy. Bang out a model class and read the SQLAlchemy tutorial: the web and an RDBMS with just enough magic. Screw HTML front ends. Write the ugliest HTML you want; never spend time on HTML. Make your app / idea work. Get your data right. Front ends and modern front-end tech stacks are an unholy time suck that offer little for hobby or small apps.
Maybe what the parent meant to suggest is to hold off on a "proper frontend" and keep ugly, quick-and-dirty minimal HTML until the whole software/app/system/server/set-of-services/whatever is "done" and just works. Because once you distract yourself with making it pretty, it's a rabbit hole of ever-new, ever-different, ever-more-promising paradigms to switch to and fro. But I'm not sure if that was his idea here.
Yes. Write simple but correct HTML. Don't make it look nice; make the UX good. Don't spend time on crazy JS frameworks. Simple forms. Use properties for screen readers and usability if you must. When you are learning, avoid getting sucked into JS and CSS holes.
Yeah but users don't read the HTML, the browser does, and it doesn't care that you're using ten <br> tags for vertical spacing instead of an elegant CSS styling property.
Edit: just want to make it clear that I'm trying to paraphrase here; I don't know if I agree (although I do want to point out that HN is built in this philosophy)
I’d argue that in this era of mobile devices, hard-set line breaks like <br/> are a bad idea even in the immediate term. Regrettably, the front end isn't simple any more, and while the tooling doesn't help I think it's simply not an easy problem to fix any more.
When you are learning, spending time on the front end is a terrible time investment. Your first web apps are usually for yourself. This is exactly why the grandparent gets frustrated. Make an ugly app that works first.
Writing correct HTML helps people using screen readers because they can navigate a page using the descriptive HTML elements. In fact, writing correct HTML is probably the simplest and easiest thing to do when it comes to creating a web page. It's CSS that's needlessly complicated and unpredictable.
If you want to write web apps, save the HTML for last. When you are writing apps for yourself, you write just what you need. That is what I am saying: functional HTML first. It's not hard to write plain-looking but very good UX. I think we are agreeing. I mean write stuff that works but is not fancy, and avoid the big JS frameworks.
Yes, I agree. Sometimes (maybe often?) plain HTML and CSS with a little bit of Javascript (and server-side logic) works perfectly fine for many "web apps". It can even be faster (and simpler) than downloading all the app logic and a fat Javascript library to the client.
What did you try to do to learn web development? I'm sure there was something you could do to change your approach, if you're really capable of learning how to build complicated, networked GUI applications in Rust and C, you must be able to grok the basics of web architecture.
Honestly these comments that we should throw away the web are just as ignorant as people who think we should just throw away all of our C code.
I consider myself pretty good, too, but I had to take a community college class to force myself to spend the time to learn website dev. It's an abortion. HTML/DOM/CSS + JavaScript? Back end / front end? You simply must use a framework to do anything real. In class, we developed without frameworks, to see how the guts work. Like real guts, web guts are ugly (although perhaps utilitarian) and not especially streamlined for their job; but given that they evolved from simpler times, maybe pasting over the cruft with more cruft is our only salvation today.
I normally do embedded, and, to me, the web browser is just an app. Anybody could write an app that accepts text as input and outputs formatted text, graphics, and pictures. It certainly could be optimized for within-the-app apps. I'm not sure why this hasn't been done yet. JSLinux is a proof of concept: https://bellard.org/jslinux/
I felt the same way yesterday about writing a little app to position X windows in preconfigured positions... in Crystal using C bindings. I would have to learn C, make, Xlib, and a whole bunch of other things just for a simple, non-buggy executable. I just gave up and decided to keep using Crystal and make external POSIX calls (i.e. shelling out) to the compiled wmctrl executable.
You need to learn CSS the way you would approach learning another programming language: from scratch, without assumptions and the shortcut solutions you get from Googling. Understand stuff like block, inline-block, and positioning, and go from there. Take the equivalent of a month out and spend the time learning it.
JavaScript, too, can be an extremely confusing language for those who come from typed systems.
Then comes the DOM.
Then there is the communication system - AJAX, websockets etc.
Finally, there is understanding the browser dev tools. As the tooling has accumulated, understanding how to use it and internalizing everything will take at least 2-3 days.
If you don't take the time to learn these four systems independently, then when they're all mashed up, as in a web application, you will struggle.
I don't think the web is complex, at least not in the sense in which I understand the word "complexity". The web is a huge pile of semi-specified standards. There are a bunch of written standards, like XML, HTML, DOM, JavaScript, CSS (each with multiple versions). There is a lot of tribal knowledge that you'll get only with experience, like the things that don't work in IE 6 (luckily IE 6 is not very relevant today). There is some security-related stuff, things that you should remember if you don't want to make a vulnerable website. But it's not complexity; it's just a lot of things.
Then there are the bad tools. JavaScript is a bad language and its ecosystem is mostly terrible. If you want to create a React app with hot reload, using ES6 and the other things a modern developer expects to have, you'll end up with tons of configuration glue, some experimental hacks, and many tools duct-taped together. Or you can download a "starter" template where those hacks are already glued together for you; good luck adding, changing, or fixing anything there. In contrast, I can write a very simple Maven config and it'll support almost everything I would ever need without any further configuration. It's difficult to navigate among those tools, but this difficulty will be solved with better tools and better documentation. I can't wait until I can throw away webpack and just write ES7 with imports and the browser will understand it. I shouldn't need build tools for the web.
That is complexity. It's layers upon layers upon layers of dynamic runtime interactions. Try working on maintaining/extending a web application that's been developed over the course of many years, during that course of time using all of those technologies in many different versions, with the many different styles of architecture and best practices that were 'the right way' to do it at the various points in time when those parts of the system were added. You'll find all kinds of complexity in there, even without looking at the server-side.
Then you mention the modern ecosystem of javascript tools with all of its glued-together hacks, dependency nightmares, grey boxes of 3rd party code that you can look at but don't have the time to understand fully (all of which use different styles/techniques). That is complexity.
The alternative, as described by the article, is traditional development where you have a language + IDE + GUI design tool that compiles applications into a single file that, if it compiles, just works (except for any bugs). People who haven't developed in Delphi or a similar environment have no idea how much less complexity there could be.
You say that the web needs to be completely thrown out, yet you decided that your foray into web development should be building a thick JavaScript web application? Not some simple Flask endpoint?
Because I assume you could figure out a basic request/response server, and your issue is that you dove into something like Webpack + React + Flux + Qux + Fux + Foo.
It's meant to be that way. What better way to get rid of annoying independent programmers like you (and me) than by setting things up in such a way that you need a large company to make any headway at all? It's just another form of lock-in, and a way to deepen the moat against upstarts, who have historically been the most dangerous for established players.
I think the total need for developers in the world increased so rapidly that we couldn't find enough people with the experience and teaching ability to educate new developers properly. If what Robert C. Martin claims is true, the number of developers in the world doubles every 5 years.
So, I really think there is a competency issue here. However, I don't believe it is restricted to the web; it is true for every platform. It is just that learning web technologies seems to be a better choice if you are new to programming, as one can use those skills on almost every platform. (Yeah, you can use almost every language to develop for multiple platforms, but come on, even Microsoft is using web tech for VS Code.)
It doesn't have to be complicated. A single-page app implemented with React, calling some REST API to retrieve and store its data, is basically all you need.
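That shape fits in a dozen lines. A minimal sketch, where the /api/todos endpoint and the Todo type are invented for illustration:

```tsx
import { useEffect, useState } from "react";

type Todo = { id: number; title: string };

// One component, one REST call: fetch on mount, render from state.
export function TodoList() {
  const [todos, setTodos] = useState<Todo[]>([]);
  useEffect(() => {
    fetch("/api/todos")
      .then((res) => res.json())
      .then(setTodos);
  }, []);
  return (
    <ul>
      {todos.map((t) => (
        <li key={t.id}>{t.title}</li>
      ))}
    </ul>
  );
}
```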
My two cents as somebody who's made the shift from C++ & Java to web dev over the past year or so:
The complexity really depends on where you start. Part of what muddies this with web dev is how many resources there are. If you look up "what front end developers need to know in 20XX", you get a dizzying amount of results. Learn React, Redux, SASS/LESS, Angular, Express, Webpack, Docker, etc. There are lists upon lists and tutorials on tutorials.
A while back, before Google made it a standard feature, I did a project that took a Google maps route and found gas stations along said route, giving you back a list of stations with prices, etc. You could select a station, and it would update your route automatically. The whole thing was vanilla JS and Node (ok, I used jQuery for AJAX requests). No frameworks, no build tools, just plain old Javascript.
As I got deeper into Node, though, I found myself taking advantage of frameworks and packages naturally because they solved a problem I'd previously encountered. React makes things like dynamic lists of gas stations much easier to organize and keep consistent. Preprocessors take a lot of tedium and guesswork out of CSS. All of these things are an important part of being a "Front End Developer" because they make development easier to maintain, structure, and build upon. They don't change the fundamentals of what you're doing.
This isn't really any different from being a native developer. A few years back, when I was trying to make a super basic C++ GUI application, I kept bouncing between GDI+, GDI, Direct2D, Direct3D, SDL, OpenGL, etc. I was too focused on trying to find the tool that would conform to my expectations of how the app "should" be developed, and I gave up. I didn't have a good sense of what problems those things solved, so of course I had no idea why I would use one over the other or which stack was best for my use case. A little while ago I took a stab at graphics programming at a much lower level, spent some time with DirectX and OpenGL, and I would approach my C++ app idea much differently now because of that knowledge.
I think anybody who gets into web dev by trying to learn frameworks is going to have a daunting time. Try making your app with vanilla HTML, JS, CSS, and a simple Node server (don't even bother with express, just use Request and localhost). Look at what was tedious or difficult about it, then go find a framework that fixes that thing. All these tools build on each other incrementally like that. Don't start with React, make it all in HTML, then make the incremental transition to Handlebars, then make the transition to React (for example). The vanilla stuff won't make you a front end developer, just like how me writing something in OpenGL doesn't make me a "graphics programmer", but it will give you the foundation required.
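To gesture at how small that no-framework starting point is: a bare Node server of the kind described, sketched in TypeScript, with the route and payload invented for illustration.

```typescript
import { createServer } from "node:http";

// Serve one page and one JSON endpoint on localhost - nothing else.
createServer((req, res) => {
  if (req.url === "/api/stations") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify([{ name: "Example Fuel", price: 3.19 }])); // placeholder data
  } else {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<!doctype html><h1>Hello</h1>");
  }
}).listen(8080, "localhost");
```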
I sympathize with the sentiment, but the web app only sucks if you're using the stuff that sucks.
Like any technology with decades of evolution it has a thick sediment of peat. Half of Javascript, half of Windows, even half of *nix is garbage you should never use, but it's all there because old things would stop working without it.
It's just that the web has a very low barrier to entry and very high reach, so the compost doesn't get thrown out as quickly as it should. So people still pack jQuery when they need to select elements, or pull a left pad from npm without realizing it's in the language core. Or pack Reactiflux when they want to do a form.
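Both of those are one-liners in the platform now; a quick sketch (padStart assumes an ES2017+ engine):

    // jQuery's $('.item') has been built in for years:
    const items = document.querySelectorAll('.item');

    // ...and left-pad has been in the language core since ES2017:
    '5'.padStart(3, '0'); // '005'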
In an age where you can literally compile existing, GPU-heavy C++ code to WebAssembly and run it in the browser with no fuss, you can't complain the web doesn't let you do things right, or at least the way you want. It's just admittedly easy to hop on the wrong library bandwagon and complain when things go wrong. But it's not a problem with the web.
> but the web app only sucks if you're using the stuff that sucks.
Please show me anything that doesn't suck on the web. And yes, I've been doing web development for close to 17 years now.
There's almost nothing that doesn't suck on the web. The languages, the tooling, the platform - you name it. It is good for one thing, and one thing only: displaying single-page interlinked documents with little to no embedded media. Any and all attempts to make it do anything else end up as bloated, incomplete, internally inconsistent, overlapping monstrosities.
But you can't. The moment you need to do a layout, you're back into HTML + CSS and its inability to cater to anything beyond single-page documents (not apps) ;)
Having written things targeting WASM: when you can provide me an environment like Visual Studio, with breakpoints (including data breakpoints) and a debugger I can step through, then we can talk.
Until then WASM is cool, but not nearly as productive for C++ as the native platforms.
We could've had that years ago if Mozilla had not (as all the browser vendors do depressingly often[1]) decided to torpedo NaCl for nonsensical reasons that boil down to "NIH," in favor of creating a far-inferior, crippled spec practically designed to be aimlessly bikeshedded for years.
[1] Mozilla usually pulls such NIH moves to sabotage the introduction or use of languages (even DSLs) other than JS on the web. See also: WebSQL, Dart, HTML5 vs plugins, HTML5 vs XHTML2. Whether you agreed with their position on those disputes or not, you have to admit there's a pattern.
Again, not picking on Mozilla, everyone's an offender: Microsoft generally slows things down so their browsers don't get too outdated, Apple pursues vendettas against competitors and is myopically focused on moving mobile forward while neglecting desktop, Google's constantly attempting to muscle through user/privacy-hostile misfeatures and highly-specialized features that improve their own web apps more than the web as a whole.
XHTML2 was dead on arrival. Mozilla didn't do anything to sabotage it; it sabotaged itself by completely dropping backwards compat in such a horrible way that you could only implement XHTML2 or something that would render existing websites, but not both, unless you jumped through some pretty ridiculous hoops. It turned out, no one wanted to jump.
Plugins were killed by most browser vendors more or less at once, and Mozilla wasn't even the first one.
Dart was opposed by every browser vendor who wasn't Google, though the reasons may have differed. Same thing for NaCl.
WebSQL is the one thing that I am aware of that Mozilla in fact opposed when others were broadly in favor, but the reason was not NIH. It would have been pretty simple to implement WebSQL in Gecko. The opposition came down to two things, I believe. The first was a simple observation: the W3C process at the time required two interoperable independent implementations, and there weren't any for WebSQL; the only implementation that the spec allowed, if you were going to achieve interoperability, was a particular version of sqlite. There were various ways to solve this problem, including abstracting away the database more (i.e. developing an actual Web SQL with well-defined semantics that were not tied to a particular implementation), but none of the WebSQL proponents were willing to go ahead and put in the time to do that, as far as I can tell.
The second issue was the fact that WebSQL had synchronous database queries going on. The storage API really should be async, if it's going to be accessed from the "main thread" (the one the Window object lives in). I do think we could have done better than IndexedDB, though. It, like many other recent web specs, feels way over-engineered to me.
[Disclaimer: I work for Mozilla, and did back when most of the things you mention were being discussed, but was not actively involved in the WebSQL/IndexedDB discussions.]
> Dart was opposed by every browser vendor who wasn't Google, though the reasons may have differed. Same thing for NaCl.
Heck, Dart was opposed by the Chrome team: there's a reason why it never made it into Blink. NaCl is slightly different insofar as the Chrome team didn't actively fight it.
XHTML was not "dead on arrival"; that is some seriously fabricated FUD. XHTML came at a time when it was entirely positioned to take over as the proper way of doing things. It had its own mime type to differentiate itself from HTML, in order to allow older content to continue to be served as soup during a deprecation phase. The demand for this strictly validatable syntax was incredible; it was absolutely in a place where it should (not could) have become the new standard.
It wasn't us web developers who rejected the call to action. We were begging the other vendors to add support for the XHTML mime type. I spent two years of my career preparing for the transition that never came. We were at the point where we served a different mime type depending on the requesting user agent, having refactored everything to return perfectly compliant XHTML responses. That is how seriously the industry anticipated the changeover.
It was the browser vendors who turned a blind eye. The childish browser wars, throughout which each company refused to cooperate with the competition out of self-interest to hoard the market, mutilated the web. Had the vendors all agreed to support XHTML within a span of 6 months, today we would have 100% well-formed XHTML. Instead, browsers still parse meaning out of LITERAL GARBAGE. HTML soup is so pathetic that there are no words to describe it.
Please show me a programming or scripting language that allows you to write code with syntax errors, whereby the compiler or interpreter never throws an error, instead taking a best guess stab at what you meant to code. It doesn't exist, because... SURPRISE - the level of absurdity required to permit such a thing is unfathomable. And yet that is exactly what we have with html5.
Aside: what the actual fuck is up with CDATA elements still being required to be CDATA. The fact you have to write <script src="/main.js"></script> instead of <script src="/main.js"/> is the only thing someone needs to know in order to understand the disgusting origins of the "modern" web.
> XHTML was not "dead on arrival"; that is some seriously fabricated FUD
Are you talking about XHTML in general, or XHTML 2 specifically? They're not the same thing. I was talking about XHTML 2 specifically.
> It had its own mime type to differentiate itself from HTML
XHTML 2 did not have its own MIME type to differentiate itself from XHTML 1. This was precisely the problem, because it used the same MIME type, same namespace, and same localnames to mean different things from XHTML 1.
> today we would have 100% well-formed XHTML.
We can have a long discussion about XHTML 1 and whether it would have seen better uptake with better support. I will only note that all browsers support XHTML 1, with the XML serialization, and have for years. And similar for HTML5 with its XML serialization. Yet neither one has any uptake...
I should also note that your "browser vendors" lumping-in is a bit weird. The only browser vendor that did not support XHTML was IE (admittedly a large fraction of the market, which made deploying XHTML hard). But you make it sound like there was some conspiracy of browser vendors to ignore XHTML, when in reality all of them except Microsoft implemented it fairly quickly.
> The fact you have to write <script src="/main.js"></script> instead of <script src="/main.js"/>
You don't. <script /> is valid, but in XHTML. If you don't get the mimetype right, and the browser isn't parsing you as XHTML, it won't work.[1] In HTML5, self-closing tags are only valid in particular contexts, and this isn't one of them.[2] (Really, for the HTML tags, you can pretend that self-closing doesn't exist in HTML5, so no <script />. Since script sometimes has content, it needs a closer, so </script>. I do find it as annoying as I suspect you do, however.)
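You can watch the two parsers disagree from a dev console; a small sketch using DOMParser (available in all modern browsers):

    // HTML parser: the "/" on <script/> is ignored; the element stays open
    // and swallows everything after it as script text.
    const asHtml = new DOMParser().parseFromString(
      '<script src="/main.js"/><p>hi</p>', 'text/html');
    console.log(asHtml.querySelector('p')); // null -- the <p> became script content

    // XML (XHTML) parser: the same tag is a complete, empty element.
    const asXml = new DOMParser().parseFromString(
      '<script xmlns="http://www.w3.org/1999/xhtml" src="/main.js"/>',
      'application/xhtml+xml');
    console.log(asXml.documentElement.localName); // "script"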
The point I was making is that nobody uses XHTML thanks to the browser vendors' refusal to accommodate it early on when the demand was rampant. By the time the comparison was html5 vs. XHTML2 instead of HTML 4 vs XHTML1, it was too late as we had been trained to ignore the XHTML variant due to the vendors' absolute refusal to even make XHTML1 work. If you know of a single major site (not somebody's little side project) that uses the XHTML mime type, please share so I can be amazed.
The fact that the html5 spec does not permit self-closing CDATA elements is precisely the kind of legacy trash we'll be dealing with for yet another 10-30 years. (I understand that html5 didn't change the parsing rules from HTML 4 in order to be backwards-compatible, but it's still infuriating).
> The point I was making is that nobody uses XHTML
I don't disagree here.
> The fact that the html5 spec does not permit self-closing CDATA elements
The HTML spec does permit self-closing <script>: in the XHTML syntax.
The HTML5 specification defines two "concrete syntaxes" for HTML: HTML, and XHTML. The latter supports self-closing <script> tags perfectly fine.
The former (the HTML syntax) only allows self-closing tags in two contexts: void tags (which <script> is not), and foreign tags (e.g., SVG, and XML-like stuff). Now, perhaps you can argue that they should just have allowed it on all elements, such as <script>; frankly, I feel like the reason the standard permits it on void elements at all is just to handle the legions of webdevs out there who think they're writing XHTML but only ever use the syntax for <br/> and are incorrectly serving the resulting soup with text/html.
But, if you're writing the HTML syntax, just write the HTML syntax. Some elements require the end tag, some don't. Typically it is easy enough to tell by asking "could this element have content?" (if yes: end tag; else: no end tag). If you want more consistent parsing rules, that's what the XHTML syntax is for. (Though I agree, it doesn't seem to see much real-world use.)
(Frankly, I greatly prefer the gentle fallback of the HTML syntax to the hard error of the XHTML syntax, which is decidedly user-unfriendly.)
> By the time the comparison was html5 vs. XHTML2 instead of HTML 4 vs XHTML1
The relevant comparison is html5 in its HTML serialization vs html5 in its XML serialization. The latter works in every single browser, and has since IE9 shipped in 2011. No one uses it.
> If you know of a single major site (not somebody's little side project) that uses the XHTML mime type
There aren't any, because I suspect people building such sites all discovered the same thing: ensuring well-formedness is _hard_ in practice, and if it's required for the page to be shown at all, then your page will fail to be shown every so often. And no one wants to deal with that.
Back when some people were in fact trying to use XHTML on the web, every so often you'd run into this on some site that sent XHTML based on "Accept" headers. You'd load the site in Mozilla (suite, then Firefox when it came into being) and get an XML parsing error.
There were two common sources of this problem. First, someone editing a template and forgetting to modify closing tags to match opening ones. This can be solved with server-side enforcement of template well-formedness, of course. But it means you can't have your start and end tags in different parts of the template or different templates, which people wanted to do.
Second, insertion of content you don't control, whether it's user-contributed, or coming from some other team (e.g. content-production team on a news site feeding their bits into the CMS templates), or coming via a content provider like the AP or whatnot. You can mitigate this by using a fully DOM-based workflow, serializing before you put on the wire, instead of pasting together strings. But now you have the problem of producing a DOM from whatever non-well-formed garbage you were handed. Yes, you can just reject non-well-formed input, but if you have no leverage over the producer of that input, that just means you can't do your job. OK, so maybe you have a more liberal parser on the input end and then ensure everything internally operates on trees, not text.
But the upshot in the end is that you end up with a lot more effort and the benefits are not entirely obvious (at least not entirely obvious to your management; there are certainly obvious anti-XSS benefits to having good control of what tokens end up in your output and where escaping happens, etc). So the path of least resistance is to just not go there in terms of the XHTML serialization of HTML.
> The fact that the html5 spec does not permit self-closing CDATA elements
I'm not sure why "CDATA element" is important here. You'd want self-closing <style> and <script> but not self-closing anything else? The idea doesn't even make sense for <style>, so presumably you just want self-closing <script>?
>>>the W3C process at the time required two interoperable independent implementations, and there weren't any for WebSQL
This is a convincing argument that the rule is stupid, not a convincing argument against WebSQL. Standardization processes are and ought to be a means, not an end in themselves.
If standards body rules are blocking progress on new features which are eagerly anticipated by developers and significantly improve the experience for users, that means the rules are broken. Standards bodies work for the community, not the other way around.
Also, this particular bit of standards-lawyering was a blatantly-hypocritical dodge. Virtually every web technology was first implemented in one browser before it was in others.
This is an all-purpose, substance-free objection that could've been, and can be in the future, made against any significant web technology, including those promoted by Mozilla.
Also, every browser, including Firefox, implements IndexedDB with ... sqlite.
One of Mozilla's actual arguments was "we surveyed front-end webdevs, and they said 'ZOMG, SQL isn't webscale!! XD'" Apparently browser development is to proceed on the Idiocracy principle.
But leaving that aside, Apple and Google did their own surveys and found that developers (who had actually used or knew of WebSQL) were overwhelmingly positive. Whereas impressions of IndexedDB are overwhelmingly negative, especially vis-a-vis WebSQL.
To this day, 7 years after its deprecation, and despite never having been implemented in a Microsoft or Mozilla browser, WebSQL is what developers have voted for with their feet; it remains far more frequently used than IndexedDB.
Even as a cross-browser solution, the default remains LocalStorage while IndexedDB languishes in much-deserved obscurity.
>>>the only implementation that the spec allowed, if you were going to achieve interoperability, was a particular version of sqlite.
Good thing then that sqlite is one of the most mature and stable programs in existence. sqlite's query API has broken backwards compatibility less in 17 years than most any web API does in 5. It wouldn't even be particularly burdensome to track sqlite in near-real-time.
Implausible worst-case scenario, you have to fork sqlite at a specific version. sqlite currently has 3 part-time maintainers.[1] The costs associated with maintaining a fork would be a pittance for an organization Mozilla's size.
Also, sqlite is free software, unencumbered by patents — there's absolutely nothing preventing anyone from making their own independent reimplementation of sqlite, it's just that there's no reason to because the original implementation is comprehensively battle-tested and of excellent quality by any metric.
sqlite is so good that, forget about sqlite's dialect, nobody feels the need to develop a competitor in its niche of embedded RDBMS, period. This is an excellent reason for using sqlite, not against.
>>>There were various ways to solve this problem, including abstracting away the database more (i.e. developing an actual Web SQL with well-defined semantics that were not tied to a particular implementation), but none of the WebSQL proponents were willing to go ahead and put in the time to do that, as far as I can tell.
I'd have preferred an ActiveRecord-style API, which in addition to being more ergonomic also would've been independent of a specific backend, but you can't let the perfect be the enemy of the good. Or abandon both perfect and good in favor of unusable garbage.
>>>The second issue was the fact that WebSQL had synchronous database queries going on. The storage API really should be async, if it's going to be accessed from the "main thread" (the one the Window object lives in).
This is incorrect: WebSQL's API is entirely async. But even if it weren't, it wouldn't matter, because it's blazing fast: around 50x faster than IndexedDB, and as often as not it's the JavaScript engine that struggles to keep up rather than the reverse.
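For reference, this is roughly what the (now-deprecated) WebSQL API looked like; note that results arrive via callbacks rather than return values. A sketch that only runs in browsers that shipped WebSQL:

    // Every step is callback-based, i.e. async.
    const db = openDatabase('notes', '1.0', 'demo', 2 * 1024 * 1024);
    db.transaction(tx => {
      tx.executeSql('CREATE TABLE IF NOT EXISTS notes (id unique, text)');
      tx.executeSql('SELECT * FROM notes', [], (tx, results) => {
        console.log(results.rows.length); // rows arrive here, not as a return value
      });
    });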
>>>I do think we could have done better than IndexedDB, though. It, like many other recent web specs, feels way over-engineered to me.
That's one of the less colorful ways of putting it, yes.
This one's getting quite long so I'll address your other points in a separate reply.
There's a good reason for the two-implementations rule: it's to make sure the standard is sufficiently clear to be actually implementable.
> Also, every browser, including Firefox, implements IndexedDB with ... sqlite.
But abstracted away. And, importantly, not tied to a particular version. So if there's a security bug in sqlite (yes, I know, rare), you can just fix it without changing the web-exposed behavior in any way, for example.
I understand that you like sqlite. But it's not clear that having a web standard that says "yeah, just ship sqlite" is the right thing. For one thing, that requires you to ship code in a particular language (C). That's usually something standards try to avoid.
Erm, "Mozilla usually pulls such NIH moves to sabotage the introduction or use of languages (even DSLs) other than JS on the web." maybe it is also because their ressources are limited and implementing a new language like dart is somewhat expensive. And NaCL even more I think.
But I actually have some anger as well on them for enforcing indexedDB and killing everything else. So now we have to use f indexedDB for storing things locally. You apparently would have prefered WebSQL, I would have liked FileAPI.
Still, I don't believe it is out of evil attempt, just limited ressources mixed with stubbornness.
"Google offered to donate their engineers' time to implement Dart and/or NaCl in Firefox and Mozilla still refused."
I did not know that - that is really a bad move.
"while Mozilla was napalming skyscraperfuls of $100 bills with their Firefox OS nonsense."
Well, I do not believe Firefox OS was nonsense. It was maybe just too ambitious, and too many mistakes were made, like the focus on the low end. But almost all of the development for Firefox OS directly helped the Web in general, because most of the work was work on web standards anyway. Just the ones for calling and SMS you cannot really use, but all the other things, like battery status, were beneficial.
The web is way worse and less coherent than something like Cocoa, even though Cocoa is older. That’s because the former was designed for text documents.
Slight bit of pedantry: only the AppKit (1989) component of Cocoa predates the web (1990) while Foundation and Core Data (both 1994) came later. Of course, many nowadays-integral features of the web also came after its initial release, such as CGI and the <form> and <img> tags (all 1993), cookies and HTTPS (both 1994), Javascript (1995), HTTP headers, methods other than GET, non-ASCII text encodings and CSS (all 1996), AJAX (1999), and the <video> tag (2007).
Anyhow, I'm in full agreement. The web honestly isn't even that good a design for [hyper]text documents; HyperCard (as one example among many) was a great deal better, and better for graphical and multimedia content, and for applications too. Of course HyperCard wasn't cross-platform or served over a network, but it easily could've been adapted to be.
I don't get the section on JSON, which seems to assert that XML is more secure than JSON. It does this by linking to a Wikipedia page that includes the security consideration that you shouldn't call eval on JSON. True, but at least that's a tractable problem. Your linter can check for code that calls eval.
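Concretely, the vulnerable pattern and its fix are each one line, which is what makes it tractable (ESLint's no-eval rule flags the former):

    const payload = '{"user": "alice"}';

    const unsafe = eval('(' + payload + ')'); // executes arbitrary code if payload is hostile
    const safe = JSON.parse(payload);         // accepts only the JSON grammar

    JSON.parse('alert("pwned")'); // throws SyntaxError instead of running code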
In contrast, there are plenty of XML attacks (DOS with billions of laughs, entity references), and parsing XML is a lot more complicated, which matters a great deal if you're using non-memory-safe libraries for parsing. Also, because XML is so general purpose, you get things like libraries allowing deserialization of arbitrary objects from XML, which is a security nightmare.
That last point isn't a clean win because sometimes the same library will handle JSON and XML, and so you have to audit the use carefully. However, if you're sure that your serialization libraries only use JSON, its simplicity means that it shouldn't have that kind of deserialization vulnerability.
P.S. If you want to rag on JSON, that's fine. It's not a great format. But "it's less secure than XML" is not the tack I'd take.
"If you want to rag on JSON, that's fine. It's not a great format"
JSON is simple and powerful.
Its success justifies calling it a great format, I think.
But I am curious: what would be a great format, in your opinion?
JSON gets the most important thing right: it's simpler than XML. I probably should have rephrased my comment. I literally mean that it's a decent format, but not a great one.
As a data-interchange format, it lacks first-class dates/timestamps, its loose specification is a pain when dealing with numbers (depending on the language, you may not be able to use the full range of 64-bit ints), and it has no built-in concept of schemas. That said, I'm not sure what I'd recommend above JSON, though you clearly wouldn't use it for really high performance servers.
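The number issue in two lines, for anyone who hasn't hit it (doubles are only exact for integers up to 2^53 - 1):

    JSON.parse('{"id": 9007199254740993}').id; // 9007199254740992 -- silently off by one
    Number.MAX_SAFE_INTEGER;                   // 9007199254740991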
It's also used as a configuration language, where more issues surface. It doesn't accept trailing commas or comments (I can see the argument for requiring all keys to be quoted, but that's a pain to write as well). For configuration, I think I prefer TOML to JSON. Dhall looks cool, but I've never used it.
I think it's weird to criticize JSON the format for lacking these specific features. I think there is absolutely an argument to be made that its simplicity often means that it's not the right tool for the job. There are a lot of developers who are stuck on making JSON do horrible things and end up reinventing the wheel poorly because 'XML is gross' but I wouldn't say the right answer is to add these features to JSON.
Yes, your criticism is very valid and I agree with it - I still think JSON is a great (simple!) format, though.
And I hope that all the missing things, like schemas, can and will be added at some point, when there is a consensus on how ...
And TOML and Dhall look interesting, but I like the bracketed block approach to data the way JSON does it.
A standard way to represent at least the common data types (dates/times, integers/longs, and fixed-precision decimals) would be nice. I know that the JSON format doesn't specify any limit or precision limitations on the size of numeric values, but almost all libraries assume they should be read as doubles. It is a real pain for people doing analysis on semi-structured JSON after apps have been deployed for a while and the data model has evolved over time.
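Absent a standard, the usual convention is ISO-8601 strings for dates and decimal strings for big integers, re-hydrated by hand. A sketch using JSON.parse's reviver hook (the field names are made up, and BigInt assumes a recent engine):

    const raw = '{"created": "2017-08-01T12:00:00Z", "id": "9007199254740993"}';

    const obj = JSON.parse(raw, (key, value) => {
      if (key === 'created') return new Date(value); // ISO string -> Date
      if (key === 'id') return BigInt(value);        // string -> exact integer
      return value;
    });

Every producer and consumer has to agree on those conventions out of band, which is exactly the pain being described.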
>In part 2 I’ll propose a new app platform that is buildable by a small group in a reasonable amount of time, and which (IMHO) should be much better than what we have today... Next time: how we can do that.
I look forward to that article. This one, on the other hand, seems a little pointless. Does the web have problems? Yes, absolutely. But I have a hard time believing the best way to solve them is to tear down everything we've built so far and start over.
People seem to think that there was a time when the internet was better than it is today. Well, I've been 'online' since before the web was world wide. Frankly, it was never good.
In fact, it is better now than it has ever been. It's just people choose to use the worst parts of it.
I've seen the various tech that was supposed to rebuild and revolutionize the web. It's just created more kludge. It's just lipstick on a pig. It's just one more set of standards that get half-ass implementations and even worse support.
If you tear it down and rebuild it, it's just going to end up the same except it is using different names for the protocols.
I'm not angry when stuff breaks. I'm amazed it works at all.
And, truthfully, I kinda like it the way it is. We have, at our fingertips, vast amounts of information and entertainment. It works, after a fashion and for some definition of 'works.' If the Internet sucks for them, maybe they should look elsewhere? The Internet is huge. It's not hard to find parts that don't suck.
Tearing down and rebuilding isn't going to work and nobody is going to invest in that. Hell, we can't even get ubiquitous IPv6 adoption. Not one browser is fully compliant with HTML5. And it's okay. It works, mostly.
Everyone has a different definition of what's good and what's bad about the web. A lot of smart programmers seem to think almost all software is bad. Probably all software that is actually used is not as good "as it could be." Any evolutionary process is going to be like that.
HTTP and HTML were absolutely not designed for many of the things they are used for today. A bunch of really smart people probably could come up with a much better solution for modern usage, and lots of them have tried. But the web has too much inertia (the users are there and don't care about these problems) and, as you say, it more or less works, or can be made to work.
It does seem inevitable that it will be superseded eventually, but how far off is that?
Whenever an OS deprecates, but does not kill, something in an API, somebody ignores that it is deprecated, doesn't use the new method, and writes new software against the now-deprecated function.
There are people still using legacy software that was first written in the 1970s. Someone took that software and converted it from punchcards to hard drives, and from memory that was a spinning drum to memory that is solid state.
Somewhere, there are COBOL developers still maintaining stuff older than many of the folks that frequent HN.
I suspect you're right, in that it will be superseded - but I am willing to wager that it is going to take a long time and never be completely done. There is stuff that hasn't been updated since the 486 days and is mission critical. Fortunately, it works - because nobody has any idea how to fix it if it stops working.
As a society, we've accumulated so much technical debt that we may have reached a tipping point where it's simply impossible for us to catch up and it's unrealistic to think we will burn it to the ground and rebuild.
I suppose some external force could crash the house of cards but I suspect we'd just rebuild it with new faults or the same old faults.
Like you say, HTTP and HTML weren't meant to do this. Now we have webassembly, HTML5, and JavaScript libraries that nobody fully understands. We've now tacked on DRM to the standards, put the real functionality in the hands of ICANN, and crammed our data into towering silos of proprietary goodness.
We had a brief moment where we largely owned our devices and our data. Now, we lease supercomputers for our pockets while giving control of our data to a mysterious entity known only as The Cloud. 100 years from now, nobody is going to know how it works and we will attend churches where we pray, sacrifice, and tithe to the god known as The Cloud.
It will be superseded, but it will be just another kludge patched on top. It's like cars in Cuba. They are old and functional, but contain engines from a Lada, bumpers from a bus, seats from three different cars and a horse drawn cart, an exhaust made from tin cans, and four different size wheels.
And you know what? Those cars are a testament to the resiliency and skill of the Cuban mechanic. They are awesome. It's not amazing that they break down, of course they do. It's amazing that they run at all.
On a more serious note, I suspect well just keep patching and tweaking. Eventually things will get better. It has been steadily getting better this whole time.
I like to complain and point out the flaws, but it really does function. It's great, and the immediacy of information has been a great asset for humanity.
The Internet really is better than it has ever been. Searches used to be done by a human. As in, you'd send them your question and they'd go through their directory, make phone calls, contact institutions, and get back to you with an answer - usually 3 days later. Yup... Three days to get an answer. Sometimes, you had to wait for a system to come online, usually a small localized network, and only then would your email be delivered.
It works. It's like a dysfunctional family. Loving, possibly abusive, but our family. I suspect it will continue to improve, slowly but surely. Smart people are constantly innovating and improving. Standards and specs get refined.
The Internet, being vast, means there is a place for pretty much everybody. It has its warts and there are legitimate complaints, but sometimes it actually does what it is supposed to do, when it is supposed to do it. Sometimes, possibly by accident, people make good choices that result in good things.
Also, cats... So long as we have cats, the Internet will be just fine. Gotta love it, warts and all.
Kudos for "lipstick on a pig". My thinking entirely. I don't really understand how the web got so broken. But you're right it's not going to be torn down any time soon, any more than the human eye will unevolve to fix the blind spot. Like biology, the web will evolve with kludge upon kludge, as long as it kinda works.
I think the best hope is for some language/platform that abstracts the whole thing away safely and lets you pretend it's not there. (And yes, I'm sure there are hundreds of these already. We just need to all agree on one!)
Oh yes, one more spec or standard should do the trick!
On a more serious note, it has its faults but it works. Maybe people are asking too much of it? Maybe they need to adjust their expectations?
There are lots of ways that it can be improved. I can think of one innovation coming down the pipe that impresses me. HTML5 doesn't, by itself, really impress me. WebVR, or whatever they are calling it, doesn't impress me - I remember VRML and the fiasco that was. No... The new DRM spec doesn't scare me - I figure it's just going to give a standard interface to what is already going on.
What does interest me is WebAssembly. That I find interesting, though a part of me expects it to end up similar to Java applets from back in the day. It interests me because I am expecting it to be a boondoggle.
The rosy-cheeked, starry-eyed youth have assured me, quite breathlessly, that this is a game changer. This time, this time it will be different. We're finally killing Flash and they've gone and reinvented it. I'm probably going to have to add another 32 GB of RAM just to use a browser. But it's going to be different this time. They've got a plan.
So, I'm interested in seeing how that turns out. I'm the quintessential optimist. I have every hope in the world that someone will come up with a way to selectively block it.
Really, the web is doing okay. When we stop and look at all the crap we've shoveled onto TCP/IP, I'd say it has held up nicely. I really can't think of a single bit of tech that has taken more abuse than TCP/IP. HTML and CSS are up there, but I'm pretty sure TCP/IP can lay claim to the most abused spec.
Yet, the 'net lumbers on. It's kind of amazing and it is a great time to be alive.
Well... that's the deal with unfixable things. You tear them down and try again from scratch. Sometimes a step back is a giant leap forward. But reading the comments here suggests that web developers will never cease patching up the web. So there is no point in arguments; just do it and make it better. At some point web developers will realize that they were trying to fix a sinking ship.
People used to do that; in the 90s, CGI scripts were usually written in C, or even assembly. And let me tell you, since I'm old enough to remember it: no, it wasn't a great experience at all. It was actually quite horrible for web developers from today's perspective. Development was slow, painful, and hard to debug as hell. Also, it wasn't secure at all; hacker usenet groups were all about stack overflows back in those days.
One of my first jobs in the late 90s was a complete rewrite of a huge web app into Perl. It was originally written in pure C, and it had so many security and stability issues due to bad casts and stack overflows and null pointers and all that usual C stuff, that today it'd be considered completely unusable (back then corporate users were far more tolerant, I guess). The Perl rewrite fixed it all: no stack overflows, no worries about casting every input every freaking time, no SQL injections (Perl DBI used prepared statements); everything worked like a charm. And it took us only a fraction of the time it took for the original development. The programming cycle was like 10x faster since you didn't have to compile first (just that was worth it), the code was easier to read, we were much less likely to make stupid errors, etc. That's why everyone moved to Perl and then PHP, Python, Ruby, etc. in the first place. They are simply better tools for the job; history proved it already, like 20 years ago.
And can anyone still read all that Perl? I used to speak Perl but I know I'd be far better able to understand some C I'd written 20 years ago than any of the Perl I did back then.
Fun fact: The colorful Windows 98 explorer sidebar that is shown in the first screenshot was implemented as HTML (and Javascript), along with the desktop background (called active desktop).
Actually the folders themselves were too. You could edit the actual CSS and HTML behind them – changing the colors, putting in backgrounds – by placing specially named files in the directory. Windows 98 was a glorious thing.
Indeed. I once built an “Active Desktop” web app — if you will — for my mom as a holiday gift. It just rotated through some pics of the family as wallpaper.
I remember when the services supporting that UI would crash and you'd just get a generic, blank webpage with one link as a desktop. Those were fun times. And boy...Outlook back then would do some wacky stuff. I seem to recall you could trick it into executing javascript in a system context which was hidden in png attachments or something to that effect.
An article about the web and it doesn't mention quintessential terms like 'link', 'network' or 'platform independence' even once.
Do you know why your 1990s app was so much 'better' than today's web apps? Because it was only supposed to run on Microsoft Windows, and a specific version of Windows at that!
The same applies to layouts. If all you have to deal with is SVGA and Windows 95, layout constraints are a piece of cake.
> Really impressive software would be embeddable inside Office documents, or extend the Explorer, or allow itself to be extended with arbitrary plugins that were unknown to the original developer.
Those were only reinventing the datatypes system introduced in AmigaOS in 1992.
> In part 2 I’ll propose a new app platform that is buildable by a small group
Yeah, because that's always worked so well in the past.
> there’s no Web IDE worth talking about
Save for Intellij IDEA / WebStorm, Visual Studio Code ...
> Because it was only supposed to run on Microsoft Windows, and a specific version of Windows at that!
I worked on an ebook editing tool in 2013 that only supported the two most recent versions of Chrome. If you have a product that is truly valuable to your customers, they will install dependencies to make it work.
Firstly, rewrites never work and people won't abandon a working solution for a "better" one unless it has a killer feature.
But the real thing is: the "web platform" is not the result of a design process, it's the result of a war. It's the stalemate point between so many competing technologies. It's the no-mans-land between warring monopolists.
Any monopolist could have given us a "tidier" solution (although probably not a secure one either!) We could have had the ActiveX future, or even a global Minitel system. The web is unique in persisting without yet fully falling to any "winner takes all" effect.
I strongly disagree with the part about REST being a workaround for browser limitations. REST is about manipulating an arbitrarily large namespace with a small number of verbs; an elegant and powerful design pattern that actually deserves a better platform than what HTTP provides. It transcends browsers and protocols and will be reinvented in any well designed system that manages to survive its own evolution.
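In code, that's a tiny, fixed verb set applied to an unbounded namespace of resource names; a sketch with hypothetical URLs:

    async function demo() {
      await fetch('/articles/42');                       // GET: read the resource
      await fetch('/articles/42', {
        method: 'PUT',
        body: JSON.stringify({ title: 'Updated' }),
      });                                                // PUT: replace it
      await fetch('/articles/42', { method: 'DELETE' }); // DELETE: remove it
      // The same handful of verbs works for /users/7, /carts/9/items, ...
    }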
The rest can basically be summed up as "the web is a mess," and I agree with that. Can it be otherwise? I have often thought the coercion of HTML+CSS into a platform for complex interactive applications has been pretty terrible. Some of the links to other blog posts that supposedly support Mike's argument are actually criticisms of "JavaScript development." Yet JavaScript is just a programming language, like any other; it isn't limited to web development and the flaws it has are being addressed. JavaScript is the baby in the bathwater and there is no reason it shouldn't be a big part of whatever comes next.
Show us something small, powerful, clean, open and that unifies desktop and mobile, and maybe you'll get somewhere. I am not one who believes the current paradigm is immortal; if this blog post contributed anything of value, I think it is the observation that the web stack has effectively failed mobile; native tool sets work better in almost every way and are indeed the first choice when you need to make high fidelity mobile applications. That shows the limitations of the web stack and provides an opportunity for a competing solution.
It's worth remembering that in the late 1990s, as Microsoft was facing its own security crisis and everybody hated the Wintel monopoly and how bloated Windows had become, several people put out calls for new systems to replace Windows. This was the era of Java applets, of Linux on the desktop, and of cross-platform widget libraries like Qt and WxWidgets.
What we actually got instead was the Web.
And the reason we got the web is that it was never conceived as an application platform; Microsoft crushed the only company that was positioning it as one (Netscape), declared victory, and then was caught completely unaware when new challengers like Google and Facebook sprang up, adopted the web for what it was, and totally ate Microsoft's lunch with it. By not looking like an OS, the web was able to differentiate itself in consumers' minds and avoid comparisons to a much bigger, more mature platform until it was so entrenched it was impossible to make go away.
If you want to build a replacement for the web today, your first priority should be to think of something that millions of people will use daily. It can (and should!) be really simple initially - the Web was first used for sharing scientific papers, and then for creating WebRings of band fanpages, and then for porn, and it took 20 years or so before full webapps became viable. But thinking about it from the perspective of how you make a secure, performant, maintainable programming environment for developers is exactly the wrong approach. History is littered with projects that do exactly that and fail to get anywhere.
I would add that anyone looking to replace the web should think about what problems the web doesn't solve. I think you could argue that this was what made the web itself successful. Mr. Berners-Lee saw that there was no system which had gained market acceptance that organized information in the way that people actually tend to organize it (which is not hierarchically). As you may have been alluding to, simply coming up with a more elegant, well-architected version of the web has no value to anyone outside of the engineering room.
I agree with most of the author's opinions. I'm old enough to have seen the rise & fall of 90's development.
Developers who have been ingrained in web technologies tend to think of the times before as dark ages. They really weren't. Today simply isn't the absolute best that all things have ever been.
Sure some things were a bit simpler & less flashy then and hardware was more limited, but there were a lot of great ideas that have been forgotten and shoved aside in the excitement for modernization. Best practices & whatnot.
But, what the author forgets, is that this is the state of things. Really inventive ideas exist, sometimes in niches, die, and then get rediscovered and reimplemented in circuitous ways. Asinine artifacts tend to arise in every paradigm shift.
I'd argue that word processing, for example, has barely caught up to the days prior to the graphical user interface. Now, with the web browser & mobile, it hasn't even come close to feature parity yet. It will, eventually, mostly. And it will be exponentially more bloated and complicated than before. Such are things.
the biggest security blunder of the web platform is allowing third party domains to inject code into https pages, which completely violates the trust that https is meant to establish.
and it's here to stay, folks! because the entire trillion-dollar ad industry is built upon it, vacuuming up data about users across the internet.
a huge amount of security and privacy issues would vanish overnight simply by requiring same origin.
the fact that my banking backend has third-party metrics scripts injected [without uMatrix/uBlock Origin] is unforgivable.
and of course half the web is broken without allowing 2 or 3 CDNs or cloudflare to track me everywhere i go.
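for what it's worth, sites can already impose same-origin on themselves with a Content-Security-Policy header; the catch is that browsers don't require it. a minimal sketch in Node (the header value is the whole trick):

    const http = require('http');
    http.createServer((req, res) => {
      // 'self' restricts scripts, XHR, images, etc. to the page's own origin,
      // so a conforming browser refuses the third-party script below.
      res.setHeader('Content-Security-Policy', "default-src 'self'");
      res.setHeader('Content-Type', 'text/html');
      res.end('<script src="https://tracker.example/t.js"></script>');
    }).listen(8080);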
Absolutely, let the server go to the advertiser and fetch the ad content to push; don't make the client do it. I'd go so far as to say that any content coming from a third party - even images - has caused more trouble than it's worth.
I agree and disagree. From the perspective of anyone making a web site it's very easy to secure yourself against JS running on a third party domain: don't load any.
That sites do load these scripts says a lot more about their priorities and the state of online advertising than it does about browsers themselves.
> From the perspective of anyone making a web site it's very easy to secure yourself against JS running on a third party domain: don't load any.
No, that's only half the solution; the other (much harder) half is to ensure you have no XSS. The GP's point was that if cross-origin scripting hadn't been allowed, it would have had big security benefits.
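Concretely, loading no third-party scripts doesn't stop an attacker who can smuggle markup into your own pages; the classic sink and its fix (the element and payload are made up):

    const el = document.querySelector('#comments');
    const comment = '<img src=x onerror="alert(document.cookie)">';

    el.innerHTML = comment;   // XSS sink: the onerror handler runs
    el.textContent = comment; // safe: the same string renders as inert text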
But the problem is that the server is deciding for the user. If they want to show me sketchy ad content, they can go fetch it and send it to me as part of my request to them. Don't tell me to go get it myself.
If an ad server is malicious, let it be the web server that has to deal with them, not me.
So suppose they go and fetch these malicious ads and forward them to you. Now you get that malware directly from the first party. The malware has all the same-origin access as the first-party application, and you can't trivially block it with things like uMatrix.
That's what I call server deciding for the user. And now you're in real trouble with security.
Yes, but then the malware can also compromise the server, since it now has JS access to all the users and can masquerade as them when they see the ad - even as admins. Keys to the kingdom. This is a feature, not a bug - it means the user and the server are now in the same boat, and the server will have some friggin' diligence about whose code they run.
Also, means the server has to pay for the damned bandwidth.
the implication of same-origin would affect all requests made from the client. you can serve malicious js from the server all day long but it would be restricted to only talking back to that same server.
i don't expect site authors to give 2 shits about security when the alternative is ad revenue. that's assuming they even understand the security/privacy implications of spending 5 seconds to add that one-liner social sharing widget.
75% of web devs won't bother to consider it and the other 24% won't care.
it's the job of browser vendors to provide safety for the masses. of course the giant conflict of interest here is that most browser vendors get a cut of the ad revenue.
there's a massive need for a payment platform that allows for browsing ad-free but still paying directly for content as-you-go. i think Brave is trying to do this.
cryptocurrency may provide the privacy protections for this type of arrangement.
There's one big problem with killing the web: Apple's App Store. You can make all the new platforms you want, but they will never be allowed to replace the App Store for distribution.
The web is the only platform that can do distribution outside of the App Store on iOS and Apple will never allow a second one. That means your platform can't have hyperlinks between apps, can't have a no-install experience, can't do just-in-time code delivery. Without those features you can't replace the web.
You misunderstand. If the app isn't installed those hyperlinks fall back to the web and not any other platform. You think it's ok to make a new platform where you're required to install every app ahead of time, or clicking any hyperlink just takes you to the web instead? You can't possibly replace the web that way.
There's no way to add a new hyperlinked platform to iOS that's not the web.
One problem I've had when developing for iOS is that universal links don't "trigger" after a redirect, which is very relevant for sending emails though something like MailChimp with click-tracking.
The irony of complaining that native, here portrayed as an alternative to a hopelessly multi-party web platform, won’t let you interject arbitrary parties between a link and its destination…
Also, the web is the only widely used, non-corporate/centralized way of obtaining software on mobile. I realize that I'm not part of a big demographic, but on CopperheadOS, opting out of Google Play, I can get software 1. as APKs (only if devs make them accessible, and they come without auto-updates anyway), 2. via F-Droid (which contains only strict FOSS), or 3. as web apps. The web sucks in many ways, but at least it makes more or less really _owning_ a _useful_ (flexible) smartphone a possibility.
But the links don't work unless the app is installed. Maybe they fall back to the web if you're lucky. Completely impossible to replace the web with a platform that requires you to install every app ahead of time before links will work.
> For the first time, a meaningful number of developers are openly questioning the web platform.
Lost me in the opening paragraph. "For the first time"? Please, people have been openly questioning the web platform for a decade now.
Ever since mobile (and its native apps) started "killing off the desktop".
Ever since people downloaded their first PhoneGap/Cordova app and saw how badly it looked and behaved compared to native widgets.
Ever since people pulled up a task manager, and noticed how much RAM and CPU that Electron-based app was using.
On the other hand... we've been openly questioning native too, ever since basic social media apps started weighing a hundred megs each. Every platform has its problems.
> It’s time to kill the web ... I’m going to review the deep, unfixable problems the web platform has: I want to convince you that nuking it from orbit is the only way to go
gee, those statements are bold. Not only JS or even the front-end stack, the author wants to kill the whole web and make a new one.
I can say it is not the first time I've seen an engineer see something imperfect and suggest that everybody should immediately abandon it to build something better from scratch.
Like many of you, I am looking forward to seeing the second part for a web alternative. What I am interested in is how the author plans to make his proposal as beginner-friendly as the web already is.
The reason the web platform is "ugly" is because nobody owns it. Multiple players need to get on the same page in order for things to happen. The end result is not pretty (different browser versions behave differently, incomplete specs, etc.), but one thing is constant - nobody owns the platform.
This is important because of things like this [1]. You may dislike some apps' mission, or approach to moderating content, but you cannot outright ban it from your platform, if you don't own the platform.
Apart from some justified points on how confusing origins can be, the author fundamentally misunderstands Web development.
Not only do users want fast sites and multiple of them open (so the performance point bears little weight), but the author points to OOP techniques, presenting them as necessarily superior to FRP because they came later to Windows.
Next the criticism on productivity and size of developer teams. Productivity has gone up tremendously in my experience e.g. by doing universal apps using React/Webpack/CSS Modules and following FRP principles, all of which you can do while maintaining Web semantics. If you haven't noticed gains in productivity, your workflow is wrong and you aren't taking advantage of the current tools. From my point of view, things used to be much worse and it is finally maturing.
I won't bother commenting on the rest because the article just goes downhill from there, in confusion, mixing up services with applications. The author basically just wants to write desktop apps, but also be able to take advantage of the Web's discoverability.
> "Next the criticism on productivity and size of developer teams."
Were we reading the same article? The main criticisms made by the author of the article were about web app security. If there was a mention of productivity it was only made in passing.
> "The author just basically wants to write desktop apps, but also be able to take advantage of the Web's discoverability."
While I understand the article and agree with it, I love programming web apps.
The combination of HTML/CSS is much easier (for me at least) to work with than most other rendering frameworks; also, JavaScript, while not perfect, is very good for fast-moving targets, and with "extensions" like TypeScript you can even manage very large applications in a progressive way (you can mix JS and TypeScript).
I also work with the Android layout system and a bit of the iOS one, and they are a lot more confusing (ConstraintLayout fixed a couple of problems on Android recently).
Currently I work mainly with a legacy ASP.NET app at work and shiny all-JS web apps at home, but I've also done a lot of work with WinForms and Android apps.
Also, the ability to update the app "on the fly" and to download only the part of the application that you need is pretty cool. You can make your users always use the latest version and quickly deploy hotfixes.
I understand that these properties are not desirable for every type of application, but sometimes they are game-changers.
What I really want is a lighter implementation; I think Facebook did something similar with its Facebook Lite app.
Here's the gist of what I really want from a layout/app runtime:
* A URL-style system for retrieving resources.
* A binary protocol similar to protobuf, capnproto, etc. for talking with the server.
* A clear separation between the template and the data, so that I can cache the entire page and only request the data for populating it (see the sketch after this list).
* A module and a permission system, with versioning (for backward and forward compatibility), maybe integrated.
* One way to store data (I personally like key-value systems, but I think a document system would be more suitable).
* A unified syntax for HTML and CSS.
* A component system (this is a big one). Ideally the spec should only define a div-style generic container that can be specialized into a new component by adding to it a name, a style, and optionally a script which controls its behaviour. (I'm not a fan of the web-component spec as it is.)
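To illustrate the template/data split, a rough sketch in today's terms (the selectors, endpoint, and use of a <template> element are my own assumptions; replaceChildren needs a recent browser):

    // The template ships once with the app shell and is cached;
    // navigation fetches only the data needed to fill it in.
    const tpl = document.querySelector('template#page');

    async function render(id) {
      const data = await (await fetch(`/api/pages/${id}`)).json();
      const page = tpl.content.cloneNode(true); // instantiate the cached template
      page.querySelector('.title').textContent = data.title;
      page.querySelector('.body').textContent = data.body;
      document.body.replaceChildren(page);      // swap in the populated page
    }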
What would you want from an alternative layout/runtime system for web-like apps?
I'm really curious!
PS: I really hope there will be more engineering posts about the Facebook Lite app; it seems like a really cool concept that could be used for a lot of other apps!
> The combination of HTML/CSS is much easier (for me at least) to work with than most other rendering frameworks
HTML+CSS is not a rendering framework. Also, come back with your "much easier" when you need to do anything even remotely resembling iOS's screen transitions, animations, and capabilities for constrained layout.
Even properly implementing the seemingly simple toolbar in Google Docs is an exercise in endless frustration.
My bad, I didn't express myself well: by "rendering framework" I mean both the language and the technology that actually "executes" it. I understand that different browsers do things differently, but the concepts are the same.
Also, I don't know iOS very well, but I have implemented complex layouts in both Android and HTML/CSS/JS, and I can say that Android is MUCH more frustrating, even when we consider browser backward compatibility, especially on the animation side (CSS animations were a bit hard at first, but in little time I was able to construct complex animations very quickly).
I have actually built a toolbar like the one in Google Docs (which I use daily) for editing data in a timeline for appointments, which had to be compatible with IE8, and while there was a lot of pain, I was able to iterate and experiment a lot more quickly than on Android, and it wasn't that bad. Also, in my experience, when we talk about layout and user interfaces there is A LOT more documentation on HTML/CSS than on Android, and there are also more frameworks/libraries that can help you deal with browser differences.
This only means that Android managed to create something that's worse than HTML+CSS, which is quite an achievement.
Meanwhile, for every small thing that you need on the web you need to reinvent things from scratch. Animations. Lists/virtual lists. Containers. Toolbars. Menus. Keyboard shortcuts. Constrained layouts. Layouts in general. Interactions. Combinations of anything above. Any basic UI component and interaction you can think of is non-existent on the web, and is re-implemented poorly and inconsistently by an infinite number of various UI frameworks.
Flux is not equivalent to Windows Events. The analogy is DOM Events. Also, Flux is not required for building web apps.
After this I thought that it makes no sense to read the article further.
For me the problem with web apps is low performance, slow load time. Another problem is people who try to push programming patterns from functional languages (like immutable values) into mainstream JS libraries. Please use Haskell instead if you love immutable values that much.
Despite the slow web performance I find web apps open much faster than their iOS app equivalent. This is despite the fact that iOS apps have 100-200MB downloaded in advance while the web loads your binaries and assets on the fly.
The two arguments presented are:
1) Reinventing the 90s - Why is this necessarily a bad thing? The author links to a blog post discussing how Flux is similar to Windows 1.0, but the author of that post does not claim that makes Flux bad. In fact, they agree that it works well and scales well. The only thing they say is that, based on the experience of the 90s, it's likely we will have further developments on this model. Why should those developments not happen on the web platform itself? (Also, who thinks that if we started a completely new platform we wouldn't reinvent the 90s on it again?)
2) Impossible to secure - Yes, security is a huge issue with the web. But a large part of that is older designs, and many issues have been mitigated. As an example, the wiki section the author links to, showing why JSON is insecure, lists two insecurities: security issues in parser implementations (which are not written in web technologies anyway), and the fact that before 2009 and the widespread availability of JSON.parse and JSON.stringify, people used eval to parse JSON.
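For anyone who hasn't seen the two patterns side by side, the difference is roughly this (a minimal sketch; responseText stands in for whatever the server returned):

    // Placeholder server response.
    const responseText = '{"user": "alice"}';

    // Pre-2009 pattern: eval() executes arbitrary code, so a malicious
    // response body could run script in your page.
    // const data = eval('(' + responseText + ')');

    // Modern pattern: JSON.parse() only parses data and throws a
    // SyntaxError on anything that isn't valid JSON -- no code execution.
    const data = JSON.parse(responseText);
    console.log(data.user); // "alice"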
I'll be honest: I do think the web needs to be improved/changed/replaced. I don't think this article makes the point well, and it possibly focuses on the wrong things. But my biggest concern is with the idea that improvements need to be achieved by replacing the web instead of through the kind of incremental improvement we are already seeing.
I would be interested in Part 2 to see if the replacement the author has in mind is really worth it. It would need to at least be an order of magnitude better to sacrifice the compatibility advantages the web has, but it may still be worthwhile to think of what a platform written today from scratch would look like to focus the kind of improvements we would like to see on the web.
Also, my own personal plea: If anyone tries to create a new platform or front-end framework, targeting the browser or otherwise, please don't neglect accessibility for people with disabilities, i.e. with screen readers and the like. At least the Web sort of gets this right.
What about the fact that the Web platform isn't owned by any one company, but is supported (to varying degrees) by every major native platform? A shiny new app platform won't have that. That's important.
On balance, and having read the whole article now, I think you're right. I shouldn't have reacted before reading the whole thing and really considering it.
I apologize in advance for a political analogy but this sounds like "Obamacare is bad so we must repeal and replace it. I'll show you the replacement later."
I've been a webdev for two decades now and while the author highlights the problems correctly, almost all of them have known fixes and there are 'best practices' to avoid them.
Humans are not done with engineering and technology. We're still coming up with better ways to do things. Building for the browser is one of the best things we've done as a civilization. I can build something, send a link to my dad, and he can look it up on his phone with literally a single touch. How is that not amazing? It blows my mind every time I stop to think about it.
It sounds ridiculous to say that since buildings fall every now and then, it's time to kill dwellings or since cars crash frequently, it's time to kill transportation. So without seeing the author's replacement, I am not yet ready to throw away the browser and JS-ecosystem just yet. It's terrible that Authy's 2-factor was bypassed with one simple trick but that doesn't mean HTML/CSS/JS need to die. You could have the same exact issues with mobile apps, installed software, or even hardware devices.
> It sounds ridiculous to say that since buildings fall every now and then, it's time to kill dwellings or since cars crash frequently, it's time to kill transportation.
It's more like, since cars crash frequently, time to replace human drivers with machines.
I take his argument as replacing the web as an application platform with something designed from the ground up for applications.
For me the open web died when EME won. Someone should build an application platform from scratch instead of wasting more time on this crappy document platform with application capabilities.
It's probably not possible to evaluate this post without waiting for part two, where the author says they'll outline their plan for The One True Platform, because after reading part one I come away saying "yeah, so what?".
I don't think many people working on the web platform evangelise it as the most amazing software development platform that's ever existed. But they do recognise the reasons why it has proven to be as popular as it is, and what can be done to improve it. That's why we've got WebAssembly for native code, Service Workers for offline capabilities, WebGL for performant graphics, so on and so forth. Yeah, it's scrappy, but it has much more chance of being successful than some start-from-scratch idealised standard (that will have no security vulnerabilities, naturally) that someone just brewed up.
But hey! Maybe I'm wrong. Maybe part two will blow the web dev world away. But I'm not holding my breath.
The author seems to think that most developers think the Web sucks and needs to be killed. While this may have a sliver of truth behind it (in that a lot of us are dissatisfied with certain aspects of the platform), there's one thing the author fails to address:
If the Web (and Javascript) suck so badly, then what's up with all those Electron desktop apps? And all those react native/nativescript/ionic mobile apps?
IMHO, the author fails to address the only reason the web is popular as an application platform: it's still the only reliable way to make sure your code runs everywhere with as little effort as possible.
- Web apps remove the need for installing an application. This alone has multiple positive implications, such as lowering the entry barrier for usage.
- Most of the time you can use them from any operating system.
- Security-wise, locally installed apps are not more secure. If anything, a locally running app exposes a larger attack surface to the end user.
- The level of security major browsers provide is not within an average business's budget.
Finally, the most important point:
Security is a two-way street. Just like you can attack software installed on your computer, the software itself can be malicious and attack you. Web browsers provide guarantees about what a web application can and cannot do. Without these guarantees, it would be much harder to trust an application. Mobile operating systems try to solve this problem with permissions, and it has been rather effective, but not all people pay attention to them. With desktop apps you are largely on your own.
- The author failed to draw a clear distinction between "The Web" as an application platform and "The Web" as a network of semantic information.
- Digging deeper, "The Web" the application framework is pretty flexible. There are plenty of ways to use hypermedia and HTTP, while using your own non-HTML/CSS UI tooling.
- The article strikes me as ill-researched -- the author writes "Here’s a good blog post on Flux, the latest hot web framework from Facebook". Flux is definitely not the latest from Facebook, and some of the linked articles were from 2015. For better or worse (I think better), front-end is moving really fast, and the web platform roast listicles don't age well.
- The point about "UI Complexity" is just odd. UIs should not be complex. Comparing the windows explorer to Google docs is comparing fruits to vegetables. The point "look! we still have toolbars and shades of grey" has nothing to do with the web and everything to do with UX metaphors and familiar affordances.
- "Things as basic as UI components are a disaster zone". UI "components" are not basic! What is a component? No seriously, ask a programmer content with OO languages, and then ask someone who prefers functional languages. Then ask those developers to agree on an interface.
Though I do agree with:
- Web apps are slow. Painting is really complicated.
- So many apps are written with the assumption that they're always online. The author is right that users have low expectations when it comes to good offline experiences.
- The web wasn't designed with our contemporary single-page application use case in mind.
- JS could obviously be way better.
- The need for backwards compatibility is pretty crippling.
> Where desktop apps have exploit categories like “double free”, “stack smash”, “use after free” etc, web apps fix those but then re-introduce their own very similar mistakes: SQL injection, XSS, XSRF, header injection, MIME confusion, and so on.
All of these are pretty thoroughly solved by picking tools that don’t let you shoot yourself in the foot, and SQL injection isn’t remotely a web app thing. (The others are far less severe than RCE, anyway, as they only affect the one app. I guess the argument here is that the web platform isn’t yet optimal? Work is always continuing to improve security – take Content-Security-Policy, for example, which mitigates every type of exploit mentioned even if you do everything else wrong – and if you think it can’t work out, point out a real alternative.)
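For the concrete flavor of that, a CSP is just a response header (a minimal sketch using Node's built-in http module; the policy values are only an example):

    const http = require('http');

    http.createServer((req, res) => {
      // With no 'unsafe-inline' in the policy, injected <script> tags
      // and inline event handlers won't execute even if an XSS bug
      // slips through elsewhere.
      res.setHeader('Content-Security-Policy',
        "default-src 'self'; script-src 'self'");
      res.setHeader('Content-Type', 'text/html');
      res.end('<h1>hello</h1>');
    }).listen(8080);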
> web apps fix those but then re-introduce their own very similar mistakes: SQL injection...
Any system using a SQL database is susceptible to SQL injection vulnerabilities, web-based or not. Not to mention SQL injection is a largely solved problem. Concatenating strings for your database to execute is hardly the web's fault.
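For example, with parameterized queries the untrusted input travels separately from the SQL text, so it can never be reinterpreted as SQL (a minimal sketch assuming the node-postgres package and a hypothetical users table):

    const { Pool } = require('pg');
    const pool = new Pool(); // connection settings come from env vars

    async function findUser(untrustedName) {
      // The $1 placeholder is bound by the driver; even a value like
      // "'; DROP TABLE users; --" is treated as plain data, not SQL.
      const result = await pool.query(
        'SELECT id, name FROM users WHERE name = $1',
        [untrustedName]
      );
      return result.rows;
    }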
Screening apps by a large company is not bad. Users don't want to guess whether the app contains malware or not. They would prefer someone do it for them.
Magic Sorcerer Hat Mode: Part 2 is going to detail something like Plan 9, where you don't have to worry about where the disk is, or the information feed comes from; where authorization and authentication are baked into services - not tacked on later; where UI forms are composable, etc etc.
My surmise is that all of these things can be accomplished now with careful choices of how you do things.
And if you aren't Microsoft or Google, maybe you don't need to make Word or Excel on the web?
Even though it's a bit forced, I agree that the whole mainstream computing field has a weird non-ROI: machines are 10,000x faster, yet personal value/productivity is flat or declining.
It seems like you have to ignore all value derived from networks in order to come to the conclusion that software is no more powerful than in the 90s.
In the 90s I couldn't have met with my team, with members in Moscow, California, Pennsylvania, and Texas, in any reasonable way...today I can chat, including video and sound, on a whim!
Managing source code today is massively more productive than in the 90s. CVS (or, heavens forbid, RCS) on a central server was how it was done back then, if you had revision control, at all. It's not merely a better revision control system (git), it's the web-based infrastructure around it (github/gitlab/web-based CI/whatever). That wouldn't be possible on any platform that's less connected and less widely available than the web.
The rise of package managers is another massive productivity booster that maybe goes unnoticed (we all love them, but I think their productivity value is wildly underestimated... how else can you add 100,000 lines of code that probably work, in a couple of minutes, and reliably allow every member of your team to do the same?). Web technologies have enabled that. There's a reason npm has the largest package selection the world has ever seen, and I think it's the massive interconnectivity of the web platform. (This feels sort of vaguely defined, I guess... but there is a magic to the web platform.)
There's so many areas where we're more productive today because of the network effects of the web as a platform. Also, because the web is universal, I don't have to use Windows, ever. Everything I ever want to do has a Linux version. Anything that falls short of complete platform independence is probably a step backward, IMHO, even if it has other benefits like smaller/faster binary builds.
Also...WebAssembly is coming. We're going to see a fast/efficient web, long before a new platform could possibly be delivered.
Many of those benefits you listed are linked to the growth of the Internet. Package managers, video conferencing, distributed version control, all mostly Internet-based. The web is just one part of Internet activity. It's the part that requires use of a web browser. If you can do something online without using a web browser it's because of the Internet.
The criticisms that are being levelled at the web are related to it being an inefficient and insecure platform for applications. Note this is not Internet-enabled applications like package managers, but rather applications that run within the web. It might seem like a pedantic distinction, but it's a key one in understanding what's under fire.
I get that and tried to make it clear that I get that, particularly in the paragraph about npm; Perl has the CPAN, Python has had a half dozen package managers (I don't even know which one finally "stuck"), Ruby has gems, but nothing exploded the way npm did. So, what's different? I would argue it's not that JavaScript is a better language (though it's a pretty strong language today). I would argue that it's a better platform, and that platform, ultimately, is the web.
Every package manager works over the internet. Only one represents the web-as-a-platform. And, it turns out that's the one that has dwarfed all others in size and scope, in a quite short time. Nearly everything I mentioned above requires the internet for interconnectivity, sure, but also a platform that delivers it to the user. Any platform for building apps that fails to deliver at least as much as the web will never be as successful as the web.
Edit: Also, desktop apps have had the Internet for decades. What have they done with it?
> "Also, desktop apps have had the Internet for decades. What have they done with it?"
Automatic updates is the answer that springs to mind first. Is there something more they should be doing with the Internet?
The only other answer I could think of was 'social connectivity', but in the world of desktop apps, there's no major downside with splitting out social connectivity into separate apps. I don't care if a desktop 3D CAD application doesn't have chat functionality as there are specialist desktop applications for chatting online. Sharing a 3D model in a realtime online conversation is as simple as sharing a file.
So, this seems strange, to me, and maybe misses the point of the web entirely.
I'm not talking about bolted-on "social" features. There are entire multi-billion dollar industries built on categories of software that did not exist in 1990.
Google Docs isn't merely a word processor (spreadsheet, etc.) with social features...the "chat" is ancillary to the real benefits; it's an entirely different way to work with documents, and that's one of the examples of things that's extremely close to 80s/90s tech; it looks just like a word processor, and people from 1990 would know how to use it. But, it's not the same thing, and before it (and some other online document tools) came along, Word had extremely limited sharing capabilities (requiring ridiculous intranet servers to host the shared docs, and it was basically the same as passing it around via email only with slightly better revision control). That's as close to a traditional app as you can find, and it is still 100% more valuable for being on the web.
What would [ Youtube, facebook, Google Maps, Amazon, craigslist, Netflix, etc. ] look like as desktop apps, and why didn't they exist before they came to the web? The web is a unique (so far) platform with distinct benefits that weren't available to apps in the past. The reverse (desktop/native apps could do things web apps couldn't) was also true until relatively recently, but that's changing, though I don't think the interesting work is in porting native apps to web apps... people will do it, because they can, but the interesting work is in the new things made possible by the web itself.
> "Youtube, facebook, Google Maps, Amazon, craigslist, Netflix"
There's nothing stopping any of those being implemented as desktop apps. In the case of Netflix and Google Maps they already have equivalents on the desktop that are even more capable than their online equivalents, such as Kodi and Google Earth.
As for the collaborative document features of Google Docs, MS Office has this as well. The main benefit of Google Docs is its price.
> In the 90s I couldn't have met with my team, with members in Moscow, California, Pennsylvania, and Texas, in any reasonable way
I remember in the mid '90s that there were a couple of applications that would allow you to do that (netmeeting from Microsoft and cooltalk from Netscape). I don't remember how easy or difficult it was to find other users though.
Sure, I was speaking directly to what I know...but, are distributed docs in Google Docs not more productive than passing around a Word doc with annotations? Is banking, accounting, trading stocks, bill payments, buying and selling nearly any product, not more productive today due to the same forces? We (people in tech) are just at the leading edge of it...but, it impacts everyone.
But, I would argue that the web platform has produced a bigger productivity boost (in terms of output per unit of time) than any other single paradigm shift in computing history. Who cares if typing is a little more sluggish than the native app if you don't have to email the resulting document to everyone on the team and then converge edits at the end of the editing process?
Web properties have invested their growing processing power to the immense benefit of their real customers: advertisers. Of course users have not reaped productivity gains from something we don't pay for.
Except that there are a thousand times more computer users than there were 30 years ago, now that we have the computing power to render talking paperclips or whatever to make computers usable by the average person.
Sure, and what's your plan to replace the webapp? Remember, webapps replaced desktop distribution because they:
• Work on all platforms
• Run without installation
• Provide a quick development and update cycle
So whatever you'd like to replace the webapp with has to do at least _some_ of those things better. And unless it rolls out to nearly everyone who already has access to a web browser, you're going to be competing with the imperfect but "good enough" platform.
Further – the author criticizes open standards because they're not perfect. Sure, no one will be implementing the full HTML5 standard from scratch, and there's a lot of waste in what the W3C produces. But what's your proposed alternative? A return to closed, vendor-proprietary UI frameworks and DLL interfaces?
The article doesn't make sense without an alternative. What open standard is being proposed to replace the open WWW?
Mobile apps and app stores are not a replacement for the open Web, and it can't reasonably be argued that locked-down mobile devices loaded with craplets and no root access are better than the WWW.
I agree, this article doesn't make any sense without proposing an alternative. Sure, I think most people would agree that the web is still a bit rough for (large) apps compared to desktop apps. But nothing that can't be fixed, right?
I can't imagine that native phone apps will still be popular in 10 years. I think they will be replaced by the web in a similar fashion to how web apps replaced desktop apps.
There are still popular desktop apps. Native Office is still widely used despite Google Docs and the web versions of Office. Adobe applications are still widely used. Most programmers use desktop editors and IDEs. I use a lot of Mac applications.
I know it's cynical, but every time I see an "It's time to kill [x]" piece, I assume the author has some competing solution they're trying to hawk. In this case, very much no solutions are offered (but promised in future installments).
The web sucking is a symptom of its success, not an indication of some intrinsic inadequacy. HTML is fault-tolerant. JavaScript is an add-on, not a runtime requirement. CSS degrades gracefully. The web is resilient, forgiving, and accessible. And yeah, a little slow and broken, but so?
Long live web apps. It's not that they need to be killed; it's that they are evolving and need to evolve more. With so many OSes, HTML works perfectly for distributing a single codebase to all platforms. I see it the other way around: "web apps" are here to stay and evolve.
I just get sad knowing what I'm missing. I've worked with a bunch of desktop GUI builder IDEs (Visual Basic, .Net WinForms, WPF/XAML, and Qt) and I've seen the immense power they have in terms of developer productivity and application performance. Something like XAML is especially interesting because it brings the styling and responsiveness of HTML/CSS to the GUI builder paradigm. I started off with them and have slowly transitioned to working entirely with web technologies (PHP, Rails, Angular, React, you name it). Not without reluctance, for sure! It's a Faustian bargain to me: trading off overall inferior technology and developer experience for the sheer reach and ease of deployment of the web. It's nuts to me to design a visual thing like a UI by writing lines of code. The GUI builders of yore really nailed this by allowing you to design something visual using a visual modality (drag and drop, realtime layout designers, etc.). I try to explain this to folks who've only ever developed for the web, and usually their eyes glaze over. They can't seem to (or have an incentive not to?) appreciate the impedance mismatches and the fundamental trades being made with web user interfaces.
I agree that the web platform is kind of messed up, but web apps are just so accessible and convenient...
For essential apps, I believe most people would always prefer native versions. They are more convenient that way. (I don't want my local media player to be a tab in Chrome.) People generally are not using Google Docs because it is robust or feature-packed. They use it because they can load it up in a few seconds on a new machine, with nothing to install and everything synced in the cloud.
Actually, I think if there is a platform which allows users to run ANY app with just one click, it has to be a platform just like the web we have right now. Sure, if JavaScript hadn't been made in a hurry, we could have been spared a lot of effort - but dialects and attempts to "reimagine" and "personalize" our weapons were still going to show up, maybe just like all those frameworks and workflows we have right now. (Seriously, why are there so many NATIVE UI libraries? So many OSes? So many NATIVE programming languages?)
Yes, we ARE reinventing the wheel, but for a good reason: accessibility. All apps from every generation do similar things: typing docs, filling in spreadsheets, instant messaging, playing music... In fact, humans have ALWAYS done similar things - they wrote stuff and kept lists long before MS Office came along. The web is an upgrade, thanks to the better computing power we now have that allows "inefficient" non-native rendering. The "native" apps we have now can do their fancy new 2017 stuff. Maybe soon we will have a full-blown AutoCAD web version. Many native apps we have today are almost awesome enough - the natural tendency will be to make them more accessible.
> thesis [...] not talking about literally all web apps.
I think all my problems with this article are summed up in that paragraph. It's not so much that there are no problems (there are, of course); it's that they're not the problems the author cares about, and so he goes into this exaggerated rhetoric.
Safe to say, I remain unconvinced about needing to kill the web and I'm sure whatever this guy suggests in the second part will have its own share of problems, probably even the same ones. Because, as it turns out, the web isn't that special in that area; every platform has problems, some identical.
Also, his description of the state of the art in the 90s suggests that the guy isn't that familiar with desktop development today (kind of surprising, given what the author has worked on). Safe to say, efficient binaries that run in just a few MB of RAM are either (a) not what happens today or (b) haven't changed much in the last 20 years, depending on what application we're talking about.
Also when did the web lose on mobile and when did developers "near universally" choose to write native mobile applications?
Excellent article. It's time to scream aloud that the emperor has no clothes. This farce has gone too far:
Most of the facilities for implementing a web app started as quick and dirty hacks: abuse of HTTP forms, DOM manipulation, etc. All aided by JavaScript, itself a hack (its creator was under heavy time pressure to deliver a language).
We've built a whole empire using these flawed pieces.
I'm working on one possible solution for application developers who don't really have much experience working with the complicated pipelines involved in building modern web applications.
It also makes it easier to support users without javascript enabled.
I'm calling it a Web Application Scripting Language, basically it's a template language to build interactive client-side applications that can also be rendered server-side (with actual data) without a javascript interpreter on the server.
I just pushed an updated documentation site which includes a mostly-functional TodoMVC demo.
The scripting language aspect of it is similar in some ways to elm, but does not expect a developer to be familiar with topics such as type theory and monads.
This version uses Redux and IncrementalDOM, but the actual functionality it's using in those libraries could easily be replaced with something smaller and more focused on the rehydrated HTML use case.
The web became as popular as it did precisely because it sucks. A lot of the supporting technologies made design decisions that favored ease of use over stability and the things that experienced developers like. Every argument I hear about how the web sucks basically boils down to an argument that it should be more consistent and well-designed. But most well-designed technologies fail, because engineering doesn't win the day -- delivery does. I say all this even though I consider myself an engineer with a penchant for the craft of writing software. I love beautifully architected systems, but I have to sadly admit that they're often not relevant to the bottom line.
I don't think it's time now or in the near future for the web to die. Just as we often still mindlessly adhere to the 80 character limit for terminal width, even though this limit has its origins in the size of punch cards, we'll still be using traditional web stack technologies decades from now.
I wished the author had taken a more developmental view.
Personally, I think the web as it currently is, has a few shortcomings, but on the whole I feel it can be refined into something quite brilliant.
Also, with regards to the title "it's time to kill the web app", I feel the web app has only just started to emerge as a solid competitor to native. To kill it now would be a travesty! I believe that soon it will replace a large chunk of native apps and the innovation we will see in browser APIs in the next few years will be quite remarkable. I think the USP for the web is ubiquity and uniformity - having a single, uniform platform that runs on any machine while being unlimited in the variety and nature of things it's capable of. There are issues to be addressed (security being the obvious one), but still, to me web apps are a step in the right direction, not the wrong one!!
annoying clickbait title and irrelevant trolltastic comparison to '90s windows garbage designed to incite anger.
1) yes, large webapps are hard to secure. but they're also infinitely easier to patch.
2) yes, the fact that google just shits out random things so people can get promotions (SPDY, NaCL, whatever) and it becomes a thing is a problem. this was not how the decentralized web was designed... but that doesn't mean that it's "time to kill the web."
Yeah, I have been working on the web for 10 years. When I started, it was fun, because I knew nothing, and that seemed magical to me, but the more I work on this platform, the more I realize it's crap.
We are now at peak crap. Did you see all this JavaScript bullshit code needed nowadays to just render a fucking web page and fetch some data?
I hope OOP dies too. I mean, it's often over-engineered bloat that only works for trivial Programming 101 courses (using Bike and Vehicle classes). In the real world, I've found OOP makes things messy, with pseudo-objects like Service, Manager, (Abstract?!)Factory and so on. Just using params and functions feels more natural, I think.
Sorry for the rant. I think we all do an unbelievable job pushing these tools to their limits, but it just makes me sad that we may be building on ugly foundations.
From an engineering standpoint there are surely some valid points here but I have to say that I don't think things are all that bad.
HTML, CSS, and Javascript separate the layers of a web app fairly nicely.
It's all free for the learning and using and even distributing, and it comes with a huge community to lean on for support where many, if not most, of any questions you might have are already answered.
Those parts work pretty well and have a huge user and developer base. You don't just toss that out and tell everyone they need to start over. To even imagine that you first have to ignore the real value of it, which is truly immense. So much so that in reality you cannot ignore it so whatever you do has to be compatible with it, or at least accommodate it.
I have to admit this leaves me curious about what "Part 2" will offer.
> HTML, CSS, and Javascript separate the layers of a web app fairly nicely.
Except they don't. Most devs aren't even aware of which divs they're using purely for styling versus the semantically correct ones. They think all the HTML they write is by definition semantic.
Then there are the less frequent but even more insidious cases where css is used for content.
Seems to me that's a feature, generally described as "more than one way to do it", and the goal is getting the app working and shipped, not appeasing some outside critic's sense of semantics.
Styling separable from content is the direction the W3C wants to go. So being aware and adhering as much as possible should improve performance and the long-term maintenance burden.
I agree the tradeoff probably isn't worth the extra time needed to carefully structure your html and do css acrobatics.
I just get triggered when someone claims the holy trinity idea works well with HTML/CSS/JS. It doesn't, but that's OK.
I agree with the premise but the problem is developing apps in any other ecosystem decreases your overall audience. You're either in the corporate world with their walled gardens, or in the FLOSS world where there's all sort of rough edges and things don't work as smoothly (and I say that with nothing but respect and admiration for all the well-meaning hard work from folks that's gone into both ecosystems, but it's the stark reality of matter).
The web is as close as we've gotten to making tech user friendly and accessible. I bet Mr. Hearn has a bunch of technical proposals lined up for part 2, but how do we cross all the factionalism, corporate or ideological, that's formed in the software community at large since the birth of the web?
If you're trying to prove that a platform is unfixable and present as evidence the assertion that it's reinventing things from a previous era, you're doing it wrong. Even assuming it's true, reinventing in no way makes the platform unfixable. Quite the opposite; you're asserting it's being fixed.
If you're trying to prove that a platform can't be made secure and present as evidence security issues that have been made into non-issues (SQL injection, XSSI) you're doing it wrong.
If you're arguing that it's time to start over from scratch don't criticize things that could be fixed without starting over from scratch, e.g. lack of a binary RPC format.
It's really not. HTML and CSS are far from ideal for making applications. JS has gotten better, but it's still lacking in some ways. And there is no IDE for the web, whereas Smalltalk had one in the 70s, and numerous ones have existed for other platforms since then.
It's an IDE for a programming language, which is only part of the picture. Compare it to what the Flash IDE or Visual Basic offered developers. Or Smalltalk in terms of a fully customizable, live environment.
Maybe something that's a cross between Developer Tools, Flash Designer and the Smalltalk environment.
Anyway, VS Code isn't the development environment for the web. It's just one of many options for writing Javascript.
Yeah, I've tried it. I've also used Adobe tools and Visual Basic before, and some web layout/theme builders. So it's the IDE plus the visual building environment, with tools for animation and what not.
I also use Jupyter Notebooks, and having a rich REPL is great for prototyping and exploration.
So maybe something like the Smalltalk environment, where the environment is a fully customizable web browser, and the language is your choice, compiled to WASM or JS.
Obviously for other reasons, like the fact that everyone has a browser for free on their device, and everyone knows how to use Google. There are very strong network effects in favor of the web.
It is a great platform in some ways. But it's not ideal for creating applications.
The live environment is the page. You can tweak any CSS value and have it update live, or save to a file. You can run breakpoints to examine code execution, examine call stacks and use the console for input and output.
It's really no different from developing in IntelliJ or similar. The only difference is you don't need to hit "compile" first.
At my last job, the app we were creating had both a web frontend and a Qt ui, so it was pretty easy to compare the relative difficulty of the two approaches since we were generally doing the same thing in both. Honestly it was kind of a wash. Qt was a little nicer as a developer because it meant never leaving visual studio, but I can't say that Qt offered some great advantage over HTML/CSS in terms of UI paradigm. I'm skeptical of this premise in that, yeah, the web isn't that great a development platform, but the alternatives aren't significantly better overall.
For the last 20 years I preferred web apps over native (C++, Java, C#) for banking and trading applications (although most developers didn't like fighting browser incompatibilities).
2 years ago I changed my opinion. We developed two different user interfaces for trading at the same time: one with HTML5, TypeScript, Angular, and WebSockets, and the other with JavaFX. The development of the JavaFX-based application was way cheaper and faster, pleased users more (due to multi-window!), and had an order of magnitude fewer glitches in the UI. (But the web application was prettier.)
If, as the article (and much of this thread) seems to suggest, the problem we are trying to solve is:
1) Serving a GUI application to multiple users in a way that they can trust
2) Maintaining up to date versions without client side updates
3) Storage of data on a remote server over the internet, enabling saas etc.
4) Easier GUI development using an IDE.
This points to needing something more akin to Citrix/RDP/Terminal Services: run full-blown GUI apps on the server and serve an image of them over the network. It needn't be as bloated as the MS implementation, but it seems to solve the issues above.
He's spot-on with a lot of his analysis, including security around REST/JSON. A few weeks back, I made similar, though less-detailed comments on another HN-thread, particularly in a SPA context.
The responses I received were essentially "it's not a problem if you do it right". But, of course it's inherently less secure when you now have data flying off of the server to be rendered on the client vs. consuming it all on the server and rendering the view there. You're kind of doubly-exposed.
It's not that there are no techniques for attempting to secure it. It's that it adds more complexity and that it's easier to leak data to your client (or an unsanctioned client) without realizing it. Because, of course a REST endpoint just sitting there on the open Web, intended to serve up raw data to an app is less secure than an app that holds on to its data and serves up text/html requests. So, with a SPA app, you'll find yourself doing a lot of things twice (client and server side), and that includes security.
The arcaneness of the techniques for securing all of this that he mentions is also accurate. It just amplifies the problem. The Web was not designed to be an application platform, let alone a secure one. It's hard to ignore this fact in any earnest discussion.
The "unfixable design flaws in the web platform itself" that enable HEIST attack are trivially fixed by disabling third-party cookies, on the client side — in the browser's settings, and on the server side — by using SameSite tag in the Set-Cookie header.
I'm just left wondering why browser vendors don't apply this behavior by default, it would've been much cheaper to fix the broken sites than mitigating the security hell that 3rd-party cookies provide.
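The server-side half really is a one-line change to how the session cookie is issued (a minimal sketch with Node's http module; the cookie name and value are placeholders):

    const http = require('http');

    http.createServer((req, res) => {
      // SameSite=Strict tells the browser to omit this cookie from any
      // request initiated by another site, which defeats CSRF and the
      // cross-origin request patterns that HEIST-style attacks rely on.
      res.setHeader('Set-Cookie',
        'session=abc123; SameSite=Strict; Secure; HttpOnly; Path=/');
      res.end('ok');
    }).listen(8080);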
You would think text documents would work wonderfully, but the weirdest parts of HTML [to me] are...
1) The rendering of things outside the viewport. Given the way wrapping of divs and lines works, it seems impossible to make HTML render the way PDF did from day one(?). I don't even know if there is a maximum length for a PDF, but in HTML you really should try to stay under 20,000 lines. I know, it seems like I'm splitting hairs here, but before the early 90s you could easily scroll through the entire memory of the system as if it were a single document. The amount of code was a lot smaller than the infinite-scroll web page requesting pre-cut chunks of XML or JSON, where you get to manually measure the size of elements and then do crazy calculations with the scroll offset if a new element needs to be inserted above the stuff the user is looking at.
And 2) not having a nice way of doing a reference section summarizing the stuff linked in a text kind of ruins the joy of having links in the text. One ends up building a kind of disposable experience if the text is long enough.
Combined, it's like having a pile of pages that are all the entry point. This is lovely for short reads but far from what books used to be.
I suppose the ultimate creation is one such article that promises everyone a fire-breathing pony with laser eyes in the follow-up to come.
When you write one, be sure to rage at the thing everyone loves and to suggest abolishing it - to be replaced by that pony you will get in the next episode!
I, for one, am all ready to be disappointed by this holy grail of subscribe-baiting.
> The fix: All buffers should be length prefixed from database, to frontend server, to user interface.
If you think that that is a solution to anything, you must be living in a universe where ASN.1 implementations have not ever had bugs, in particular they must never have had any vulnerabilities.
It's certainly not the universe that I live in.
In that same universe, packet sniffers/protocol dissectors probably also never had any vulnerabilities due to blindly trusting length values?
If we're going to kill the web, can we start by forming design principles instead of retrofitting patches on the existing design?
For instance, a great principle would be to minimize shared knowledge. Having a giant pre-shared base (i.e. browser) not only restricts what I can build and how, but also stifles evolution and innovation, because the whole world has to be upgraded to the same ginormous 'standard' for us to talk to each other.
The Web is a failure: cheap tool distribution by attention seekers, distracting possibly greater talents into short-lived skills or failed projects.
We mostly lack use-case qualification, support metrics, documentary weight or programmatic contract between tool makers and users. None should take commercial interest in tools without references, tutorials, cookbooks, i18n support and road maps shared between stakeholders.
The ecosystem of Unix and Windows literature was exceedingly well written and stable. Backward compatibility was important or breaking changes well flagged. Real software for profitable operations (not entertainment) requires stakeholder analyses for costs, adaption cycles, business models, risk and so forth (no screens or code yet) taking weeks or months even in "move fast startup" mode. Chasing tooling and testing against API changes in a code dump is unwise.
Sadly, pop promotional blogs make breathless lottery-winning with "baling wire and chewing gum" seem desirable or probable. That's nonsense. We never need to hear more "dorm room miracle" stories or "DB2 rewritten in Forth" fantasies. Anything rushed is a bad bet for talents wasted on shifting sands. The use-cases absolutely matter for tool applicability in a multi-stakeholder (contract obligations) profitable operation. Everything else is a distraction.
I suspect voices of endless tool fetish, "exploratory programming," consumer gaming and "content scripting" might disagree. They do not matter. Those people suffer special needs. The Web as a tool platform has mostly failed. The most "convenient" tools for people measuring fake productivity in keystrokes have failed the hardest. [Edit: typos]
I deleted Facebook from my iPhone because I don't want to give them that much access, and now I only use their web app on my iPhone. And it's horrible. If Facebook can't get it right, who can?
My latest example is from 30 minutes ago, when I tried uploading a video and almost gave up after 5 minutes. The UI was hard to understand, extremely slow, and never let me understand what was happening.
Professionally, I'm an iOS lead on a dual-platform (iOS & Android) app that has hundreds of thousands of users. I've occasionally wondered whether we should switch to React Native, and using the Facebook web app always cures me of that idea.
Add to that the fact that I've spent hundreds of hours optimizing our app and every view to ensure they launch and open as quickly as possible, down to managing every bit of memory use as ruthlessly as possible. And letting our users use our app offline. Every second costs us users and dollars.
Then there are new technologies like ARKit. Native is still the best way to go if you can afford the time and people to do it the best way possible.
2) a secure sandbox in which to run untrustworthy code
3) distribution without gatekeepers
Until then every other platform is playing catch up.
And maybe in the meantime ask yourself why is the web so popular if it's so bad? Is it just complete stupidity, or is there maybe some form of natural selection happening and you're not understanding the fitness function?
"...unless you work at Google or Microsoft you can’t meaningfully impact the technical direction of the web"
I think this is a great argument for why we need a (for lack of a better name) "meta-browser". An application on the user's machine that contains and runs browsers. Then flip the control to the developer. If I'm only going to design for [name of obscure but super secure browser], my success doesn't have to be dictated by the fact that 99.99% of users didn't originally open my browser of choice. If they come across a page only supported by this little-known browser, they are prompted that they can install it, or they can decide to move on to the next website if the developers didn't write any fallback.
This doesn't just ensure the web can remain open, but makes the whole architecture (the web itself) an open question and allows all aspects of "the web" to evolve more smoothly.
This reads like an article about web applications written by somebody who doesn't have any meaningful amount of experience writing web applications (which I've just seen he admits further down in the comments). The number of false statements and false assumptions in just the first section is enough to make it hard for me to continue reading, since it's supposedly the foundation for the rest of the suggestions.
> Web apps can't use real sockets.
Because security is an important consideration, and websockets are designed to live within the constraints of the same-origin policy, which helps immensely in creating apps that are secure by default.
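Concretely, the browser attaches an Origin header to every WebSocket handshake, and the server decides whether to accept it (a minimal sketch assuming the popular ws package; the allowed origin is made up):

    const WebSocket = require('ws');

    const wss = new WebSocket.Server({
      port: 8080,
      // Reject handshakes that didn't originate from our own pages.
      verifyClient: (info) => info.origin === 'https://app.example.com'
    });

    wss.on('connection', (socket) => {
      socket.send('hello from an origin-checked socket');
    });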
> Things as basic as UI components are a disaster zone. [Links to article about web components]
Web components are a failed/failing/doomed (IMO) proposed standard and are nothing but an implementation of the idea of UI components. The design of web components is about building component object hierarchies, and is doomed to fail (IMO) in a markup language built around content composition. Saying UI components are a disaster because (one guy says, though I agree) web components are a disaster is like saying "a square won't fit here, so obviously no rectangles will fit."
> HTML5 has peer to peer video streaming
No, it doesn't. Browsers support the completely-separate WebRTC specification and its related javascript APIs. No HTML spec says anything about WebRTC.
My suggestion would be that if you want to replace something, you need to actually grok it first, or at least have a sufficient understanding of the complexities (and the reasons they exist) you're trying to argue against. Otherwise it's way too easy to point out that you don't really know what you're asking to replace, and your opinions, while potentially valid, are going to be tossed with the rest of the bath water.
> Web components are a failed/failing/doomed (IMO) proposed standard and have nothing to do with the common UI component frameworks that exist today. The design of web components is about building component object hierarchies, and is doomed to fail (IMO) in a markup language that excels at composition.
1. Exactly
2. Instead of building a set of native (to the platform) common UI elements, the W3C ended up creating an incomplete low-level API for something, and no one knows what exactly.
3. Existing UI frameworks re-invent a huge amount of things, poorly and inconsistently.
This article addresses web apps at the micro level, comparing a React-based web app to a 20-year-old UI - the end result. It fails to address distribution. How did that UI end up on your display? Through 13 floppy disks. How did the React-based web app end up on your display? In the blink of an eye. Distribution is key. Real artists ship.
... I feel like I've read the same about terminal apps, tons of programming languages, etc. and it's again missing the point.
Nothing is reinventing anything. The code you write these days in Go, JavaScript, whatever, still follows the same principles as in the 1980s. All we do is swap out tools and languages and add comfort.
I agree that the web is kind of a catastrophe. However, the problem is not really that it confused documents and apps. I would actually like to see this distinction get blurrier, in the form of better support for active documents, ranging from interactive illustrations of ideas to spreadsheets with formulas.
>This is why the web lost on mobile: when presented with competing platforms that were actually designed instead of organically grown, developers almost universally chose to go native.
almost universally?
mobile dev has enough inefficiencies and complications that, for many use cases, quite a few people today wonder whether native really is preferable to the mobile web, especially PWAs.
Some of the assertions from the author about how things were in the past are pretty off. Office 2000 wasn't happy with 75 MHz and 32 MB of RAM at all. I would say the average computer at that time was at least 200 MHz with 128 MB of RAM.
In addition, in 1995 developer "platforms" were rarely Windows-based. Borland was still hugely popular at that time, and DOS-based compilers were still big. The assertions he makes about the developer platforms are a complete joke: "Support for graphing of data, theming, 3D graphics" were completely not a thing, nor was "Sophisticated support for multi-language software components".
I'm pretty sure the author didn't develop back in 1995.
>But Office 2000 was happy with a 75 Mhz CPU and 32mb of RAM, whereas the Google Docs shown above is using a 2.5Ghz CPU and almost exactly 10x more RAM.
The pixel count of that win 98 screenshot could probably fit in the top left corner of the menu in google docs.
Just my opinion, but developing for the web is much easier than building native mobile applications. This is a good thing from my point of view. Native development has some catching up to do in this respect.
> All this adds up. I feel a lot more productive when I’m writing desktop apps (even including the various “taxes” you are expected to pay, like making icons for your file types).
That made me remember something: compatibility. I like web apps because they will run on my Linux machine and on all other people's Windows machines. What is your solution to that, angry guy from the article?
Also, please provide solutions to:
* the dangers of installing other people's software on your computer -- dangers that are practically nonexistent in web apps;
* the friction of getting people to try out your apps.
I'm surprised no one has mentioned same-site cookies. They would help a lot with privacy and mostly solve BREACH, CRIME, HEIST, CSRF, MIME confusion, JSON issues, etc.
It wouldn't solve XSS or SQL injection, but native apps are just as vulnerable to SQL injection as html5 apps, so I'm not sure why the author brings that up.
I'm curious what the author will propose as the solution.
> Hence my conclusion: if you can’t hire web devs that understand how to write secure web apps then writing secure web apps is impossible.
Furthermore, the web devs that understand how hard it is to write secure web apps generally don't want to do it for a living.
I've found that the most straightforward way of avoiding problems on the web is to sidestep them as much as possible. Draw UI with canvas or WebGL, overlay native widgets, use RPC over a WebSocket for everything except for public assets.
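Taken to its extreme, that approach treats the browser as little more than a framebuffer (a minimal sketch of a self-drawn widget; sizes and colors are arbitrary):

    // A self-drawn "button": no DOM layout, no CSS, just pixels.
    const canvas = document.createElement('canvas');
    canvas.width = 120;
    canvas.height = 40;
    document.body.appendChild(canvas);

    const ctx = canvas.getContext('2d');
    ctx.fillStyle = '#3367d6';
    ctx.fillRect(0, 0, 120, 40);
    ctx.fillStyle = '#ffffff';
    ctx.font = '14px sans-serif';
    ctx.fillText('Click me', 30, 25);

    // Hit-testing is now your job too -- part of the price of
    // sidestepping the DOM.
    canvas.addEventListener('click', () => console.log('clicked'));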
With things like new multi-threaded renderers, all browsers now being 64-bit, WebAssembly, service workers, faster processors, etc., it seems that app development with web technologies is finally practical.
But why are these things framed as an either/or? Surely the best thing is always to use the best technology for the job, and in many cases the speed and security trade-off is worth it in order to utilise existing assets and expertise from a business's web application.
The biggest advantage for a hobbyist developer like me is that web development makes a single code base possible. One code base runs on phones, tablets, and desktops, across OSes, etc. It would be impossible for me to write and distribute solutions otherwise: multiple programming languages, deployment overhead across app stores, and so on.
What I miss most is notifications. With PWAs I can cover Android, but I don't think anyone allows notifications from websites (I don't).
The fact that this article is written and distributed with Medium (a web app) invalidates everything the author says.
It's working fine, you are using it, and millions of other people are happily using it. Web is great for users because they can do arbitrary operations with just a browser without needing to install anything.
I don't know about you, but I build web apps to help users, so it's okay for me to suffer a little pain to make users happy as a developer.
Security is probably not the exact same thing as it was on desktops. However, anyone who's been burned by a JavaScript miner (like on The Pirate Bay) or whose data is mined for ad revenue knows that the web exposes you to a series of lesser assaults that we typically shrug off.
Just because breaches in privacy and security are not as flashy as waking up to a computer which won't boot doesn't mean web apps are secure.
We definitely don’t have a replacement for the web on the presentation layer. Likewise, we’re stuck with web services for now for the display services sitting immediately behind the frontend. But for pure backend services, there are plenty of alternatives to HTTP that are ready to use today and more sane. There just isn’t a ratified “standard” yet (outside of large tech companies, which tend to have their own).
> The fix: All buffers should be length prefixed from database
How does this fix anything? When you compose length-prefixed data (in e.g. arrays or nested structs) you still have to check that the length and offset fields are coherent. If the length fields are passed over the wire from the user, you still can't trust their values.
Binary protocols don't solve this problem; they just make the validation less CPU-hungry.
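To make that concrete, here's the bounds check a length-prefixed decoder still needs (a minimal sketch in Node; the 4-byte framing format is made up):

    // Decode one [4-byte big-endian length][payload] frame.
    function decodeFrame(buf) {
      if (buf.length < 4) throw new Error('truncated header');
      const len = buf.readUInt32BE(0);
      // Without this check, a hostile length value walks right off the
      // end of the buffer -- the classic dissector/ASN.1 parser bug.
      if (len > buf.length - 4) throw new Error('length exceeds buffer');
      return buf.subarray(4, 4 + len);
    }

    // decodeFrame(Buffer.from([0, 0, 0, 2, 0x68, 0x69])) -> <Buffer 68 69>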
This is a WEB APP https://3d.delavega.us using three.js. It can run on most iOS and Android smartphones, most Windows and macOS machines, and Linux computers.
It is likely to run on over a billion devices.
It should just take a few hours to a few days to code depending on your coding skill level. Did I mention no installation required.
Can a non web app or native app be better than this?
I wonder if there's a market for a WebApp store. Like a curated WebApp library - centralized billing, some form of vetting before apps can be listed, maybe even some apis for notifications or whatever. WebApps tested on a few major browsers and platforms before they're allowed to be listed. I'd probably be more willing to pay for WebApps if they were delivered that way.
Web apps may not be as good as you wish, but I think GUI app development, whether on desktop or mobile, is definitely worse. Otherwise there wouldn't be a market for hybrid apps. For those desktop app guys who haven't heard of Vue: spend some time going through its guide, and you will find web development is actually much more advanced than anything you are using.
I almost signed up for that site, only so I could respond to that article. Why kill the platform that can be accessed from devices available to everyone, everywhere? Cross-platform support in applications is taxing; even with an engine like Unity that offers multi-platform builds, if you want to get a product into the hands of the world, you build it on the web.
There is only ONE major thing that's bad with web apps: you have to trust the server. Which makes them unusable for truly secure applications like bitcoin clients. Because you can always say you didn't authorize an action!
I think the future is IPFS and other content-addressable protocols. Why aren't browsers adding them to the web alongside https?
The web is our best bet to converge to a single platform.
I find it extremely intellectually unfulfilling to write my code for three platforms (web/Android/iOS) instead of just one. So until there is a better common denominator, the web is my first choice, because at least it is accessible from all three platforms, and from the desktop.
Agreed. This is why I'm retraining myself on C# in 2017. Native is better, and the web should remain a primarily lean and fast text-document delivery system. I'm loving C#/.Net so far; it's almost as good as Java server-side, but it has the (well-supported) ability to natively produce iOS and Windows apps (the two most important platforms IMO). One-stop shop, and its output is native code. It'll also support wasm when that's a widely used thing. Having to learn one platform like Java or C# and work off it for the rest of my career is a bonus too. I've had it with technology that's fashionable one year and gone the next, and I certainly don't want to maintain these leaning-Tower-of-Pisa stacks. Not to mention, striking a similar chord with the author, we have learned a lot of lessons over the decades. Part of that lesson is that industrial-strength tooling (IDEs) and typed languages are important.
C#, Java or discard IMO for most greenfield projects. At least if I'm to maintain it longterm. Others may differ and that's fine, but at least all of this is my philosophy at this point with everything that I've seen and experienced. I'm really looking forward to his part two.
Taking a step back to look at the big picture, does the current evolution of js, html and server technology seem to be headed somewhere great? Seems to me we will soon need large AI stacks just to assist in reading these jumbled stacks of code.
I would have thought software would evolve toward simplicity by now.
One advantage of the web is that you can browse, discover, and test new apps directly inside your browser without needing to install anything on your system. Imagine if you had to install software every time you visited a new interactive website or checked out a new web app.
I'm curious to see the second part of the article!
First of all, there is no such thing as a "web app"; these are called websites, so it makes sense that the author would rather have apps for everything.
Yet the web has no alternative for what it's supposed to do, so let's not try to fix it, as it is not broken.
And the possibilities of web technologies are limitless.
On the contrary, from the user's point of view the vast majority of mobile apps are inferior to a webapp. Downloading apps instead of pointing a browser at an address, while giving all kinds of control (permissions and access to the device itself) to app publishers, is just silly.
> "Buffers that don’t specify their length"
This is a good thing; you just need proper escaping. To parse data with a length field you need a higher-level automaton (maybe even a Turing-complete parser) than you do for delimited or quoted/brace-enclosed data.
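A toy sketch of the contrast (my own illustration, not from the comment): the delimited parser below is a simple state machine, while the length-prefixed one has to count characters, which a plain regular expression cannot do.

    // Length-prefixed: "5:hello"-style records; the parser must count.
    function parseLengthPrefixed(input: string): string {
      const colon = input.indexOf(":");
      if (colon < 0) throw new Error("missing length field");
      const len = Number(input.slice(0, colon));
      if (!Number.isInteger(len) || len < 0) throw new Error("bad length");
      const body = input.slice(colon + 1, colon + 1 + len);
      if (body.length !== len) throw new Error("truncated record");
      return body;
    }

    // Delimited with escaping: "," separates fields, "\" escapes; a
    // two-state machine (a regular language) is enough.
    function parseDelimited(input: string): string[] {
      const fields: string[] = [];
      let cur = "";
      let escaped = false;
      for (const ch of input) {
        if (escaped) { cur += ch; escaped = false; }
        else if (ch === "\\") escaped = true;
        else if (ch === ",") { fields.push(cur); cur = ""; }
        else cur += ch;
      }
      fields.push(cur);
      return fields;
    }

    console.log(parseLengthPrefixed("5:hello")); // "hello"
    console.log(parseDelimited("a,b\\,c"));      // ["a", "b,c"]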
What if Facebook provided a way for React Native apps to be compiled to Windows and Mac desktop applications? If such apps were lightweight and could be automatically updated, would React Native development be a serious alternative to web app development?
Why web apps? Because you can hit just about every platform with one codebase; you get instant distribution and near-universal compatibility. And, if you're so inclined, minimal licensing.
Desktop has big cross-platform compatibility hurdles (does it run on Linux, macOS, Windows, Android, iOS, and PlayStation?), difficult distribution, and at times licensing headaches (even more so on walled-garden platforms).
You are also dependent on the desktop/PC manufacturer (ever had Microsoft, Apple, Google, etc. pull the rug from under your product? It happens regularly, especially with OS updates and "trusted" platform initiatives), not to mention the whims of the development platform ("Sorry guys, we are selling to MS. We're sure they'll keep those macOS and Linux versions up to date, they are really excited about it!").
Maybe if it's an open platform with high adoption on the scale of LibreOffice or GIMP... maybe.
Ideally browsers would block cross-domain requests by default (so no XSRF would be possible), but sadly this would break compatibility with older sites. Maybe we should introduce new HTTP methods (like SAFEPOST) with built-in XSRF protection and switch new apps to them?
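You can approximate that built-in protection today with SameSite cookies plus a double-submit token. A minimal sketch, assuming Express with the cookie-parser middleware (the /transfer endpoint and cookie names are made up for illustration):

    import express from "express";
    import cookieParser from "cookie-parser";
    import crypto from "crypto";

    const app = express();
    app.use(cookieParser());
    app.use(express.urlencoded({ extended: false }));

    // SameSite=Strict tells modern browsers not to attach the cookie to
    // cross-site requests, which kills the classic XSRF setup on its own.
    app.get("/login", (_req, res) => {
      const token = crypto.randomBytes(16).toString("hex");
      res.cookie("csrf", token, { sameSite: "strict" });
      res.send(`<form method="POST" action="/transfer">
        <input type="hidden" name="csrf" value="${token}">
        <button>Send</button></form>`);
    });

    // Defense in depth (double-submit): the form field must match the
    // cookie, which a cross-site attacker can neither read nor set.
    app.post("/transfer", (req, res) => {
      if (!req.body.csrf || req.body.csrf !== req.cookies.csrf) {
        res.status(403).send("XSRF check failed");
        return;
      }
      res.send("transfer accepted");
    });

    app.listen(3000);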
I'm showing my age here, but I think Sun had the right objective with Java Web Start. Unfortunately their implementation was awful - bloated, slow, ugly, with a complicated API and poor security.
I think there is still an opportunity to do it right.
I was about to say, it's time to kill monolithic native apps that live behind DRM'ed App Stores, can't talk to one another effectively, can't be composed, must be explicitly managed by users lest they run out of storage, and have an enormous transactional cost to trying them out.
Or I could point out that explicitly designed application protocols with native clients over the years have also shown themselves vulnerable to attack (e.g. IMAP, SMTP, etc), or that most of the attacks on the Web have not been XSS/XSRF but server-side hacks. Or that Android's native app platform is full of malware and viruses that even Google hasn't been able to completely eliminate with deep scanning.
Is the price of security that we throw away the Web and HTTP and implement everything as siloed, iOS-style monolithic apps? That's a price too high to pay, in my opinion.
Personally, I vastly prefer the iOS ecosystem precisely because apps are siloed and sandboxed. I have never had to worry about whether I can store data securely on my phone, never had to worry whether any particular app I've installed is going to do something nefarious to the underlying filesystem or to other apps, and never had to bog down my device with (largely bullshit) anti-virus / anti-malware apps or anything like that.
iOS apps Just Work™, and incidentally the "transactional cost" of just tapping a button once to install any particular app hardly seems onerous to me.
And the fact remains, as is always the case with these debates, that the uncountably vast majority of users could not care less about the underlying technology that lets them play Candy Crush. Nobody is ever going to build a mass-market consumer product targeted specifically at people like you or me regardless of what preferences we have. All users want to know is that their device works the way they expect it to work and is as easy as possible to manipulate.
Having said that, I'd be interested to hear you expand further on what price you think you're actually paying and why you consider it to be too high. What is it about the iOS model that's holding you back so much, and how do those drawbacks outweigh the benefits? In concrete, real-world terms, what is your use case?
Web surfing is called "surfing" for a reason: like channel surfing, you never have to worry about the cost of an install, and there are very few barriers to moving between apps. You flip effortlessly between sites, and simply hitting the back button or closing the tab ends the experience. When Google serves up a search result, there's never any fear that a click is going to require work in the future.
That is not the case with a native mobile app. Installing an app is a promise to perform janitorial work in the future cleaning it up. It creates needless shitwork that the browser cache model does not impose on the user. Native apps take up permanent screen real estate and storage. I know plenty of people who eventually run out of space and then have to go on a spring-cleaning adventure to delete unused stuff.
Do you have a 16 GB, 32 GB, or 128 GB phone? You probably know and care when you buy one. But you probably don't know how big your browser cache is, nor do you care (or need to).
Likewise, if you switch devices, in a browser you do nothing. Quite literally, you can drop your Chromebook in a river, open up another, and proceed almost instantly. Do iOS backups and Handoff/Continuity deliver the same experience? No.
Web apps are ephemeral by nature. They can be cached, but they don't need permanent installation to work.
Web apps can be composed easily, because links are relatively transparent; as a web programmer you actually have to do extra work to hide links from people (and Web 1.0 made it impossible). This means connecting one app to another is a benefit, not a hindrance, and the Web was somewhat self-documenting of its integration points. Deep linking in native apps is nowhere near as advanced, and navigating between native apps with deep links is very clumsy.
Web apps also tend to support encapsulation better. It's far easier to embed a third-party resource (an image asset, a gadget, a banner) than it is in any native app model. Look at the way people embed status indicators on GitHub pages, then try to find the equivalent in ANY native UI that doesn't work by just embedding a Web view.
Let me give you an example of a world that is simply not possible on iOS. Let's call it Augmented World. In the future, I walk around with a phone, or special goggles. When I turn on the camera, any physical object in my world, any location, can have code associated with it. Perhaps I walk up to a vending machine, and the mere act of observing it presents me with a 3D menu overlay that lets me order or pay. Or perhaps I walk into a restaurant or cafe, and when I look at the tablet, I can see an interactive menu, perhaps even an NPC interactive hostess.
iOS today is a HUGE barrier to location-based commerce. Do I want to install a new app every time I walk into a Chipotle, Five Guys, or Starbucks? Why do I need one app per white-label store? And do you realize how this just doesn't scale if mobile commerce progresses to the point where every brick-and-mortar shop has a separate app? China solved this by just not using iOS, instead using QR codes, WeChat, and JavaScript/HTML embedding.
If you want any kind of ecosystem that can scale to single-use non-repetitive experiences, like the small shop, it cannot be a system which effectively bans dynamic code loading and execution.
iOS is nice for games and for certain high-performance productivity apps, the same way I still run native apps on Windows for, say, Overwatch or Adobe Premiere. But native apps are overkill for something you will only use once. Android Instant Apps is the closest thing to the Web for mobile that can handle these kinds of experiences.
If you want the kind of cyberspace envisioned 20 years ago in sci-fi novels, of exploring a vast network and teleporting into experiences effortlessly, it ain't the App Store you see in Neuromancer or in Vinge's novels.
How about using an app:// protocol for compiled apps designed with security in mind, meaning sandboxed and with no filesystem access? HTTP is for hyper TEXT, not hyper BYTES.
Wasm is a great opportunity to reinvent web apps. Please don't fuck it up.
Only the giant companies have the power to recreate the web. If they do it will be first and foremost better for their businesses.
I don't even want to think about the influence of the secret services of the world.
The author doesn't even mention the reason web apps became prevalent, even in B2B and enterprise: deployment. Not having to deal with upgrading hundreds of workstations to the latest release.
The fact that I couldn't read this because it redirected me to a Medium web app that I wasn't logged into told me everything I needed to know about the article.
My understanding of HEIST is that it is completely defeated by disabling compression on dynamic content. That doesn't seem to be "the end of the web" to me...
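For reference, that mitigation is cheap to apply in most stacks. A hedged sketch using Node's compression middleware, compressing only static assets (the path prefix is my own example, not from the comment):

    import express from "express";
    import compression from "compression";

    const app = express();

    // HEIST/BREACH-style attacks infer secrets from compressed response
    // sizes, so compress only static assets and leave dynamic,
    // secret-bearing responses uncompressed.
    app.use(compression({
      filter: (req) => req.path.startsWith("/static/"),
    }));

    app.get("/static/app.js", (_req, res) => res.send("// big bundle"));
    app.get("/api/account", (_req, res) => res.json({ balance: 42 }));

    app.listen(3000);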
Silly author: sandboxed binary apps won't solve the problem. Your whinges about client-server communication are inherent to, you know, clients and servers. Whether the thing on the client side is a real app or a web app is irrelevant; what matters is the communication. Imagine if that channel could only carry keyboard/pointer input events to the server and static images in return. Yep, that would be pretty robust.
There is a section heading that reads, “Why the web must die.” I almost stopped reading there. I value the web for its longevity, accessibility, and non-proprietary nature.
There are some good points made in this article, however:
> My experience has been that attempting to hire a web developer that has even heard of all [the above-mentioned security] landmines always ends in failure, let alone hiring one who can reliably avoid them. Hence my conclusion: if you can’t hire web devs that understand how to write secure web apps then writing secure web apps is impossible.
I've been talking about this for ages, but I'm glad people agree with me now. The web is a really shitty application platform. Let's do something about it, eh?
Nothing you've written here is surprising to me. I've never found the web app drumbeat compelling. The insistence that it's the be-all and end-all is myopic and usually the domain of those who are trying to protect turf rather than create great experiences that are secure, powerful, pleasing, and fantastically useful.
Thank you for stepping up as a full stack developer and presenting a case without obvious bias in favor of the one web to rule them all. I appreciate it.
John-Michael Scott, a guy who's been around a long time and watched this all evolve...
The internet existed long before the web, and there are protocols other than HTTP: TCP/IP, UDP, SSH, telnet, etc. Desktop applications are still built that use the internet for communication without trying to ram everything through an HTML document. And they're usually far superior to their webapp versions.
But now many people think that things should all be web-only, over HTTPS only. We built a palace of many ports and protocols, but we've locked ourselves away in one bedroom as our own prisoner. Despite a perfectly nice dining room with silverware and dishes, we instead scoop our food off the mattress with our hands because a firewall or NAT might prevent us from getting to the dining room.
Where the web does claim superiority, and the reason everyone now wants to use it to build applications that are totally unsuited for it, comes down primarily to four things:
1. Run-anywhere cross-platform compatibility. This could be addressed by better cross-platform compilation and cross-platform UI/UX for native applications. Most mature languages have that ability now, but it's not perfect; it's still in the "needs work" phase. Likewise, browser compatibility and responsive design are still not perfect, but they've come far enough to be workable. But run-anywhere fails if there's no internet connection or the servers aren't responding. Native still wins there.
2. Simplified distribution and updates. People like that no installation is required and the latest version of the software is distributed from the server every single time a page loads. But in reality almost all modern native software can be built with a simple 'click run to install' installer, and can handle routine updates fairly seamlessly. Native is still more efficient, it just has those two extra 'click to download' and 'click to install' steps. If that could be streamlined, native would win.
3. Ubiquitous acceptance of the network requirement. It's unthinkable to block ports 80/443 or HTTP/HTTPS, so anything can communicate that way. Programs that use other ports or protocols may have trouble with firewalls, NAT, and other middleboxes. It's kind of insane to limit everything to one or two ports and protocols; that needs solving. That's where the web really wins, and only because we've imprisoned ourselves.
4. Server dependence. This is not a feature for the customer, although "cloud storage and syncing" is sold as such. It means that the company making the software gets all of the data. If their servers are ever shut down, or even if you're just temporarily offline, you don't have access to your data. And if someone breaches their system, then your data is effectively public. Local native apps leave you in control of your data and can work with it offline, even after the company that made them goes out of business. True, your system can break (so keep backups) or be breached, but it's more under your control and less of a target than a system containing everyone's data.
Overall, the web is great for document distribution, but its only real winning point when it comes to applications is that we've locked ourselves into one single port and protocol out of all the ones available, and that happens to be the one the web uses. If we could solve that, internet applications could be worlds better. But no new JavaScript framework or CSS compiler will solve it.
You know what would replace the web app if it was replaced today?
Some corporate locked down solution subtly or unsubtly controlled by a single conglomerate or interest group.
Recently certain corporations have been whispering about replacing the web standards with something "better". At the same time as they have been pushing free our-platform-only "internet connectivity" in developing countries. I don't want to name names since multiple corporations are implicated but for the sake of simplicity let's call the imaginary placeholder company "Facebook".
At the same time, we literally JUST had a major split in the fabric of the internet, with the EFF leaving the W3C over DRM, and now this is the top-rated comment on Y Combinator?
Venting frustrations is one thing, but anyone seriously advocating for replacing the web standards at this moment in time is either ignorant, ethically bankrupt or a corporate shill. Yes I know: Your mental internet filter has been finely tuned through years of weathering forum flamewars to stop reading any thread after encountering the word "shill" but please let me explain.
This is the first time in the history of the world that humanity has achieved a single standardized application platform supported by all major devices! If that wasn't enough we now have amazing code collaboration tools like git(hub/lab/etc) and `npm publish`, to the point where the hardest part of writing a new web app often comes down to finding the right libraries and sticking them together. This is fucking amazing!
Today's web is a land of unicorns and rainbows compared to what any sufficiently pessimistic human being would have predicted when the internet began. The technology used by the world for most of its communications is largely based on globally accepted standards and open source software!(!!).
Keep in mind that this is despite a global economy that has been trending toward increased corporate control by a decreasing shortlist of major players. In short: Despite the fact that the rest of the world currently appears to be mostly made of burning garbage, web developers should be dancing in the fucking streets!
If there are problems with the web then please remember: It's still the early days of the web and we've only recently begun writing very complex applications for this platform. We'll keep improving what we have and every year things will be better, but it is also always going to be the case that humans will push technology as far as it will go, so if you feel like web technology always sucks then that just means that you're always working at the very edge of what's possible with the state of the art. Changing platforms won't change this fact and the bleeding edge will always be... bloody.
If anyone thinks that throwing away the world's only common application platform because "development is hard" is a good idea then maybe they should try writing a UI-heavy app supporting Android, iOS, .NET and *nix with one-click install and high security, without using any web technologies, and then come back and tell me that this is a better way.
Now let me predict the future:
What's going to happen is that Facebook will come out with some new app framework based on React (or React Native) which will compile to current web standards but also to the new "Facebook browser" (they won't brand it as a browser but rather as a new part of the internet that has been missing until now). They will get more and more people developing for this framework since it makes development less painful (at least for the younger web developers who are fresh out of their corporate sponsored bootcamp and have only ever tried this one framework) and when they get enough developer market share they will start adding more and more "facebook-only" features which will enrich the experience for people using their "browser". Keep in mind that I am still talking about a metaphorical Facebook. Maybe it will be a Facebook/Adobe/Amazon/RIAA/MPAA conglomerate "standards" initiative or some such multibeast.
Anyway: Because "Facebook" is actively developing this framework in-house at the moment they've been pushing public opinion against current web technologies in preparation for launch (honestly given who they are and their available resources they would be incompetent if they weren't).
They were planning to launch this cross-industry collaboration and framework after the W3C DRM incorporation failed to pass, using the fires of industry indignation to bootstrap a corporate replacement for web standards. But now that they have actually succeeded in undermining the W3C once, they will simply continue undermining web standards via the W3C, while the EFF and the rest of the world are left to attempt to start a new standards organization out of the ashes. And let's face it: the web standards were created when few people cared about web standards, and the feat would be very hard to re-create without heavy industry support now that there are so many powerful stakeholders.
I know this post will most likely be buried but at least I'll get the bitter satisfaction of linking to it and saying "I told you so". Or maybe I'll learn not to be so fucking pessimistic. Either way it's a win.
The internet is at risk of becoming the low-level plumbing beneath the snazzy house of the proprietary app world. With the advent of app-only companies and products, the internet as we knew it is slowly taking a back seat. The app world is fully in the control of its masters, and it is a very snobby world. The biggest irony of the sharing economy is that apps don't like to share, be linked to, or be looked inside. This world has no concept of hyperlinking, a basic premise of the internet. It is surely very un-internet-like. It all seems designed to lock users into a handful of apps and make them so myopic that they don't even realize there are options.
Let me take a step back. The internet, in my view, is the ultimate manifestation of FREEDOM. Everything is/was free:
* Access to the internet is free once you have paid your ISP. Almost everything that has been digitized is available on the internet for free. You could change ISPs and everything still worked.
* There was hardly any government control over the internet. Governments wished otherwise, but it is designed in such a beautiful way, with very few central systems, that it is very tough to control (unless of course you are China).
* Real estate on the internet was also very cheap. You could buy a domain name for $10 and a cheap server for $5 and go online with your site.
* There was no limit on the number of sites you could visit. These sites could not steal your data; they could store some of their own data at your end, but not steal much. Once you closed the site, they could not send you any popups or notifications. They could not run in the background and monitor your activity, or track your location, speed, acceleration, etc.
* Better still, you could write blog posts which millions could read, and it cost you zilch. There were these things called RSS feeds, which made it unnecessary even to visit sites to read their content. You could just subscribe to the feeds.
* In fact, you could link to other people's property and it was encouraged. People who visited your site, could easily hop to any other site you linked to. You did not have to pay anything for it.
* HTML was written in a way that made even sloppy code work. HTML was so dead simple that anybody could make a site in it. No lock-in. Almost all code written for one browser worked in all browsers, and there were tonnes of browsers. This sloppy code could render on almost any device and browser. Again, no lock-in. You could look at the HTML, CSS, and JavaScript code of any site. It was free for all. The internet was the ultimate open source.
Maybe the internet was too open to make money on. So "they" invented the app world. The app economy is a dream for big companies: a huge user base, free and rich-media push notifications, the ability to harvest the ultimate user social graph (call logs, SMS history), and on-device sensors that enable stealing users' most personal data. Let's have a look at how this world compares to the internet.
* Internet fast lanes, internet.org, anti-net-neutrality deals. Enough said.
* Apps do share the internet, but are themselves in the control of the one company that makes them and the one company that distributes them.
* Apps have already made it impossible for a part-time hobby dev to produce and maintain 3-4 different apps. Hardly anybody I know knows both Obj-C and Java very well.
* Apps have made it difficult to have more than 20-30 of them on your phone; more than that and your phone is left with no space. Once those select 20 are there, you are locked into them. They steal your data and periodically push you notifications! And we just love them.
* We first managed to kill RSS. I remember there was a huge campaign at one time demeaning RSS, and then Google killed Reader for no apparent reason. Is the internet world a puppet show?
* Apps cannot link to other apps. You cannot link to a particular page/screen of a particular app in a generic way unless the other app wants it and allows it; there exists no generic way to do it. The standard way would be to talk to the other app's dev, sign a contract with them, and possibly even pay them. Linking is dead.
* Apps are not free. They are locked into a platform. If you want to port your code, you need to rewrite the whole code base. (Hybrid apps don't seem to be happening.)
I think we are witnessing the end of the internet as we knew it. Companies are suddenly trying to kill browsers and the generic internet, and trying to invent a proprietary, walled-garden internet in their place.
I think that most of what's being complained about here is that it's very difficult to write secure web apps that allow the typical business model of web apps to work.
If your business model is getting attention, sharing user data, and pushing ads everywhere, it's hard to leak some data to unknown parties without leaking all data to unknown parties.
It's not by any means trivial to completely lock down a web app. But it is possible to do it well enough that attackers don't bother with technical attacks; social engineering is an easier vector. And that will always be the case after you close a certain number of holes. That number gets larger over time, but as new attacks get discovered, good frameworks catch up and at least encourage you to close them, if not outright closing them for you.
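As one concrete example of a framework closing holes for you, a minimal sketch, assuming Express plus the helmet middleware:

    import express from "express";
    import helmet from "helmet";

    const app = express();

    // One call sets a suite of protective headers (in recent versions:
    // Content-Security-Policy, X-Content-Type-Options,
    // Strict-Transport-Security, and friends), each of which shuts a
    // class of attack the app author may never have heard of.
    app.use(helmet());

    app.get("/", (_req, res) => res.send("hello"));
    app.listen(3000);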
Something something it's unsurprising that a person doesn't understand a point you're trying to make when the person's paycheck depends on them not understanding it.
Same thing with web apps. It's unsurprising that web apps are insecure when the business model for most of them depends on them being insecure.
I'm not arguing that anything is hack-proof. But according to the article, it's impossible to have perfect security in a web app, so let's burn it all down. My counter to that is this: it's impossible to have perfect security anywhere. On any platform. Everything is hackable. Users most of all. So since we're complaining about how impossible this is, then we should all shut down our computers, go home, and find another way to make a living.
That's not going to happen, and it shouldn't. But what we can do is take a close look at why the security measures we could deploy typically aren't: in my opinion, it's very often a business decision more than it is an engineering failure.
When security comes up as a topic on native platforms, many technologists seem willing to take a hard stance: any back door, no matter how well intentioned, will be abused.
Web apps that depend on ad dollars are the definition of back doors.
Here's an idea: create a product that people want and charge people money for it. It simplifies your security model enormously because you don't have to choose what to leak to whom. You treat every leak as an existential threat to your bottom line.
With that as a driving mandate, narrowing the attack vectors down to gaming the users becomes a lot more doable very quickly. Then you move on to educating users.
Security of private information and money isn't a new game. People have been finding ways to steal property since the beginning of recorded history.
We're being pretty stupid if we think it's a new problem. Do people call for banks to shut down because it's possible to forge a check? Call for the Fed to shut down because it's possible to get robbed? Of course not.
But when a fundamental part of your business model is stealing from people, it can't be a surprise that other people besides you are also stealing from them.
I feel like the quote about democracy being the worst system of government except for all the others could easily be adapted to the webapp platform. It certainly has a plethora of issues, and I am not trying to gloss over that in the slightest; it would certainly be a logical fallacy to assert that just because it's the most popular platform it is therefore the best.
However, a lot of what appears to be recreating the same computer technology over and over is actually not recreating but selectively rebuilding platforms: picking up pieces of tech and ideas from the junk pile of previous ones and seeing whether they might actually be workable now. This process also lets us discard the dead weight of bad ideas nobody bothers to pick up and bring in.
It's like we do development by tossing useful ideas into a house. Then, when the house is literally bursting with all the orthogonal junk we've tossed in over the years, but which programs have come to rely on, we say: well, that place is a mess, let's start a new house. And we repeat the cycle, taking the best pieces from the old house as we have the resources to carry them over. Sometimes we have a particularly good and consistent plan of what we want in the house, so we can put up a lovely one... initially. Inevitably, though, as is the case when you are still discovering new ideas and techniques as you go, you notice your neighbor with some hot new feature, and rather than lose... um, guests (this analogy is getting pretty stretched), we say: OK, we'll add that, I guess, not wanting to be hopelessly outdated and lose out on the cutting edge of cool features and the dragon-chase of increased productivity, nifty bits of syntax sugar, cool tricks, or whatever.
So existing platforms typically can't be hoisted wholesale onto other platforms, any more than a house full of junk can be moved from the country to a different climate on a different foundation. It just usually doesn't work well, because the hoisted platform is typically at a stage of high refinement to its niche, built on several assumptions, even a few of which failing wipes out the ability of the hoisted platform to function, either outright or at an acceptable level.
I think a variety of competitive pressures from large players capable of making feature-bloated browsers is the best we can reasonably expect from a societal system built fundamentally around competition rather than collaboration, cooperation, and coordination. Even so, we face large locked-down platforms like iOS, where you cannot run any web rendering engine you please (e.g. Firefox and Chrome are just glorified Safari browsers, since Apple's App Store policies forbid custom rendering engines like Gecko or Blink) or any interpreter you please, or even (my pet peeve) set any URL with a %s as your search engine (only a list of four options is available), which puts a strong gate in front of users' freedom to choose how and what they run on, and how they interact with, their pocket computers.
Where am I going with all this? I guess webapps as a platform, as bad as they are, only exist as such because they developed as a race between different browser vendors to one-up each other with cool features. That process is like throwing useful stuff into a house. Getting several different platforms to agree on a standard for applications, and then implement it in a consistent, portable way, seems almost impossible to me.
This isn't really about killing "the web" though, it's about changing web development as we do it today. At least, when I read that title I assumed it was about the web as a whole, not just web apps and the way we develop them.
Ah yes: this open, ubiquitous platform used by hundreds of millions on a huge variety of devices, let's get rid of it. I hope this person is not one of those techies who claims to love "boring" technology.
What browser are you using, and how did you disable the JS? With Firefox ESR and either uBlock Origin (dynamic mode) or uMatrix to disable the JS, I am unable to see the images.
It is a triumph of programming ingenuity that programmers have been able to accomplish almost anything via a "web browser".
Anyone can argue the benefits. Even if there were few benefits, the novelty alone might be enough.
But does anyone ever consider the costs?
The analysis I have in mind is: costs versus benefits of using a web browser to do x, where x is anything and everything, no matter how important.
The "costs" are not costs to the programmer to implement but costs to users, e.g., risk of having their personal data stolen.
To give an example, weigh the benefit to Equifax customers in having their data accessible through a web browser versus the cost of having their data exfiltrated without their consent.
Or, weigh the "cost" of having to dial a toll-free number to order a credit report and not have one's data stolen online versus the "benefit" of being able to order a credit report with a web browser and having that data stolen online.
Websites can be used to effectively disseminate public information with relatively little security risk: for example, djb's tcpserver and httpd serving static web pages. In continuous use since the 1990s, these have never had any security issues to my knowledge. IMO, this level of software is qualitatively different from software that is released with security flaws which may or may not be fixed later (sometimes decades later).
IMO, using the web to distribute public information is a benefit that outweighs the costs. I am not worried about static websites, assuming the right software choices are made.
The blog post acknowledges this: "The web has issues as a way of distributing documents too, but not severe enough to worry about."
If Equifax had a static page served by djb's httpd showing the number to call to order a credit report, I would be far more impressed than if they were running a "web app" that took orders online and connected to some backend database of user data. For that specific use case, a very limited use of the web is the smart thing to do.
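To illustrate the posture (this is a toy sketch of my own in Node, not djb's software): a server that can only ever read files out of one directory has almost nothing to exploit, compared with anything wired to a backend database.

    import * as http from "http";
    import * as fs from "fs";
    import * as path from "path";

    const ROOT = path.resolve("./public"); // the only thing we can serve

    http.createServer((req, res) => {
      // Resolve the request against ROOT and refuse anything that
      // escapes it (the classic ../ traversal).
      const target = path.resolve(ROOT, "." + (req.url ?? "/"));
      if (target !== ROOT && !target.startsWith(ROOT + path.sep)) {
        res.writeHead(403);
        res.end("forbidden");
        return;
      }
      fs.readFile(target, (err, data) => {
        if (err) { res.writeHead(404); res.end("not found"); }
        else { res.writeHead(200); res.end(data); }
      });
    }).listen(8080);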
I would like to see more people opining that, for "serious uses", i.e., where the risks to the user are potentially serious, the web has limited utility.
The current thinking seems to be that the web has unlimited utility. For everything. We all know that with enough effort the "web browser" can be used to accomplish almost anything.
I remember an RFC many years ago from Marshall Rose that said something like "the web is the new waist". I also remember in the early 1990's, people were afraid to send credit card information via web forms.
"Unlimited utility". Today, many young people, including many programmers, see no difference between internet and web. They are synonymous.
"Unlimited utility". Maybe utility should be weighed against costs such as security risks.
IMO, the web has limited utility.
Would you sacrifice a little convenience, e.g. the option to order a credit report online, if it meant your data was not part of the data stolen from Equifax? I would.
The Web is fucking awful, and our industry's rapid acceptance of it represents a massive moral failing across all of us. We have ruined everything, and we need to take responsibility for our actions.
This article came off as a big whine-fest. Yes, security is an issue; it always will be. Yes, we need to find a better way. But without proposing a viable alternative or solution, it's just blah-blah, complaint-complaint.