What Web Can Do Today (whatwebcando.today)
354 points by porsager on Nov 29, 2015 | 127 comments



What the web is increasingly unable to do today: provide text content without requiring a code execution environment. This site is another example of that.

All non-application websites should provide all their content in semantic HTML at appropriate HTTP endpoints, with CSS styling (in as few requests as possible) as required per the design, and JavaScript (in as few requests as possible) that takes the semantic HTML and makes it interactive (potentially adding and removing elements from the DOM) as required per the design. The CSS should not depend on mutations resulting from the JavaScript, nor should the JavaScript assume anything of the applied styles (as the user agent should be able to easily apply custom user-styles for your site; e.g. Gmail only providing a limited set of styles that are managed server-side is laughable).
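To illustrate, a minimal sketch of that shape (the IDs and the particular enhancement are hypothetical): the markup carries all the content, and the script only adds behavior on top of what is already there.

  <article id="post">
    <h1>Title</h1>
    <p>All of the content is right here in the markup.</p>
    <a href="/posts/2">Next post</a>
  </article>
  <script>
    // Enhancement only: if this never runs, nothing is lost.
    document.querySelector('a[href="/posts/2"]').addEventListener('click', function (e) {
      e.preventDefault();  // optionally fetch the next post via AJAX instead,
      // ...swap it into the DOM and call history.pushState()...
    });
  </script>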

Thus, all content is readable and styled properly without requiring an arbitrary code execution environment. That is what the web was meant to be. Unfortunately, most "web developers" have made the web worse over the past 10 years because simple, functional, minimal technology is not impressive, and hipsters love to show off.

Nor does it help that there are few capitalist incentives for the web being open and malleable -- e.g. so users can easily use a different front-end for Facebook, or users can easily choose to avoid analytics or advertisements, or users might prefer to use the website rather than the app (providing access to personal details, contacts, location, tracking, etc).

The state of the web is emergent and I'm not sure what anyone could do about it (perhaps make a better browser?), but it really irks me when web developers pretend like they're actually doing something good or useful, or that the web is actually in a healthy state. In my experience, it's the people who don't talk about web development who are the best web developers; these are the people who don't wince when they write an HTML document without a single `<script>`.


You're talking about "progressive enhancement". It's a romantic idea, but it never happened, probably because it's too hard and the cost is not justified given most users run with their browser's default settings.

The precursor of the web made by Tim Berners-Lee dates back to 1980, but it was not based on HTML or HTTP. These happened later in 1990 and early 1991. But then CSS happened in 1994. And Javascript happened in 1995 at Netscape, but then Javascript was completely useless until Microsoft came up with the iframe tag in 1996 and then with XMLHttpRequest in 1999, which was later adopted by Mozilla, Safari and Opera. And people still couldn't grasp its potential until Google delivered Gmail in 2004 and Google Maps in 2005.

Not sure what "the web was meant to be" means; we should ask Tim Berners-Lee sometime. But in my opinion the web has been and is whatever its developers and users wanted it to be, with contributions from multiple parties such as Netscape, Microsoft, Mozilla, KDE/KHTML, Apple, Google and many others; it is a constantly evolving platform.


It was Microsoft with Outlook Web Access that really was the first big example of what was possible. I worked with several folks around that time and we were doing "rich" web apps in IE5 with XML data delivered from the server via xmlhttp and building the presentation in the browser with XSLT. It was really slick, especially at the time, to be able to filter and sort, do summary and detail views, etc. all without a page refresh.


CSS happened in 1996; for the first couple of years of the web we had no CSS, and all styling was done with inline attributes up to that point.

The frustrating bit is that CSS was supposed to separate content/structure from presentation, but now we have pages without content but with markup, where the content is loaded after the fact. This really overshot the mark.


Also note that the separation of content and presentation is mostly nonexistent as it's currently practiced. People like to talk about it as a Golden Rule, a holy virtue, and then proceed to write even worse code than they did with tables. Generally, if you find yourself writing a tangled mess of divs in order to support CSS tricks, you're not really separating content from presentation. HTML5 semantic tags helped a little, but I still rarely see them used live.

(I'm not really convinced that separation of form and content is a feasible goal anyway - some elements of form are also content at the same time - but it's still a good goal to have.)

> now we have pages without content but with markup where the content is loaded after the fact

This is absolutely ridiculous and it, combined with the idea of routing everything through the cloud in IoT / home automation applications, makes me wonder what happened to some good old-fashioned engineering sanity. It's like people are trying to create wasteful and insecure systems on purpose. ("And what would that purpose be," - the cynic in me asks - "maybe monetizing people's data?").


Have you considered that there may be an actual reason why people write a tangled mess of divs? Could it be because the entire model is crappy and people don't know what to do to make it show things the way they (or their clients) want?


In my experience, as a front-end developer, the usual reason I need a mess of divs and spans is to support a design that's not a good idea for a web page to begin with.


What is a web page, and why would a particular design not be a good idea for it?

If a web page is not a good idea, then what other technology should be used to achieve that design, as well as remain equally easy to distribute?

This mindset is precisely why native mobile applications continue to exist on the market, and why articles like this one fail to convince me. Nobody (except perhaps Facebook with React) tries to really fix the web to meet the demands it's been given. Instead everyone insists that the demands should change to meet the original vision and limitations of the web.

Regarding the HTML/CSS interplay, flexbox gives me some new hope that reconciliation is possible (getting both semantic markup and powerful / precise styling).


It's easy to find designs that are bad ideas for web pages. The most obvious culprit is a design that's excellent for a magazine layout. Such designs typically don't take into account the document flow nor the idea that different people open their browsers to different resolutions and browser window sizes. Heck, some won't even take into account different browsers with different capabilities. I think I understand where you're going with the "what is a web page?" bit, but it doesn't necessarily apply. Especially since a web page can be whatever anyone wants within the limitations of a browser, but that doesn't mean a design works based on some person's idea of what they think a web page is. In most cases a design is limited by the browser, not necessarily by the code of the page.

I have no idea what you mean by your second sentence.

And then, are you referring to my mindset or some other mindset? Because I fail to see how you can know my mindset on the matter. If it's the other, I would agree. Except I would say that "fixing" something, in terms of making it do something it wasn't intended to do in the first place, may well create more problems than it fixes. I would suggest attempting new things to see if they work, and there are some doing just that. I would point out that flexbox is an example of this.

I'm excited about flexbox and have started pushing to make use of it more often, when warranted. It's hard to switch current code to it, but I think it's worth it. But, in the end, someone will eventually start complaining about its limitations and that it needs to be "fixed". Then we'll be back where we started; it's inevitable.


My second sentence was the entire point. If a web page is not a good idea because it can't support certain designs, then what would you suggest instead (that also has the other properties of the web, such as easy distribution and ubiquity)?

I was referring to the whole mindset of "the web was not meant to do that". Well, yeah, it wasn't. But it's already doing that, so we have to do something to make it properly rise up to the challenge.


> This mindset is precisely why native mobile applications continue to exist on the market

And long live them because there is little about the mobile experience that's more infuriating than a web app that should have been made as native. The assumption of being always on-line is baked too deep into the web stack; I have yet to see a well-made web app that could not be made significantly better UX-wise by simply going native.


The UX problem isn't a matter of being online, but of not having a proper mechanism to express application UIs and styles (rather than document UIs and styles).

There are many PhoneGap applications out there, and they don't make any "onlineness" assumptions. Their UI does, however, still suck and does not behave as expected.


> it's too hard

It's only hard if you want it to be hard. Your tools should be handling most of it for you. If they don't, pick tools that aren't broken or badly designed. When I was writing websites in Rails 2.x, progressive enhancement was usually automatic (same views are rendered as a page or a dynamically-loaded partial).

Saying "It's hard because I want to write over-complicated pages with badly designed tools" isn't good engineering or good design.

> most users run with their browser's default settings

How, exactly, do you know this? If the answer is "analytics", you are missing an increasing amount of data.

> constantly evolving platform

Which is why you progressively enhance pages. Those of us that disable javascript for safety usually get blamed when this topic is brought up, but the main reason for progressive enhancement is that it's defense in depth. You don't know what the browser is, what options it has set, what bugs it may or may not have, or if extra page assets even made it successfully over the network.

Not bothering with progressive enhancement is shoddy programming for much the same reason you shouldn't skip the test for NULL after calling fopen(3).
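In code terms, that just means feature-testing before use, e.g. (a sketch using a real API of the era):

  if (window.history && typeof window.history.pushState === 'function') {
    // Enhance: AJAX navigation with a working back button.
  } else {
    // Do nothing: plain links still work.
  }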

edit: grammar


I don't know a single person IRL who turns off javascript. It's the browsing equivalent of running only RMS-approved software: possible in theory, but not very practical, and definitely rare. The business needs of modern commercial websites are hard to meet with progressive enhancement, and web apps basically require javascript to deliver a good user experience. Yes, you can build a personal website using progressive enhancement, and at one point I did, but I had to give up my progressive enhancement ways when I became a professional web dev because it was just not practical to do otherwise.


Web apps can be excused, but most web sites? Not really. What exactly happened that made it not practical to write simple sites? Did browsers suddenly stop rendering HTML unless you generate it in JavaScript?

It's not business needs, it's cargo-cult web development: going with the latest trends without a moment to stop and ask whether it makes sense or is actually what the users want.


What exactly is the issue with having the HTML generated with javascript? You can still use "view source", override CSS, or run Greasemonkey on it...


Why do you need to generate HTML with JS in the first place? There rarely is a need to do so on a website. There's definitely no need for, e.g., loading the content of a blog post dynamically just to achieve a fade-in effect[0]. And yet, much of modern web practice is silly stuff like that.

[0] - https://news.ycombinator.com/item?id=10646025


It's not only people who actively disable JS, though. See the "How many people are missing out on JavaScript enhancement?" blog post[0] by the UK's Government Digital Service (aka GOV.UK).

They calculated that 1.1% of users don't have the JS enhancements activated, and only 0.3% of those were browsers where JS execution was disabled.

[0] https://gds.blog.gov.uk/2013/10/21/how-many-people-are-missi...
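(For the curious, this kind of measurement is typically done with image beacons compared in server logs; the sketch below is illustrative, not GDS's exact markup. The gap between the base count and the js + noscript counts is the share whose JS was enabled but never ran.)

  <!-- base.gif: requested by every visitor -->
  <img src="/a/base.gif" alt="">
  <!-- js.gif: requested only when the script actually ran -->
  <script>document.write('<img src="/a/js.gif" alt="">');</script>
  <!-- noscript.gif: requested only when scripting is disabled outright -->
  <noscript><img src="/a/noscript.gif" alt=""></noscript>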


I would guess a majority of that traffic is going to be crawlers. It's probably more like a small fraction of 1% who are actually missing out.

EDIT: Looks like they covered this in comments. Even that doesn't convince me for some reason.


Crawlers load images from <noscript> tags? Some might. As googlebot runs Javascript, would an image inside <noscript> be indexed into google image search?

While that's an interesting question, one of my points was about this common type of claim:

> guess

You admit you don't actually know. I don't either, which is why I program defensively and test for any feature I want to use.

A lot of people seem to be projecting what they want to see, reinforced by confirmation bias. Choosing Javascript based analytics is a great way to conclude that almost nobody uses Javascript.


Good points.

https://addons.mozilla.org/en-US/firefox/addon/noscript/

https://chrome.google.com/webstore/detail/ghostery/mlomiejdf...

These are some very popular plugins, so my thinking is maybe the answer lies somewhere in between, where security-conscious users are whitelisting sites they want to run scripts on, not running in completely JS-disabled mode. Even though it's a subtle difference, I think it's relevant to the strategy one goes in with regarding the noscript tag.

So this would make sense to me. If that's right, that most noscript users are just running these plugins, then those folks know that they're going to miss out with some sites, or they'll selectively enable javascript on a case-by-case basis.

This would probably require more extensive review of logs, to see if the person who originally downloaded the noscript image eventually came back to the site with javascript enabled. The likelihood is this would only happen if the site was not functional when they visited with javascript disabled.


I don't know a single person that does not run an adblock anymore.


I happen to know 3 (two of whom I convinced into doing so --- and they're not even what I'd consider "advanced users"), and I am one myself.

The majority of informational sites are actually quite usable without JS. I'd say "the browsing equivalent of running only RMS-approved software" would be never allowing JS, but I'm more pragmatic and only enable it if necessary, for the sites I trust and must use.


The web is whatever we want it to be, but it's sad that lack of agreement on standards and openness of platforms severely limits what it can be now without a massive sea change. Because collective agreements tend towards entropy, those with the power in the agreements hold on tight to prevent decline. When we become more conservative and restrictive in order to keep collectives in place (open web >> Facebook), it limits our freedom to innovate, and development becomes "a fight to maintain" that benefits the few rather than "a forward-thinking step-by-step process" that benefits more and more people.


> What the web is increasingly unable to do today: provide text content without requiring a code execution environment. This site is another example of that.

I was about to argue that this website is actually an excellent example of what you seek -- each link has a separate URL associated with it that returns a page containing that content. The links point to these real URLs so they work with "open in new tab" and "copy link" and in browsers without JavaScript enabled, while the JavaScript that runs when you click it changes the page content via AJAX (possibly saving a few round-trips) and updates the current page URL so that back/forward history and the address bar both work just like you're navigating between real webpages.

And this works perfectly in Firefox (with JavaScript) and almost perfectly in Lynx (the table of contents still fills the first screenful, but that's hard to fix since Lynx doesn't support CSS). But it completely fails if you have JavaScript disabled in Firefox.

Every page starts with the table of contents visible and the content collapsed (through CSS). The page then seems to assume that JavaScript will be able to immediately switch the page to the correct view (i.e. the site is broken if you have working CSS but not JavaScript). Navigation to a given page directly should start the other way by default, and to make that happen is just 21 missing characters (` class="page-feature"` on the <body> tag). However, this unfortunate error completely ruins this otherwise beautiful example of progressive enhancement.


> I was about to argue that this website is actually an excellent example of what you seek

> But it completely fails if you have JavaScript disabled in Firefox.

I'm not sure what point you're trying to make.

We've known for a long time that progressive enhancement IS possible, it's just that very few sites bother to design for that. Are you just saying it's difficult and that this site "almost" made it?


The issue when JavaScript is disabled is now fixed, thanks for pointing this out!


It's the Flashification of the web. People who wanted to show off, or code web apps, used to use Macromedia Flash. People complained about it, in part because sometimes if you accessed a site without the Flash plugin you'd see a blank page.

But Flash was great in many ways, and it was self-contained in objects, so websites were mostly still websites. JavaScript had been around for a long time, but there was still a cultural norm, which most people respected, of not requiring JavaScript. This was mainly because a lot of browsers still didn't fully support it, or people had it turned off. It was also the era when some people had cookies disabled.

Then once Adobe bought Macromedia (and Flash with it), and then Apple blocked Flash, it really killed the Flash way, and all that spilled over into HTML with HTML5 and the new cultural norm of kids who are more concerned with showing off socially than with the meat and potatoes of hypertextual information.

It should've been obvious that there was a need for a new web, for code and multimedia. But in the .com boom nobody would dare try to start with something unpopulated, since it'd risk losing their chance at fortune.

Today, instead, maybe we should go in the opposite direction and create a new old web: a hypertext network that specifically only works for HTML, so people can keep this one to morph into an app network, and we won't lose the text-linking place we've grown accustomed to.


> we'll not lose the text-linking place we've grown accustomed to.

It's just a minority of users who are accustomed to the old web.


Rad idea. How would one initiate a new network of this kind separately from what exists today? And what kind of interoperability would it have? Can't imagine it could go cold turkey!


> Thus, all content is readable and styled properly without requiring an arbitrary code execution environment. That is what the web was meant to be.

In other words, it was supposed to be a worldwide hyperlinked document library --- and we have mostly achieved that goal, although it is a library wherein you are constantly tracked and bombarded by books flying off the shelves at you, screaming at you to read them, and most of the books consist solely of ads with very little useful informational content.

> In my experience, it's the people who don't talk about web development who are the best web developers; these are the people who don't wince when they write an HTML document without a single `<script>`.

Agreed completely. The ones who write information-dense HTML pages, often by hand, would not be considered "web developers" nor would they consider themselves to be; but they are what the web needs most. I've done that, and I don't consider myself a "web developer" either.

> it really irks me when web developers pretend like they're actually doing something good or useful, or that the web is actually in a healthy state

I wouldn't doubt that they genuinely feel like what they're doing is good or useful; I've noticed the appeal of "new and shiny" is especially prevalent in the web development community, with the dozens of frameworks and whatnot coming out almost daily, proposals of new browser features, etc. Very little thought seems put into the important question of whether we actually need all this stuff. It's all under the umbrella of "moving the web forward", whatever that means. But I think we should stop and look back on the monstrosities this rapid growth has created.


Against what platform should we compare the web to decide what it should look like? I think everyone has different criteria.


Strong opinions on how things should be, but no arguments for why.

That is never going to convince me.


It's because his post is entirely opinion, albeit one that frequently shows up at the top of HN comments sections because it's such a popular opinion in the hacker community that we forget it's not actually the majority's opinion.


I tend to think rants like this are just Luddism, but all this JavaScript and all these external resource requests are ruining the experience of the web. Holy crap is everything slow now. On desktop we've all gotten into the habit of opening things in new tabs and waiting for them to load while doing something else, but on mobile, where that workflow isn't as easy and you have to watch a page load? I'd say HN is one of the very few sites I can stand anymore.


I encourage you to try the "NoScript challenge" - disable JS, enabling it only for sites you often use and which need it, and browse the Web for a week. If you tend to indulge in rich multimedia you're probably going to give up soon, but if, like me, you're just after text and image content the majority of the time, you might actually prefer it. Pages load nearly instantly (or not at all), and there are no more intrusive popups/popunders/slideovers/etc., or other annoyances like disabled right-click, blocked text selection, or text being stuffed into your clipboard.

The "almost all sites have JavaScript and will break if you disable it" statement is common, and while I agree that the vast majority of sites do have JS on them, whether or not the stuff that will "break" on them if you disable JS is actually of any use to you is questionable.


I think this is because so many websites try to shove ads and tracking crap down your throat.

It's not even particularly difficult to pull off for certain websites. For example, my blog uses React and it does server-side rendering, so it'll work without JS at all. However, if you do have JS enabled, it'll let you avoid doing full-page reloads to navigate around. I totally agree that for content-focused websites, you tend to get a better experience by limiting gratuitous JS abuse.

However, this isn't very feasible when you're building highly interactive applications. In the case of something like Facebook, they do have a version of their app that works without JS... But how many people can afford to maintain multiple versions of their applications?

I think a good compromise is achievable with well-documented APIs, or even better with a public GraphQL schema! If we don't use magic APIs to build our frontend app, you can build a different frontend that's tailored to your needs.


What we need is a new protocol: something that lets an author write and publish text documents, marked up with basic styling, with "hyperlinks" to other text documents. The protocol could allow embedding of simple inline figures as well. Users would run a "browser" whose function was limited to requesting and displaying these documents.


You mean like AMP?

https://www.ampproject.org/

And yes, it's pretty silly that this is a thing.


> Unfortunately, most "web developers" have made the web worse over the past 10 years because simple, functional, minimal technology is not impressive, and hipsters love to show off

No, it's because when I go into work, someone says to make it a certain way, and if I want to get paid, I have to. If you want to blame anyone, blame designers who see proof-of-concept stuff from developers and throw it in designs.


This comment reminds me of Maddox and his 90s-style HTML blog: http://www.thebestpageintheuniverse.net http://maddox.xmission.com/


Why is the Network Information API still a thing? I don't feel like it should even be listed on this website, as it's an anti-feature. It only encourages discrimination on connection types, as it doesn't expose the only important part: is this connection metered?

Developers are going to use this API to serve me higher resolution assets over my metered WiFi & Ethernet connections (assuming they are unmetered) and lower resolution assets over my unmetered cellular LTE connection (assuming it's metered).

WiFi vs cellular vs Ethernet is not important. What's important is the usage policy behind the connection, and in its current state, the Network Information API can only be used to harm my user experience.
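Concretely, the pattern being objected to looks something like this (a sketch; the API was vendor-prefixed and unevenly implemented at the time, and the two loader functions are hypothetical):

  var conn = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
  if (conn && conn.type === 'cellular') {
    loadLowResAssets();   // assumes cellular means metered: often wrong
  } else {
    loadHighResAssets();  // assumes wifi/ethernet means unmetered: also often wrong
  }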


How can a web browser or even native OS determine if a user's wifi network is metered? I agree it would be nice in theory if there was some kind of "dollars per MB" API, but I don't see how web browsers could implement that in practice.

I work on a podcast app. When people are downloading hundreds of MB or more daily, they emphatically want control over when downloading takes place. In practice, almost all of them are satisfied with a binary choice between cellular or not. Those who do sometimes use expensive or slow wifi networks can blacklist them, which is also what network info APIs support.

There are also other applications such as metering apps, utilities that trigger actions upon network changes. Some very popular Android apps (e.g. Tasker, IFTTT) rely on network info for that; web falls further behind if web apps can't access that data.


Through DHCP, the DHCP server can set different DHCP-options to notify clients what type of connection it is. For example: Android Phones set up to be a hotspot will set DHCP option 43 to a value indicating that it's a metered cellular connection.


I had no idea DHCP could even do that. What do Android devices do on unmetered cellular connections? How do they know if the connection is metered or not?


In Android settings you can set whether your connection is metered or not, and set thresholds / warnings.


In Windows it's possible to set a wifi connection as metered. I've used this when tethering to my phone. So this info could be passed into the browser.


Usage policy is not necessarily metered vs free. It's a mixed bag of user expectations, developer needs and technical properties of the connection. Connections are by no means only unmetered broadband and metered LTE. I have recently experienced several connection "quirks" that require a different approach:

  * punishingly high RTT
  * packet-per-timeframe ratelimit
  * throttling by queueing packets
  * instability (some requests get served in tens of ms, others get lost in the void)
  * connection jumping from one provider to another (e.g. mobile/wifi while next to home)
For example, jaggy GPRS in the middle of nowhere can mean two things: I'm working in a field and need to check something as quickly as possible, without ads and other unnecessary cruft hindering that, or I'm resting in a cabin and am willing to wait for a "proper" app to load.

I may also have two connections, fast metered and slow unmetered, and would like applications to load important stuff fast over the metered network and cosmetics over the unmetered one. Too bad I have to pull a whole JavaScript framework first in order to see anything, because web components, and it's uncacheable because it's webpacked with the application code.


I think this is important in the case of the 2G/3G/LTE distinction, where phones that can access only 2G networks should get served a "light" version of websites.


This use case is already handled by navigator.connection.downlinkMax. There is no reason to include navigator.connection.type in this API if you want to serve different content based on downlink bandwidth.
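Something like this sketch, assuming a browser that implements the draft (downlinkMax is in megabits per second; the CSS class is a hypothetical hook):

  var conn = navigator.connection;
  if (conn && typeof conn.downlinkMax === 'number' && conn.downlinkMax < 1) {
    document.documentElement.className += ' light-version';  // serve the "light" experience
  }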


I think it's likely that most people would prefer the "light" version.


Absolutely not. Have you tried 0.facebook.com? UX nightmare!


Network type can be used for more than just serving higher-resolution assets. It can also be used for doing aggressive pre-fetching of assets that may not be needed when on wifi. For example, my native RSS client will pre-fetch images from blog posts when it refreshes on wifi, but won't do that on cell. It's not unreasonable to think that various web apps might want to do the same sort of thing.


That's the same issue though. Your RSS client is assuming that all cellular connections are metered and all WiFi connections are unmetered, so it intentionally uses less data when you are on a cellular connection.


Which is usually a correct assumption. But it's more than just metered, it's also assuming that my wifi connection is likely to be faster and therefore downloading a bunch of assets in the background will have less of an impact on my available bandwidth.


'usually' you mean in the US?

While I was in the UK I had unlimited data on my mobile but a 100GB limit/month on my landline connection


Metered landlines aren't uncommon in the US, especially in areas with poor telecom infrastructure. I think if your app is worried about amounts of data relevant for metered cellular (which can be in the 2GB range in some places, including the US), the metering on landlines is virtually infinite in comparison.


I know enough people whose mobile data is both faster and cheaper than their home broadband that I cannot agree with your statement that this is usually a correct assumption.


More stuff the web can't do that native apps can do:

* Request stuff from arbitrary URLs without a CORS proxy

* Open TCP and UDP sockets

* Be in the play store/app store AS a web app (home screen installation isn't enough; people still search for your thing in the app store, so even if the other problems aren't issues for your app, you still have to wrap your web app in a dummy native framework and upload it if you want users to discover you)

* Scroll like a native app without Safari's stupid rubber band scrolling the entire app at very erratic times

* Barometric sensors

* Background geolocation, background anything really

* launchActivityForResult so you can work with other native goodies

* Intent resolution for native actions

* Geolocation when the user has inadvertently blanket-blocked Geolocation for all of Safari instead of per-webpage. Thankfully Chrome comes pre-approved on Android and you can't accidentally make this mistake as a user

* Face tracking and other OpenCV-based stuff (it's been done, but JS is still not fast enough on mobile to handle these jobs)

* Display long lists of styled content and scroll without stalling

Still, though, I think touch gestures are really the killer missing feature. There isn't any canonically-supported way to do things like a simple Android ViewPager or pinch-to-zoom. You end up implementing a bunch of spaghetti code to do these things even in the best frameworks (Meteor, PhoneGap + Polymer, Angular et al.), and then you find out the way your spaghetti code reacts is a tad different from the way someone else's spaghetti code reacts to the same gestures. This stuff really needs to be standardized at an OS, browser, or at least JS-framework level.
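For example, pinch detection gets hand-rolled roughly like this everywhere (a sketch; `el` is whatever element receives the gesture), and every implementation ends up feeling slightly different:

  var startDist = 0;
  function dist(touches) {
    var dx = touches[0].clientX - touches[1].clientX;
    var dy = touches[0].clientY - touches[1].clientY;
    return Math.sqrt(dx * dx + dy * dy);
  }
  el.addEventListener('touchstart', function (e) {
    if (e.touches.length === 2) startDist = dist(e.touches);
  });
  el.addEventListener('touchmove', function (e) {
    if (e.touches.length === 2 && startDist > 0) {
      e.preventDefault();                       // stop the browser's own zoom
      var scale = dist(e.touches) / startDist;  // >1 zooming in, <1 zooming out
      // apply the scale to your content, clamp it, animate it yourself...
    }
  });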


I'm not saying that native apps > web apps isn't true, but…

> * Request stuff from arbitrary URLs without a CORS proxy

CORS isn't a proxy; it's a browser policy/algorithm that allows a site the opportunity to say "yes (or no), other websites may (or may not) make AJAX requests here". The native app (unless you've informed it somehow, or it has done something malicious) doesn't have the user's cookies, whereas the browser does, and needs to be a bit more cautious.
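(Concretely, the opt-in lives in the response headers; an illustrative exchange with made-up hosts:)

  GET /api/data HTTP/1.1
  Host: api.example.com
  Origin: https://some-other-site.example

  HTTP/1.1 200 OK
  Access-Control-Allow-Origin: https://some-other-site.example

Without that last header, the browser may still send the request, but it refuses to let the requesting page read the response.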

> * Open TCP and UDP sockets

There are websockets, which aren't the same, I'll admit. Frankly, I'm not sure I want my web pages to be able to arbitrarily open TCP/UDP sockets. (I don't really want my native apps to be able to on whim, either…)

> * Barometric sensors

You mean like current air pressure? Are there common devices out there with these? (None of my computing devices have anything like this, for example.) (and what would I want this for?)


> CORS isn't a proxy;

I don't think the GP was implying that.

If you don't control the upstream content you don't necessarily control the CORS settings. Hence the need for CORS proxies to work around an ideal introduced by the major browsers in the last few years.

In the real world you may well have the right to access third-party content, but in many cases you'd require them to expend engineering resources to perform the very simple task of enabling a simple set of headers, so you end up having to proxy content.

Take Chromecast for example - it takes CORS to the extreme requiring it set on HLS manifests and .ts video files, meaning any proxy solution has to take the full cost of proxying the full video content. MP4 files don't have this restriction, nor do any browsers.

To me it very much feels like the web is splitting into two - Sites that you stumble upon and want completely sandboxed, and those you come back to repeatedly and provide enough value for you to click "yes, have my location"..


> > * Barometric sensors

> You mean like current air pressure? Are there common devices out there with these? (None of my computing devices have anything like this, for example.) (and what would I want this for?)

Most modern phones have a barometric (yes, air pressure) sensor included. Having semi-accurate altitude data helps the GPS receiver obtain a faster location fix, but it can be used for other things as well.


> Are there common devices out there with these?

Most Android devices since maybe late 2012 have had one. It's used for enhancing GPS lock, and whatever developers can dream up. All I've heard about it being used for is a distributed air pressure sensor network used for weather prediction, tracking, etc. [0]

0: https://www.pressurenet.io/


Yes. The barometric sensors in the iPhone 6 and most high-end Android phones of the past 2 years are sensitive enough to determine +/- 1 floor of building movement, among a whole lot of other things. Very useful if you're writing a hiking app, want to determine if the user is in an airplane or not, or are trying to do vertical geolocation in a vertical city like New York or Hong Kong (subtract the user's barometric reading from the local weather station's and you'll have it accurate to +/- a few floors), and a whole lot of other endless possibilities.


> * Request stuff from arbitrary URLs without a CORS proxy

I think OP probably means a proxy server on your domain which can make arbitrary requests to a third party, in order to get around CORS restrictions:

  browser <--[XHR]--> my-server <--[HTTP]--> third-party
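A minimal sketch of such a proxy in Node (no URL validation or error handling to speak of; the `/proxy?url=` shape is made up):

  var http = require('http'), https = require('https'), url = require('url');

  http.createServer(function (req, res) {
    var target = url.parse(req.url, true).query.url;   // e.g. /proxy?url=https://third-party/...
    https.get(target, function (upstream) {
      res.writeHead(upstream.statusCode, {
        'Access-Control-Allow-Origin': '*',            // let any page read the relayed body
        'Content-Type': upstream.headers['content-type'] || 'application/octet-stream'
      });
      upstream.pipe(res);                              // stream the third-party response through
    }).on('error', function () { res.writeHead(502); res.end(); });
  }).listen(8080);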


I think the parent knows exactly what CORS is and you're misunderstanding.

There's no fundamental reason browsers can't be configured to not send the user's cookies when making XHR requests, and in fact there's already a mechanism for a webpage to ask certain kinds of CORS requests to be "anonymous":

    The "anonymous" keyword means that there will be no exchange of user
    credentials via cookies, client-side SSL certificates or HTTP authentication
https://developer.mozilla.org/en-US/docs/Web/HTML/CORS_setti...

Because of this arbitrary technical limitation, if a webpage wants to make a request to an arbitrary URL equivalent to what a native app could ("anonymous"/cookie-free, of course), it has to make an AJAX (or WebSockets) call to an intermediary server running native code to do so, which is what the parent is referring to as a "CORS proxy".

--

> > * Open TCP and UDP sockets

> There are websockets, which aren't the same, I'll admit. Frankly, I'm not sure I want my web pages to be able to arbitrarily open TCP/UDP sockets. (I don't really want my native apps to be able to on whim, either…)

Sure, whatever, you don't want arbitrary native apps being able to open arbitrary TCP/UDP sockets, but you're missing the point. We both agree that it should be possible to, somehow, download an FTP client and somehow configure security settings so that it can work, right? (Restricted ports and filesystem or chroot or whatever.) There's no fundamental reason browsers can't have the same security framework so that you can open an FTP client web app and somehow configure browser security settings so that it can work, right?

--

> > * Barometric sensors

> You mean like current air pressure? Are there common devices out there with these? (None of my computing devices have anything like this, for example.)

According to XKCD, "a lot of new Android phones" have a barometer sensitive enough to "actually see the pressure difference between your head and your feet."

https://what-if.xkcd.com/64/

(Obviously that's not an authoritative reference, but I think it sufficiently addresses your question, and it's a fun article.)

> (and what would I want this for?)

What a great question, I'd love to know too!!


What you characterize as an "arbitrary technical limitation" is there for a good reason, unfortunately: even anonymous requests made by the _browser_ are not equivalent to anonymous requests made by the _server_ the web page came from.

That's because the browser and the server have different routing tables. Or to put it more simply, the browser can see the stuff on your LAN, behind your firewall, while the server can't. The point of CORS even for anonymous requests is to prevent web pages from being able to exfiltrate stuff out from behind firewalls, so you don't have random data leaks just because someone in an organization visited some random website. (Yes, you also need them to not download and run random binaries, not have any unpatched exploits in their browser and OS, etc... security is hard.)

Now obviously native apps _can_ exfiltrate stuff out from behind firewalls. Which means that if you really care about your LAN's security you have to assume or enforce that none of the phones connecting to it have any sketchy apps installed...

Note that there is in fact discussion about having a concept of "installing" a web app that you trust sufficiently, which would relax the browser security sandbox such that you get more of the capabilities you're talking about here.


You're right, there's a really good point here that I didn't acknowledge. The way I think of it is that the browser's security model, while not particularly secure from an academic or theoretical standpoint, is extremely widely deployed, widely studied, and well-understood, and in that very pragmatic sense is very well-trusted. Hence, every little thing that affects the security surface area must be exquisitely scrutinized, necessarily resulting in slow advancement in any such features.


IMHO the true point of a web app is to replace the native app and kill the need for developing every app twice (Android + iOS), as well as moving toward the vision of putting ALL apps "in the cloud" and fully OS-independent.

In order for this to happen, those web apps need to have every capability a native app can have, including access to the local area network, if you want to do "things that native apps do". What if I want to write an HTML5 SSH client so that I don't have to rewrite it twice for iOS/Android? An SMB client? A custom streaming protocol? An HTML Arduino IDE that can flash code over the LAN?

Ideally, a web app could simply have a manifest of permissions, present this list to the user at an appropriate time, and let the user decide whether or not to grant them. Opening arbitrary TCP sockets should also be a permission, just like native apps need to add this to their manifest. Native apps already ask for permissions when installed; web apps should be required to do the same, and in return, be granted the same set of capabilities.

The idea that "web apps should be restricted" is at odds with the vision of using web apps to realize a true cross-platform app programming ecosystem.
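For what it's worth, something along these lines existed in Firefox OS's manifest.webapp format for packaged apps; an illustrative sketch from memory, not an exact spec:

  {
    "name": "HTML5 SSH Client",
    "permissions": {
      "tcp-socket": { "description": "Connect to SSH servers you specify" }
    }
  }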


Yes, this is why people are talking about having "installable" web apps. It sounds like you clearly agree that some sort of explicit user opt-in is needed to get out of the sandbox, which is good; I've seen some people argue that this is too high a bar.

Now it might happen that this will effectively happen as a result of convergence and standardization of browser extension APIs. It'll be interesting to see what happens there.

That said, the manifest of permissions idea is fundamentally broken in at least two related ways:

1) Most users have no rational basis on which to decide whether to grant the permission or not. In fact, I would argue this is all users, due to issue #2:

2) The permissions are either overbroad, such that pretty much any app that wants to do anything interesting asks for them (this is the "all my apps want to be able to read my contacts and browsing history" syndrome), or there are so many of them that the list becomes way longer than any user would ever read at install time.

Your example of a permission for opening arbitrary TCP sockets is an excellent case in point. As a standalone permission, what fraction of users do you think would understand what is actually being granted? Even if you restrict to computer professionals, I suspect that most would not understand the implications without a somewhat lengthy explanation of what the permission actually allows. And even then, in practice it would be rolled into some other, broader permission that would likewise not indicate the security implications of granting it.

http://robert.ocallahan.org/2011/06/permissions-for-web-appl... has some more in-depth discussion of the issues with this model.

> Native apps already ask for permissions when installed

Yes, and in practice everyone just grants whatever permissions are asked for without really even reading.


Of course you don't want web pages opening TCP and UDP sockets. The web isn't for writing apps it's for publicly editable documents with hyperlinks.


> Scroll like a native app without Safari's stupid rubber band scrolling the entire app at very erratic times

This is defeatable, either with `-webkit-overflow-scrolling: touch;` or other techniques.
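(For example, one common pattern is to contain scrolling to a single pane so the body itself never rubber-bands; a CSS sketch:)

  html, body { height: 100%; overflow: hidden; }
  .scroll-pane {
    height: 100%;
    overflow-y: auto;
    -webkit-overflow-scrolling: touch;  /* momentum scrolling on iOS */
  }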

> Background geolocation, background anything really

You mean, Service Workers?
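(A minimal, feature-detected registration sketch; the worker script path is hypothetical:)

  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js').then(function (reg) {
      // reg is later usable for push subscriptions, offline caching, etc.
    });
  }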

> Display long lists of styled content and scroll without stalling

I could be wrong, but I think this is a Safari-specific concern. Android Chrome has never had this problem for me. In older versions of iOS, Safari was not able to execute Javascript while scrolling which could be perceived as "stalled" rendering. I'm not sure if that's what you mean, but Apple has been gradually working to allow JS execution during scroll. But they're so behind (in years) that I wonder if there are still some cases where the execution can't keep up with the scroll.


The web is crap at long lists in general. Try to build something like Mail on iOS or OS X, which shows you all of your email in a big list: the best you can do is infinite-scroll solutions like Yahoo! Mail, which are horrible on every browser.


I've definitely had problems with this. There's a lot of bloat that comes with the DOM, especially if you use a framework like Polymer or Angular. Even something as simple as a material-design text entry field requires a horrific amount of CSS bloat to accomplish internally, and if you multiply that by, say, 1000, you're in for trouble.


Perspective is important :) Putting 1000 material design inputs on a page isn't something a sane web app would do. The same is true for a native app.

Re. "bloat" of components. I think it's important to remember everything something like paper-input is doing for _you_. As a developer, I no longer have to think about: validation, animations, a11y, keyboard, knowing all the MD spec configurations, labels, underlines, colors, fonts, typography, margins, composability, x-browser interop,... Sure, if you were implementing your own input, you could cut corners and/or leave out what you didn't need. However, if you were creating a highly reusable, highly configurable element, you'd be implementing the same amount of "bloat" yourself.


> Putting 1000 material design inputs on a page isn't something a sane web app would do.

Why not? What if I want to make a material design spreadsheet app? Just a random idea, but again, the people who make basic OS-style UI elements shouldn't be judging what a "sane" app should and shouldn't do. This limits creativity.

> However, if you were creating a highly reusable, highly configurable element, you'd be implementing the same amount of "bloat" yourself.

Sure, if I implemented it in JS. Implemented natively, a MD paper-input consumes very little resources because it isn't a nested div hell with CSS and a shadow DOM.

What would be more awesome is if there were a way to, using HTML and JS, "ask" the browser to call and insert a native element and be able to interact with it. On iOS, it should be an Apple-styled element, and on Android 5.x, it should be a MD element. They should be the genuine native element inserted right in place, not a DOM hell designed to "look" like the native thing.


I would conjecture that any app that's doing thousands of inputs will probably use virtualized lists so performance is predictable. And you can do that with JS too :D.
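A sketch of that idea, assuming fixed-height rows and a scrollable `container` element; only the rows currently in view exist in the DOM:

  function virtualList(container, rowCount, rowHeight, renderRow) {
    var spacer = document.createElement('div');        // keeps the scrollbar honest
    spacer.style.position = 'relative';
    spacer.style.height = (rowCount * rowHeight) + 'px';
    container.appendChild(spacer);

    function update() {
      var first = Math.floor(container.scrollTop / rowHeight);
      var visible = Math.ceil(container.clientHeight / rowHeight) + 1;
      spacer.innerHTML = '';                           // drop off-screen rows
      for (var i = first; i < Math.min(first + visible, rowCount); i++) {
        var row = renderRow(i);                        // caller builds the row element
        row.style.position = 'absolute';
        row.style.top = (i * rowHeight) + 'px';
        row.style.width = '100%';
        spacer.appendChild(row);
      }
    }
    container.addEventListener('scroll', update);
    update();
  }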


Polymer and Angular 1.x do come with a lot of bloat. I wouldn't call a MD text entry field "simple". Material Design was released ahead of web technologies really being able to manage it well, at least on mobile.


Polymer itself is 47KB. If you add the web component polyfills (16KB), that's 63KB minified and gzipped. I tend not to consider the polyfills because they're not Polymer and are a stop-gap that goes away over time.


* Offline mode:

If you have large amounts of structured data and high performance requirements for it... well, good luck.


I think this is a well thought out list. I find it saddening that the 'health' of the web has deteriorated to a point where it's arguably uncompetitive for a vast number of modern applications including information discovery. I find it troubling that information discovery is moving away from the open web into non-neutral native application environments such as Facebook, Snapchat, and Twitter. Sure, information discovery on the web was driven by search engines. But I think search engines are really just part and parcel of the web (and arguably a feature that ought to have been built into web browsers from the beginning). A search engine doesn't express bias towards any information source (at least not by principle).

What troubles me is imagining a future where a vast majority of information that we consume is selected and curated by commercially driven, black box, non-neutral platforms.

I read an article recently about how Google is experimenting with removing the need to download apps by 'streaming' app content through a search box. Presumably, this is how Google search stays relevant in the post-web era where more and more information flows through walled gardens installed on mobile devices. Now, instead of re-inventing the web, shouldn't we be working towards 'fixing' the one that exists? This list provides a decent 10,000 ft overview of the problems.

Eager to hear your thoughts.


This is really awesome :)

The truth is that the 'web' is really becoming the cross-platform application architecture.

Just a few years back I despised the very idea of this -- I want to program in my favorite language, not JavaScript (no matter how nice it is these days). I'd love to write applications that can act like native ones while also being cross platform (and use multiple threads at that!).

The fact that WebAssembly[1] intends to eventually add support for multi-threaded programs and languages aside from JavaScript (including GC'd ones) is amazing. Personally I believe these are the two biggest blockers for most people who would want to write an app using web technology.

Although they explicitly mention that their goal is not to replace JS in their FAQ, that IMO is more akin to a _"nothing will ever replace C"_ statement rather than a _"no apps will be written using other languages than C"_.

[1] https://github.com/WebAssembly


The truth is that the web is getting deeper and deeper into an identity crisis. And if any platform is far from having a coherent architecture, it is the web.


Exactly the same could have been said 15 years ago, and yet I'd argue it is the platform which has advanced the most in terms of both use and technology, and all this in spite of great standardization difficulties that other platforms didn't have to deal with.

I expect the same to happen in the future on both desktop and mobile platforms, despite the FUD.


It certainly has advanced the most, but that's a misleading presentation. It has advanced the most technologically only relative to its own starting position, i.e. it spent all this time catching up from a blank slate to where everyone else already was (and still is).


The web does things those other platforms don't. Like run unsigned applications from any server ephemerally on any machine or system.

It's not playing catch-up so much as adding powerful features without sacrificing its core attributes.


Yes, and yes. The things it can't do are a pain, but the things that it can do are things that are very difficult to do in a standardized way with desktop/mobile. A cross-platform UI that simply works on just about any device was considered the holy grail of computing at one point. Everyone gave up on it after a while, and very quietly the web just "became" the platform that enabled such a UI, without limiting how such a UI can be created.


The web has problems, for sure, but it's still way more unified than apps, which have two major, largely incompatible, ecosystems: Android and iOS.


I really wish there were a way to check what other popular browsers can do without having to install them and open up the website.


Do you know about http://caniuse.com/ ?


I like the concept, but I'm with you r3bl, for it to really be useful it needs to be more explicit –at a glance– about which browsers support which features. For example, more like https://kangax.github.io/compat-table/es6/


Just click in on any of the features and you'll get details including the full cross-browser support.


This only works for some of them.


Proximity sensor, ambient light sensor, vibration, access to contacts, change screen orientation... gee, I didn't know Firefox could do all that on my laptop!


Browsers are great at rendering.

But they are terrible when it comes to exposing the raw power and capability of a machine to web apps that the user wants to trust. They spec their API implementations by committee (and committees of committees) and they rarely implement any spec in its entirety. The web is becoming increasingly fragmented as a result.

The web is good for "web pages", but bad for "web apps". The web currently has no concept of a trusted web app.

For example, we have been waiting for years for browsers to give web apps some way to access the filesystem, and all we have is an open dialog and a file instance (and the debris of the failed filesystem api). And when will web apps (not "web pages") get TCP or UDP? The browsers will never be able to match the module ecosystem and core power of Node.

The way forward:

1. Give the power back to users. Give them a boolean way to indicate that they trust and want to install a web app.

2. If the web app is trusted and installed, give it access to Node.

In a matter of months, this could exponentially boost web apps, lighten the bloated browser codebase, keep the focus on browser rendering, and keep committee fingers off web app innovation.


> The web is good for "web pages"

That is, after all, what it was designed for. Making web pages interactive in any way beyond a <form> has always been a hack.

> give web apps some way to access the filesystem

> And when will web apps (not "web pages") get TCP or UDP?

First, if it's in a browser, there isn't much of a distinction between "web apps" and "web pages". The browser, by definition, exists as a sandbox that renders unsafe data and code. Allowing any kind of access to the filesystem or TCP or UDP will hopefully never happen.

All networked software needs to implement security as the first and highest priority. If you don't, you're putting the people that use your software at risk.

The browser must always[1] be limited in what it can do. If you want to do more (which is fine), write a standalone application. If your favorite platform doesn't let you write such applications, complain to the vendor or change to something that isn't hostile to software development.

> the module ecosystem and core power of Node.

That ecosystem is tiny compared to what is available in /usr/lib64/. I like Javascript, but there is a lot more to computing outside that single ecosystem.

[1] Unfortunately current browsers are already over the line with what they allow pages to access.


Great site - quick usability notes:

- Left align all features

- Align icons

- Align check marks

- Consider using heavier-looking icons for supported/not-supported.

- Promote the checkmark / ex legend to the top of the page.


There are some issues with the code. navigator.contacts, for instance, is not supported in my browser (or any browser at all, except for the most recent mobile Firefox - I'm on a desktop), yet the website says that indeed, my desktop browser supports it. Same is true with navigator.getBattery(), which supposedly will run fine in my console (it doesn't).
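(For reference, the checks that ought to be reporting this, as run on a desktop browser of the time; a sketch:)

  console.log('contacts' in navigator);        // false on desktop browsers
  console.log(typeof navigator.getBattery);    // 'function' only where implemented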


What's the framework powering the site itself? I'd assume Bootstrap, but it has a Material Design look to it.


It's on GitHub[0], looks like Bootstrap with a material design addon.

[0] https://github.com/NOtherDev/whatwebcando


I'm very surprised to see this claim that Safari on OS X[1] can't handle Push Notifications, because it's been able to do that for a few years now. It's just not using Service Workers. It uses a separate solution built around the Apple Push Notification service. Granted, this isn't cross-platform and isn't a W3C standard, but it is a capability that can be used today. But of course the whole section on "Push Notifications" is actually just a description of Service Workers.

[1] Which isn't actually a listed browser, but it has the X next to it when viewed in Safari and the underlying caniuse.com data is for Service Workers, which Safari doesn't support.


This website is about implementation of W3C standards, from what I grasped.

So it is not surprising that it doesn't claim Safari can handle push notifications, as it cannot.


No, it's not. It's about HTML5 device APIs.

> Can I rely on the Web Platform features to build my app? An overview of the device integration HTML5 APIs

It's just relying on data from caniuse.com about W3C standards, but the page itself does not say anywhere that I can see that it's ignoring non-W3C browser-specific APIs. Which is rather misleading.


If I'm not mistaken, on iOS you can only do it if you wrap your web app into a native app. On OS X, however, you can actually get real push notifications, even for websites.


I think this should be called "What the web can do someday." I mean, push notifications? If it's not supported by mobile safari, it's a non-starter for actual widespread adoption.


The ticks/crosses on the front page are related to "your current browser" (see key). Click through for detailed stats from caniuse.com.


While this will affect the 44% of users that are locked into Mobile Safari IE6-style, there's no reason not to start adding the features supported by the open mobile platforms and desktop OSes to your web app.


I've been imagining a website lately that displays progress in a similar way for all kinds of things. Sort of like a global to-do list. Should be fun to build.


This is pretty great! Looking forward to more service worker support, for more offline capability and push notifications.


What does the X axis in the "browser support" graphs mean? It seems to go up to 30, and has fractions along the way, both of which suggest it isn't a simple version number. A percentage of something, perhaps?

edit: I guess it's market share? but which measurement?


If you hover over the bars, the "Global Market Share" numbers seem to match up so I guess it's that.


It looks like version number, which seems like a really dumb metric to put on the X axis.


If it's a version number, it seems out of date - Firefox is at 42, Chrome at 46, but the axis only goes up to 30. That's why I suspect it's market share, but that's just a guess.


It's interesting to compare the site opened in Chrome vs. Safari. I count 12 features desktop Chrome supports that desktop Safari does not (on El Capitan). A mobile comparison would be intriguing as well.


Excellent choice of colour and tasteful use of animations =)


I just want to know what the web cannot do today, because I think the web can do everything today.


what a time to be alive!


You can't be serious.


I'm a troll and even I find you unbearable!


Still, people think the web is going to die and native apps are going to rule.


Some people think the web is going to die and native apps are going to rule. I'm not one of them. I'm guessing the "web-based apps are crap" agenda is important to those who stand to lose from fewer people developing native apps (less chance for platform lock-in, fewer projects for native app developers, etc.), and to those who have used web-based apps developed by not-so-capable developers.

That does not mean that web based apps will (or should) replace native apps. There will always be enough applications where native will do the work better, even as processing power grows, but to say the web is going to die is just wishful thinking on the part of those who want it to die. Ain't gonna happen.


>Behave Like A Native App

The keyword is "like".


The Web Platform will always be criticised, and that's a good sign. So many eyes on it are improving it much faster than other platforms.


One of the things the web can do, apparently, is animated transitions with very poor frame rates :-/


Good as a quick reference for the cross-browser state of HTML5 in mobile devices, but highly misleading for building mobile apps with web technologies, because if you plan to build an APP for mobile, chances are you'll end using Crosswalk/Chromium.

https://crosswalk-project.org/


What can Go do that Python cannot? Nothing. Why are there droves of projects that are migrated from Python to Go? Because Go is fast.

Web apps are too slow on mobile.


what does python or go have to do with this web page? what does go have to do with making web apps faster?



