A clean start for the web (2020) (macwright.com)
290 points by dannyow on May 13, 2022 | 195 comments



What we need is for HTML to get going as a hypermedia again, to make the hypermedia architecture viable for a larger set of web applications. It's been stalled at anchors and forms (with only GET and POST!) for decades now. It's astounding how much we got built with just that.

I'm trying to show where it could go with htmx:

https://htmx.org

Hypermedia (in particular the uniform interface) is what made the web special and there's no reason HTML can't be improved as a hypermedia along the lines of htmx/unpoly/etc.

We don't need something new. We need to take the original concepts of the web (REST/HATEOAS/Hypermedia) seriously and push the folks in charge of HTML to do the same. Sure, browsers as install-free RPC client hosts works and is occasionally called for, but there is a huge gray area between "applications" and "documents", just waiting for a more expressive hypermedia.
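
To make that concrete, here is a minimal sketch of the kind of attribute-driven extension htmx layers on top of plain anchors and forms (the paths and ids are made up for illustration):

    <!-- clicking issues GET /contacts/42/edit and swaps the returned
         HTML fragment (e.g. an inline edit form) into the div below;
         no JSON, no client-side templating, just a hypermedia exchange -->
    <button hx-get="/contacts/42/edit"
            hx-target="#contact-42"
            hx-swap="outerHTML">
      Edit
    </button>

    <div id="contact-42">
      Jane Doe (jane@example.com)
    </div>

The same idea generalizes to any element, any event and any HTTP method, which is the sense in which it "completes" HTML as a hypermedia.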


This. This gets us 98% of the way. The people who have to worry about the 2% are not you, because your web app almost certainly doesn't need offline-first or FPS-like user interaction, so in 98% of cases, hypermedia + tech like htmx gets you 100% of the way.

For pete's sake, just let your server serve html. Replace only the parts of the page that need to update… with html. From your server. The poor thing is just sitting there, idling, bored, borderline depressed because it doesn't get much of a chance to do what it is meant to do, which is, you know… serve. "But it serves JSON" … is like buying a Bugatti Veyron, just to park it in your garage and use its exhaust fumes to dry your clothes 6 days of the week, with a 2 minute drive around the block on Sundays.


"“But it serves JSON” … is like buying a Bugatti Veyron, just to park it in your garage and use its exhaust fumes to dry your clothes 6 days of the week, with a 2 minute drive around the block on Sundays."

That is a very creative analogy, but not one I agree with.

My servers should do as little as possible. They should not serve files just for the sake of it, because they have all these shiny serving features.

And I like JSON as it is not as verbose as html and I can directly work with it in javascript. And then update my html where I need it locally.

If you prefer it differently, then this is fine. The web supports many ways.


> My servers should do as little as possible.

Why? It's not like they suffer wear and tear.


One advantage of pushing the computing to the user side is that it reduces the load on the server: fewer requests, less processing and smaller request sizes. Thus it reduces the bill for the developers/companies :)


The server load from a server-side rendered page vs an API call that returns JSON is... very minimal.

Unless you're building at great scale, it's such a negligible difference that it really shouldn't be part of your decision making process, IMHO.


[Edit: I don't want to get into this argument right now]


the original web architecture has sophisticated client-side caching built into it, all modern browsers support it:

https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching

the difference between constructing a string of JSON and a string of HTML is a round off error when compared with network connection costs and data store accesses, etc.
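
as a rough sketch of what that built-in caching looks like on the wire (the header values are illustrative, not a recommendation):

    HTTP/1.1 200 OK
    Cache-Control: max-age=60
    ETag: "contacts-v42"
    Content-Type: text/html

    ...HTML page or fragment...

    GET /contacts HTTP/1.1
    If-None-Match: "contacts-v42"

    HTTP/1.1 304 Not Modified

the browser does the conditional re-request for you, and an unchanged resource costs a handful of header bytes whether the body was HTML or JSON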


I'm guessing you haven't measured it, as IME it certainly is true. Try it, you might be surprised.

Also, SSR'd pages can be cached too.


"The server load from a server-side rendered page vs an API call that returns JSON is... very minimal."

That greatly depends on how you designed your architecture and whether you have just a simple webpage or a sophisticated webapp displaying data in various ways.

My main app is designed to work mainly offline, with bursts of data transfer. All the state is local. Sending html from the server would mean, in my case, that the server would need to know and keep track of all the user states and data.


>One advantage of pushing the computing on the user-side is that it reduces the load on the server:

But we don't need complicated frameworks on the server, keeping track of changes in a shadow DOM and updating the real DOM afterwards.

The server will likely just use some html templates in which it will replace some variables with strings and numbers. Easy peasy, both for the server and for the developer.
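
As a sketch of how little machinery that needs (Jinja-style placeholders here; the file and variable names are made up):

    <!-- rows.html: the server fills in the variables and returns this
         fragment as-is; the browser (or htmx) drops it into the page -->
    <tbody id="rows">
      {% for item in items %}
      <tr>
        <td>{{ item.name }}</td>
        <td>{{ item.price }}</td>
      </tr>
      {% endfor %}
    </tbody>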


Since all the clients have to do the work, doesn't that result in a net increase in electricity consumption? Since most electricity comes from fossil fuels, doesn't that, on net, hurt our battle against climate change?


Servers are probably a bit more energy efficient than most consumer devices, but in general it does not matter if you calculate it at A or at B. Somewhere it needs to be processed.

You were maybe thinking of processing once and then pushing it to thousands of clients? Those scenarios exist, but are the exception and not the topic here.


> Those scenarios exist, but are the exception

More like, they are the rule.


So by spending thousands of dollars on developer time we can save dozens of dollars on server costs? ;)


The developer time isn't higher than if it was server side rendered. And for some applications, like the ones I'm building, it would be impossible to do the work on the server.


I wouldn't say that it increases developer time. From my (sure, limited) experience of building both with Flask (Django-like, Python BE framework) and Svelte, the difference in developer time is negligible, if not in favor of building on the client - the component frameworks bring much more to the table, like scoped CSS, often better editor support for dynamic values in the HTML templates etc.


It is all a question of how big you want to scale it.


Stack Overflow runs SSR with half a dozen servers. How much more scale do you need?


A bit more, if the servers also have to handle media.


Eeeeeeexactly


> For pete’s sake, just let your server serve html

You do realize this means all the user data has to be sent to the cloud? Which is what we want to escape? Hypermedia was created for bots, for discoverability; I don't know why anyone would think this is a user-centric concept.


HTMX is fantastic. Today I built a proof of concept in Django+HTMX+D3.js. Fully reactive, interactive, HTML-first widgets with no npm nor "build" stage. Do anything with good ole HTML templates and just HTMX and D3!


I’d love to see the demo! I’m in the process of building something similar


Htmx is a very well designed library and I've been using it more and more. It's great to have it in my toolbox.

The reason it, and many other cool libraries, can exist is born of the strength of the web, namely a solid, accessible foundation of protocols and formats with extensibility via JavaScript.

This last part is incredibly important, because it gives us enough power to create and explore new things, while the web standards can conservatively adopt new capabilities.

The web is working as intended, and while there are many issues and lacking areas (which htmx illustrates beautifully), we can take a step back and appreciate how amazing and empowering a platform it is.


agreed, fielding specifically mentions scripting as an optional constraint in section 5.1.5 of his dissertation, "Code on Demand":

https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc...

and it has proven to be an incredibly flexible aspect of the web

however, I would note, currently that scripting ability is being used mainly to replace the original hypermedia model of the web (with RPC-style JSON-based applications) rather than enhance it as a hypermedia

we'll see if we can change that


I've always been shocked that HTML forms only support GET and POST and there's zero interest in supporting more HTTP methods in forms even in 2022. If forms supported the whole range of HTTP methods we could have a user facing web that's identical to a REST API, with different Accept and Content-type headers. Imagine how much development time we could save if forms supported DELETE and PUT!
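
To make the wish concrete: the first form below is hypothetical, since HTML today ignores anything but GET and POST in the method attribute and falls back to GET; the second snippet is how a library like htmx fills the gap with attributes (the path and target are illustrative):

    <!-- hypothetical: not supported by today's HTML -->
    <form method="DELETE" action="/articles/42">
      <button>Delete article</button>
    </form>

    <!-- what htmx offers instead -->
    <button hx-delete="/articles/42"
            hx-target="#article-42"
            hx-swap="outerHTML">
      Delete article
    </button>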


Does that save us much time? In the systems I've worked on, the problems of deletions and edits aren't HTTP verbs, they're wild, complex business rules around authentication, authorization, auditing, etc.


it's incredible that we got WebGL, Web Assembly and a million other things that are neat, but not really crucial to a proper hypermedia implementation, but they can't give us DELETE, PUT and PATCH in HTML, even though they are sitting there in HTTP just staring us in the face!

the irony is that DELETE, PUT and PATCH are used today almost exclusively in non-hypermedia JSON data APIs

crazy world!


That's pretty silly. This is like claiming the world would be fundamentally different if we used <em> instead of <b>. Anyone who thinks this is remotely useful is already doing it by either using ajax or by just overloading POST. We do not save development time by renaming a concept.


> Imagine how much development time we could save if forms supported DELETE and PUT!

not much; while you need a little (easily reusable) JS code to use and handle responses to other methods, it's such a tiny fraction of the dev effort that goes into web apps that it makes no meaningful difference.

It would be nice if HTML forms supported other methods for completeness, but it wouldn't save all that much real-world development effort.


The W3C attempted to improve HTML forms with XForms, but the browser vendors are deathly afraid of anything XML-based so it never got implemented.

These days HTML is really only viewed as a payload carrier for your JS application, so I doubt that'll ever change either.


You know you can update or delete a resource by using GET or POST?


You can, but it is not the proper way to use HTTP.


Am I the only one who is not excited by this new web idea? As web developers, we have to deal with major framework and library reinventions every couple of years, and now you are telling me that you are going to reinvent browser rendering engines, which we will then need to develop for and test, on top of your crappy reinventions. Thanks but no thanks. Please just build on top of giants. Don't reinvent everything all the time.


New idea? Hypermedia is from 1965 and has been a thing since 1991: HTML. recursivedoubts' HTMX library is cutting-edge 2005 technology.

To add to both of your arguments, "JS-routers" like SvelteKit/NuxtJS/NextJS are literally reinventing server side rendering for the client to then call the actual server to get data ... to render HTML.

HTMX, LiveView, Livewire, Hotwire etc are escape hatches back to sanity.


> SvelteKit/NuxtJS/NextJS are literally reinventing server side rendering for the client to then call the actual server to get data ... to render HTML.

It's all done in one call - at least in Svelte. You can even render all this into a fully static site.

Meanwhile htmx and the like is the same idea that was popular 15-ish years ago, which died for good reasons.


the reason it, that is the idea of hypermedia, died was because you weren't able to achieve the same level of interactivity as you could with javascript-based applications, even back in 2005

this was due to the fact that HTML stopped advancing as a hypermedia, as I say in my original comment

htmx and other libraries are attempting to address that by pushing HTML forward as a hypermedia, which allows you to implement more sophisticated user interfaces purely in hypermedia terms:

https://htmx.org/examples/

the hypermedia idea itself is extremely innovative and interesting and, at some level, today's javascript applications are new iterations of an even older idea: client-server based RPC applications, as we built back in the 1980s
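
a rough sketch of one of those examples, an active search box, to show what "sophisticated UI in hypermedia terms" means in practice (the endpoint and ids are illustrative):

    <input type="text" name="q"
           hx-get="/search"
           hx-trigger="keyup changed delay:500ms"
           hx-target="#search-results">

    <table>
      <tbody id="search-results">
        <!-- the server returns rows of plain HTML, which replace the
             tbody contents as the user types -->
      </tbody>
    </table>

the interactivity lives in a few declarative attributes, and the server keeps doing what it always did: rendering HTML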


>the reason it, that is the idea of hypermedia, died was because you weren't able to achieve the same level of interactivity as you could with javascript-based applications, even back in 2005

But I neither want nor need much interactivity in Web sites. Interactivity is for applications and applications are much better on desktop or mobile than on the Web.


> HTML stopped advancing as a hypermedia

Nitpick: "media" is plural. The singular is "medium".


Curious about those good reasons. Care to elaborate?


The chief reason was organisational - you needed full-stack developers for just about anything and they had to specialize in your stack - especially the language used on the backend.

While there's no shortage of frameworks on the frontend, it's all still JS/TS, so everyone is using the same idioms.

Other problems:

1. 1:1 mapping of endpoints to presentation. Common cases where this blows up:

A. A list of items which looks different depending on the context or has different styles of presentation switchable via buttons.

B. Two or more different data sources, combined in different ways.

Now you need endpoints for each context(A) or combination(B), which makes a mess in your cache. You also send way more data this way - especially if your users tend to fidget(A).

2. Inherently slow. You either replace whole parts of the DOM, which triggers a massive reflow, or you make smaller changes by comparing new and old HTML, in which case you need to parse both versions and deduce what changed and how.

Unfortunately since this is all just HTML you don't have object references to the data that was used, so you can't employ some of the neat performance tricks modern frameworks use, like detecting a row swap.

3. HTML elements with state, e.g. a canvas, video/audio, file upload input or even a textarea with a selection. You have to store this state somewhere, but sometimes (like with a "tainted" canvas) you can't access it at all due to security reasons, so now you have to cut around it when you're updating.

Selection is especially a piece of work, because browser implementations differ significantly to this day.


Maybe what was a "good reason" 15 years ago isn't anymore, namely the lack in those days of WebSockets and, if I'm correct, widespread Server Sent Events?


I don't think the reasons were good


I have been reading a lot about HTMX and planning to develop an app in it on top of a JSON API. With that being said, do you know of any open source apps that do authentication on top of a JSON API using HTMX? I'm having trouble conceptualizing the structure and flow of data from just reading the docs. Primarily around maintaining state between pages. (Looking to use JWT for authentication.)


htmx is designed to work with an HTML API. Everyone here seems to love it, but I found it forces terribly complicated routing and templating, and IMHO projects grow unmaintainable very quickly. I'm saying this as a person who really tried hard to like it.

You can perhaps use Alpine.js plus sprinkle htmx here and there, to have the best of both worlds, at least on paper. Didn't try this myself.


i'm old enough to remember when everyone here hated it ;)


I used intercooler.js (the predecessor of htmx) a lot to sprinkle interactivity on top of static pages. Back then I remember the attitude was positive for it too. Not for building proper web apps though...


HTMX has been one of the best libraries I've used to date. Love it to bits.


:) glad to hear you are finding it useful


Instead we get Web Components, where HTML is an afterthought. Only years later are we getting things like Declarative Shadow DOM. You’re always expected to write custom JS for every single component on your page, instead of doing something like “add this functionality whenever you see the `foo` attribute on any HTML element”


There are also ergonomic voids in the document approach. Humans don't just use documents; they do higher-order things.


I personally would love a "document web" and an "application web." I've thought of this before independently and talked about it with some people. I think the author is right: the idea of one client application for everything internet-related is a core source of the problems we face with the web.

I like Gemini, a lot, and I think it could serve as a "document web" very well, except it's missing certain document features like italics, bold and superscript. If you want a document format that serves the average user's needs, you need those features, period. With superscript you don't need inline links for citations.

An "application web" could just be a UI framework that renders an easy to write and read markup of some kind and fetches it over an internet protocol.

The problem with the distinction is it doesn't solve another core source of the problem, specifically, the reason document delivering websites send applications instead is that they want to track users and serve targeted ads. What's to stop CNN from just sending a message in your document web browser saying "please use your application web browser to view this page"? And that's precisely what they'll do, and then you're in the same boat we are in right now, minus the complexity possibly, and nobody will use the document web at all. How is this problem solved?


Why does it have to be so siloed? What if I want documents with a slight bit of interactivity? Why must I go through two different protocols? What if I want to read a paper next to the Jupyter notebook that made it? I think it should be the client's job to take whatever slice of a richer universe it wants. This also stops duplication of effort: why maintain separate protocols and clients, etc.? It's just a search issue; there are plenty of text websites out there. Protocols like Gemini I think are a waste of time.


The problem with "one client to rule them all" is a massive lost of consistency, speed/responsiveness, and usability when it comes to documents. On the web, documents can come in anything ranging from the form of a tiny text file to a gargantuan "app" that weighs tens of megabytes and spins up your laptop's fans to render. It also means that the browser isn't nearly as smart as it could be because it has to be a general purpose jack of all trades.

A dedicated document browser would likely almost always be blazing fast, regardless of the machine it was run on and the network over which the documents were transferred. It could have features that would make no sense in a web browser but greatly enhance the experience of reading and navigation. It could reasonably cache nearly everything the user visits (since it's practically all read-only and small) to reduce server load and increase speed. In a nutshell, it could be far better at specifically working with documents than a web browser ever could, even with a laundry list of extensions installed.

Additionally, it would actually be possible to write brand new competing document browsers due to the vastly more simple specification, which is something the web will probably never have again.


I find this compelling, but the lines feel blurry to me. For example, take Hacker News. When I read hacker news, it's both static and simple. But, if I want to make a comment, suddenly I need authentication and a text box. So, does all of Hacker News have to be in an application web to support comments? Do I have to use two different applications to read or submit comments?

Next level of complexity might be form entry. Say I want to buy tickets for a conference, and need to submit my credit card information, phone number, address, etc. We already have authorization from HN, but are the credit card and phone number boxes just plain text boxes? No validation or interactive feedback until I POST? When I enter my address, do I get a map of where that is so that I don't confuse N xx-th St NYC with xx-th St NYC, one of which is in Brooklyn and one of which is in Manhattan?

On the flip side in the application web, when I'm doing my banking. If you go all the way to pixels, how does my accessibility reader handle it?
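
For what it's worth, even a script-free document web could keep HTML's declarative form validation, which covers part of this; a sketch (the exact fields and patterns are illustrative):

    <form method="post" action="/checkout">
      <input name="card" inputmode="numeric" autocomplete="cc-number"
             pattern="[0-9 ]{12,23}" required>
      <input name="phone" type="tel" autocomplete="tel" required>
      <button>Pay</button>
    </form>

The richer feedback described above (address maps, live card checks) is exactly the gray area where the document/application split gets blurry.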


The problem with one client is that the client is made by a corporation whose interests are not aligned well with the interests of the users and of other companies.

One company owning the Web and practically dictating standards and what users can and cannot do on the web is pretty damn bad.


> What if I want documents with a slight bit of interactivity?

Then you use Adobe Flash. See, we used to have a decent technology for when you want "a document with a slight bit of interactivity". It worked very well for this exact purpose. More than 10 years later, browsers' native capabilities, that are supposed to be a replacement for what Flash offered, have still not quite caught up. Moreover, Flash defined a clear boundary between the document and the application parts.

Did the particular (Adobe's proprietary) implementation of Flash player suck? Yes, sure. Could this have been done better? Yes, sure. Is it possible to reimplement a Flash player from scratch within a reasonable timeframe with a small team of developers? Yes, sure, Ruffle[1] is a thing, and it's being actively developed.

I miss Flash. I hope it makes a comeback eventually.

[1] https://ruffle.rs


This sort of "wouldnt it be great to have some other thing, so we can murder the rest of the web platform" is so grotesque to me, just repugnant & vile & cruel.

Flash is a comedically shit technology. There's a couple animation capabilities it had that were nice & simple & people adore it endlessly for that. But Flash hated users, gave them nothing, no power (like a native app) & was a big proprietary anti-hypermedia ball of jank, and it had the most trashfire craptastic lack of accessibility one could ever imagine.

This desire to keep hypermedia down, to create hard walls between different purposes of apps... it feels poisonous & misguided. There is too much of the web today that does abuse client side code too much. But frankly it rarely affects most people, there are still viable techniques for manipulating it (except for Flutter's CanvasKit which is an abomination more cursed than Flash), and honestly, we're still getting better at webdev, still learning, still emerging better libraries & best practices, & generally the urge to do better is here & slowly happening.

There are some really dark takes that we have to save the web from apps. That we should bring back Flash is one of the most unfortunate, least justified, most tragic hot takes I've heard, though.


Your rant is focused so much on the actual implementation details of Flash that it misses the point that the abstract concept of a VM based application container/sandbox could have been a great thing. In fact, it was such a great idea that browser developers were forced into a pretty tough struggle to actually kill it in order to stay relevant themselves. Adobe's poor stewardship sealed the deal, not conceptual shortcomings.


Isn't that what we have? JavaScript is a VM-based sandbox. The difference is that JavaScript has more ready access to the DOM, but Flash was also able to do that, it just used JavaScript as a bridge.


The point is that you didn't need DOM. You had a canvas to draw things on and to handle clicks in. Now you need the DOM either way and it's cumbersome. It feels like trying to build a UI in a word processor. Text nodes are such a terrible idea...


Having a hypermedia standard is universes more interesting & powerful & enriching than what boils down to a way to generate gifs. Computers have dozens of ways to throw pixels at people's faces and none of them are remarkable or particularly notable.

Having a DOM, having structure, having a hypermedium is a source of immense potential & power. For developers, but also, capitally & uniquely, for users too.

This longer top comment talks some to why documents, why hypermedia, is enriching: https://news.ycombinator.com/item?id=31373988


>Having a DOM, having structure, having a hypermedium is a source of immense potential & power.

For documents, yes. For apps those are obstacles. Apps just need something to draw on.


Your statement doesn't resemble any GUI toolkits I'm familiar with. Not Flutter, not Swift, not Gnome, nothing. The DOM closely resembles other UI construction systems & has similar ideas like event handling.

And then the pretense that developers are the only thing that matters. The web is vastly better than apps because of the DOM, because our systems are in a malleable hypermedia that users can modify with userscripts, userstyles, extensions, with whatever manner of accessible browser/user agent they please.

It's such a sad thing to hear people advocating so strongly for the winnowing of possibility. Leaving all responsibility & all power with the app developer sounds perverse to me, sick, unworthy of being called a part of the internet. Not only does "just need something to draw on" not reflect the needs of developers, it represents a mortal threat to user agency.


The very big difference between the web and a proper GUI toolkit is that in GUI toolkits, inline elements aren't a thing, period. Text nodes also aren't a thing and don't mess up your layout when you least expect it (argh). You need to explicitly create a "text view" or label of some sort to display text. GUI toolkits also usually offer layout algorithms that make sense for GUIs. Oh and did I mention that they have sane defaults like not insisting on using the intrinsic sizes of things unless you explicitly tell them to?

Another property GUI toolkits have that the web lacks is that the GUI toolkit is usually made of the exact same kind of code you're writing your app in. There's a much tighter integration between the app and the toolkit. The app can extend it much more sensibly by, for example, implementing custom layout algorithms. With the web, there's this issue of the browser implementing all the layout and drawing and doing all the work and you can only provide input parameters to these hard-wired algorithms — you can't build your own.

Yes, I'm aware of those layout and paint worklets being worked on. Yes, they solve some of these problems and thankfully move the web platform closer to feature-completeness. It's still a mess though.


There's a lot here I think doesn't have much merit; in fact, it's bias & predisposition, a false belief that only simpler/dumber/less is permissible. I disagree & see no basis in fact, no evidence for any of these claims to importance & difference, but I also think nearly no one can claim authority, has any real idea. I don't believe this expansive platform is to our detriment, but I'm also capable of allowing for differing views, wrong though I think they be. Neither side has much to say for itself. The first claim, that inline rendering capabilities are actively harmful, that the web's capability is a point for the opposing side, starts me off with extreme doubt, but in spite of my extreme doubt it's in an unresolved state. We need more & wider possibilities to assess; I don't see that recognition that we need to know more to decide in the web-detractors' allowed-fors.

Where I really want to pull the emergency brake & call for reassessment is when I hear promotion of turning users' experiences into mere pixel-pushing videos. I still can't begin to express enough what a colossal & massive civilizational downgrade it would be to reduce the web to moving pictures, animated pixels, which again seems like the ongoing chorus of this post. Layout can preserve textual information, but the idea that we just need to let devs paint whatever and fuck hypermedia, fuck structured information, fuck text: it's ballistic out of this world demonic. Truly a fallen proposition. There's so many people who recognize for example Flutter & its infernal CanvasKit as the enemy, as a thing no one else can read or understand but which the app can unmediatedly foist upon us. Instead of a shared medium where both sides have respect & powers, there's this ongoing deeply authoritarian undercurrent which has infected the world, that says only the app's concerns matter. That hypermediums are irrelevant: 0% of importance is information & shared mediums, 100% of importance is the top-down imposed experience. Users get no say, browsers get no say. To me, to many, this is as dark & bleak & sad a world as could be imagined, forsaking every gain & advance at interconnecting the world for a ludicrously totalitarian view of what software is for.


You still have the canvas, and it's perfectly easy to draw on that and handle events in JavaScript. There are a number of applications that work that way.

That said I disagree with the idea that using the DOM is bad - I suspect it's cumbersome to you mainly because you're not used to it. The number one benefit of the DOM is that user-facing controls are largely consistent. If I want the user to type text into a text box, I can just use a browser-defined element, and the user will be able to interact with that exactly as they expect. I don't need to reimplement the whole text input and display process, because it already exists in the platform. The same goes for all sorts of controls and interactions, from zooming to select boxes.

Moreover, assuming this DOM is built up in a way where the semantics are embedded in the elements themselves, then it can also be used in different ways. One user might use their browser to render the DOM into something that can see on their screen, while another might use their browser to read the element contents aloud. One person might use their mouse to interact with the system while another might use their keyboard. Accessibility is built into every application built using the DOM - it may not be ideally presented in some situations, and it's still possible to get things wrong, but by default a blind user will at least have a chance of using the system. In contrast, a canvas-based application will need to reimplement accessibility from the ground up (usually in the form of a secondary hidden DOM tree).


Exactly. When I first did a web front end it felt exactly like customising MS Word with VBA. No matter what you put on top of it the roots poke their heads through and it feels like a mess.


Yes, yes. Flash (with AS3) was a first-class web application platform. There were buttons and views and "movie clips" and other stuff for you to place on a blank canvas, all very customizable. Then there was Flex, a full-fledged yet relatively lightweight application framework built on that foundation. You created your layouts with XML, with live preview. There were Android-like layout containers (emulating Android's LinearLayout with HTML+CSS is still not quite trivial in 2022 — the defaults on flexbox are bonkers) and there were ListView/(UI|NS)CollectionView-style lists with reusable cells, among other things. This was 15 years ago! I can't even begin to imagine how many workarounds one would need to implement a list with reusable cells with HTML+JS even if targeting only the latest Chrome and Firefox.

HTML is a document format at its heart and there's no getting around that, but <object> is/was a nice escape hatch.


>Isn't that what we have? JavaScript is a VM-based sandbox.

JS is unfit for the task. It is a dynamic scripting language introduced by Netscape for the only purpose of adding a bit of interactivity to web pages, not to create large apps.

.NET and Java are much better for the task. The VMs support better languages with better libraries and more sane ecosystems. And the performance would be much better than that of JS.

And it is easier to write an application using Xamarin or MAUI than using React.

When you write Java or .NET apps you write to target a computer, which is a much better model for apps than the Web.

With "app languages" you can write apps better, faster while having better performance and more capabilities than using dynamic scripting languages like Javascript, Lua or Python.

Operating systems and browsers are apps too. And we don't use Javascript to write them.

You can write anything in any Turing complete language. But you'll be a fool if you attempt it for anything but fun.


I think what you're describing is mainly your preference - which isn't in itself a bad thing; the lack of real diversity in browser scripting is a problem that is slowly being solved as WASM becomes a more viable target.

That said, some of your statements seem to be simply false. Javascript engines are at this point about as heavily optimised as Java and .NET runtimes, and a quick scan through a few benchmark sites seems to indicate that, with a few exceptions, the two perform similarly well. Certainly, there is no guarantee that a naïve implementation in Java will necessarily be faster than a similar one in Javascript, in the way one might expect with e.g. C and Python.

Moreover, JavaScript is regularly used in all sorts of applications and systems - Gnome, for example, uses JavaScript for various plugins and utilities, an increasing number of applications are written using Electron or other webview-based technologies, even browsers use JavaScript for a significant number of controls (e.g. the devtools for most browsers are written in JavaScript). So clearly a lot of people see value in writing all sorts of applications in JavaScript.

The rest of your complaints seem to be largely your opinion - again, perfectly valid, but other people will have different opinions, and so the value proposition for JavaScript will be different for them. For example, for me, building something in React is a cinch, whereas Xamarin would take a lot more work. And the ecosystem of JavaScript may not be perfect, but it's very well oriented towards building front-end apps, something that isn't as true of Java.


JavaScript VMs are inherently hampered by language design choices in JS. The dynamic typing and prototype based objects limit JIT based optimization for the full JS massively. Java and .NET have fully static typing which allows their JITs to be completely certain when they are translating code. JS JITs constantly have to add overhead in the form of escape hatches to deoptimized code if e.g. expectations about variable types are being broken at runtime.


Yes. The standard itself (the swf file format, the AVM bytecode, etc) is actually open. To their credit, Adobe has all the relevant specs on their website.


Cool, since you are here, is there any Flash editor clone?

That was the stuff: animations, games, apps, all so intuitive.


None that I know of. Of course it would be nice to have an open-source editor as well, but an open-source player is much more important. You can get by with abandonware for the creation side of things, but you can't ask your site visitors to install abandonware that is known for an astonishing amount of scary vulnerabilities (if they use an OS that was supported in the first place).


https://www.wickeditor.com/#/ <-- I give these folks some money on Patreon because I feel like they're trying for the right thing



Looks cool! But I don't own nor plan to own a mac.


> I miss Flash. I hope it makes a comeback eventually.

Flash was great for developers, but terrible for users. The only good user experience was "flash games/animations" and even then they weren't responsive and were a little clunky.

The Flash I remember was bloated "Flash" websites that had no HTML, no URL paths, no SEO metadata, and were terrible for accessibility... but it was easy to develop for, I guess...


Why not just use Java and .NET? They have pretty capable VMs which support lots of languages: Java, Python, Clojure, C#, F#, Visual Basic, C++ and others.


>Why does it have to be so siloed? What if I want documents with a slight bit of interactivity?

You aren't allowed to have that, because it's not profitable for the web developer crowd's bosses. So your "slight bit of interactivity" becomes "several megabytes of surveillance and advertising".


What if we rebel and make a new Web? If it is interesting enough, useful enough, both developers and users will come.


I think it's point 2 in the article: that it shouldn't be compatible, as you'll end up having to implement all of Chrome again.

And the anecdote about adding the tracking code for one thing, but all the other market segmentation information getting turned on afterwards because it can.

But yeah, I do want comments on my blog posts, and hn linking to docs.


So are you saying you just want the web as it is? If not, what would you change? How would you prevent it from becoming what it is now?

I don't see why a document couldn't have a link to an application that opened next to it, so that you could view documentation and an application together. We should utilize the organizational ui paradigms built into our operating system, not use the system as a launcher for Chrome.

The effort is being done whether it's all in one application or split into two.

It has to be siloed because if it isn't, you won't be able to find the document you want to read without running application code, as we see with the modern web. It's not a search issue, I find the documents I want to read just fine, it's just that they come with megabytes of executable code I don't want to run.

It would be nice if a client could only implement the rich stuff that it wants to, unfortunately, when the rich stuff is all different everywhere, it means only clients that support all of it get used, hence the current state of web browser development.


The web is about fine as it is.

The changes we need are around people, improving liberal arts education; and ownership, having government-run alternatives to most things like search, login, and payments.


Where would https://ciechanow.ski/mechanical-watch/ fall? In the document web or the application web?


It can fall under any of these with minimal changes, so your question is not very meaningful. I'd say it is better suited for the application web though, because you expect interactivity. The point here is that there are not many documents that do necessarily need interactivity.


But that article is exactly what we want on the internet. It's the kind of interactive teaching demonstrations that forward looking educators have been thinking about for decades. It's not bloated or overdone; it doesn't take up significantly more processing power; it works far better than a traditional article ever could. This is what we desire the internet to be like.

It's because of articles like this one, or 3Blue1Brown videos, that I feel like the idea of "the document web" is really a step backwards: the internet is not just a glorified transmission method for paper anymore. We have computers, which are able to do so much more than paper ever could. To not exploit them for their capabilities, simply due to fear of exploitation, is far worse. How much worse do you think https://thebookofshaders.com/ would be to learn from if it couldn't include live editable demonstrations? To divide up the web into "interactive" and "non-interactive" content is to remove the ability for creators such as the author of the Mechanical Watch article to add progressive interactivity to their documents, remove the ability for learning resources to fully utilize the power of computers, to remove the ability for small fun things to be added to sites such as a Konami Code easter egg or https://bruno-simon.com/'s interactive website. And before you reply with something like "these would all go on the interactive web", my point is that dividing the web like this would not only be pointless but would also be ultimately harmful to the creation of things such as these.


Don't get me wrong, I like interactive documents a lot, but they also take a lot of time and effort to produce and thus will remain a minority. Texts greatly outweigh every other medium primarily because they are very easy to produce and distribute.


> they also take a lot of time and effort to produce

This is a case of bad tooling, not an intrinsic fact.

https://www.joshwcomeau.com/css/understanding-layout-algorit... is an article that could have easily been written without any interactive components. All the author did was replace traditional code blocks with an embed to a web playground (e.g. by s/Code/Playground/g in a MDX file). Ta-da, the demonstrations are immediately interactive.

> and thus will remain a minority.

Not if we put in effort to encourage them. On the other hand, if we put in effort to _discourage_ them (say, by splitting the web into documents without interaction and apps with interaction), they will disappear completely. It is not worth sacrificing these creations simply so that we can remove a few bad actors. (Mind you, there are many ways to act badly in non-interactive ways too. Image watermarks, embedded advertisements in videos, embedded advertisements in ASCII text...)


> All the author did was replace traditional code blocks with an embed to a web playground (e.g. by s/Code/Playground/g in a MDX file).

Well, not surprising given that the article was about the web technology (which powers the application web, of course!). I might have been convinced otherwise.

There is indeed some negative feedback loop going on here. But I think the cost to achieve interactivity has been decreasing over time (compare with, say, 1990s interactive CDs) and the gap is still very significant, even with very favorable conditions (nowadays you can practically have interactivity anywhere, anytime!). I'm happy to be proven wrong, but for now I'm not optimistic about that.


Sounds like more of an indexing problem than anything else. Add a no-js switch to our search engines and you can read all the plain text your internal organs can handle.

I tend to like today's web (a bit too corporate for my taste, but I like the potential for variety). But there's no reason the modern web can't look like the old web with the right filters.


It's not just about finding just text to read, it's about finding the documents you want to read and not having to run arbitrary application code to read it.


> I like Gemini, a lot, and I think it could serve as a "document web"

A document web is dead in the water if it does not at least support the things that are common with physical documents. That means inline images and at least some control over layout.

Gemini throws out way more than what would be justified with the document/application distinction and as a result doesn't have a chance of meaningful adoption. Maybe that's OK for the people behind Gemini, but that still leaves the role of the "document web" for the rest of us.


The distinction can be maintained easily by limiting the application client in its ability to display fancy text. In fact, it should have very little support for styling, and layout should be restricted to some nested grid-like dynamic system without pixel-perfect control. Part of the predicament of the current web is the expectation that you get fancy corporate designs reproduced pixel-perfect everywhere. Break that for applications and you will limit the viability of reader applications for content that should be served as documents.

Imagine a browser that throws out almost all CSS the moment it sees a form tag or a line of javascript.


I’ve been thinking about this as well. Something like an application markup language.

index.aml with an application object model. No idea what it would look like, but I’d love for html to just be allowed to be html.


> except it's missing certain document features like italics, bold and superscript

There are ANSI escape codes for these features. It's not missing.


Those don't work on every client.


You solve this by leaving an ad-supported business model in favor of asking for donations.

Write a document website with a Patreon like gwern.net instead of some bloated ad-sponsored clickbait like BuzzFeed.


I think the web is mostly good the way it is and I don't want to see it re-invented. In fact, I hate Flutter because it's not "webby". It renders all content and controls using Canvas, which means all there is on the page is pixels, which means no accessibility since there is no standard structure (the DOM) to dig through to find the content.

You might say there will be solutions like ideas to augment the canvas with meta data about what's in it but that IMO misses the point.

As a user I want to be in control of the data that's on my machine. With a standard like HTML, it allows me, as a user, far FAR more control than a native app. I can use userstyle sheets. I can write and/or install extensions that look through or manipulate the content. None of this is possible if all I get is a rectangle of pixels.

Translation and/or Language learning extensions would never have happened on native platforms because it's effectively impossible to peer into the app. Whereas it's super easy to peer into HTML based apps.

So, I like that most webapps work via HTML. I also like, that unlike most native UI frameworks, HTML has tons of relatively easy solutions for handling the differences between mobile and desktop. I've made several sites and web apps and while there may be a learning curve it's 2-3 orders of magnitude less work to get something working everywhere (desktop/tablet/mobile) with HTML than any other way of doing it.

Changing to 2 modes, document mode and app mode, doesn't make sense to me. It'd be a net negative. People who want this I think want to take control away from the user. That's clearly Flutter's goal. All that canvas rendering means it's harder to select text, harder to copy and paste, impossible to augment via extension. It puts all the control in the app dev's hands and none in the user's.

So no, I don't want the web to be reinvented. It provides something amazing that pretty much all attempts to replace it seem to be missing. Its structure is a plus, not a minus.


I have yet to see a suggestion to "reinvent" the web that expands on publisher or end-user capabilities, rather than taking them away. The end user doesn't want Geminispace, they don't hate that web publishers get to control layout and design, or that sites can be more complex than even the early web allowed. They don't want to write their own clients or stylesheets, they don't want the web to only be strictly static documents, with "apps" quarantined elsewhere. They don't want a different markup language. They don't want to destroy social media and force services not to use algorithmic feeds or discovery because it's "mind control." They don't want to kill all advertising and commerce on the web in the belief that content creators should be forced to work for love instead of money.

I mean if modern tech people had their way, the web would have never been anything but a bare data API on a blockchain, and no one without at least a bachelors' degree in CS or engineering would even know about it. And oh yeah, you'd need a license to publish anything.


> I mean if modern tech people had their way, the web would have never been anything but a bare data API on a blockchain, and no one without at least a bachelors' degree in CS or engineering would even know about it. And oh yeah, you'd need a license to publish anything.

They're looking for a technical solution to a social problem. They miss the Web as a space for people only like themselves. Having to share the web with normies who don't create out of _love_ or don't spend hours researching a small change like they want everyone to means folks different from them end up inhabiting the web. It's thinly veiled gatekeeping, a desire to make the web a space where only folks like them would inhabit.


No, it's not. I just want to read text-based content without downloading Google Analytics + 20 other trackers, 5 JavaScript libraries, 7 frameworks of some kind and 20 ads. It's inefficient. It drives obsolescence of hardware long before it is unusable (second only to gaming, but at least gaming has a user-driven motivation, not abuse by tech megacorps).

Assigning a gatekeeping motivation to it is a nice strawman, but luckily not based in reality


> I just want to read text based content without downloading google analytics + 20 other trackers, 5 java script libraries, 7 frameworks of some kind and 20 ads. It's inefficient.

I agree. I have JavaScripts disabled, but it may still try to download CSS, pictures, and other stuff, and might try to hide the document, etc.


Then don't? If you want to recreate Gemini on the web, just use curl to download assets and consume them that way. This doesn't seem to be about you or the GP specifically consuming the Web this way, this seems like people who want to create a _network_ of other people who consume the Web this way, which is essentially the same as gatekeeping.


Not sure if either or both of you are lumping the article author in as “modern tech people.”

From the article:

> I think this combination would bring speed back, in a huge way. You could get a page on the screen in a fraction of the time of the web. The memory consumption could be tiny. It would be incredibly accessible, by default. You could make great-looking default stylesheets and share alternative user stylesheets. With dramatically limited scope, you could port it to all kinds of devices.

> And, maybe most importantly, what would website editing tools look like? They could be way simpler.

And:

> There are a lot of other ways to look at and solve this problem. I think it is a problem, for everyone except Google. The idea of a web browser being something we can comprehend, of a web page being something that more people can make, feels exciting to me.


On the other hand one should see how the masses of normies, uneducated about how even a single web page works, keep consuming the wrong stuff, creating a huge incentive for companies to ruin the web we have/had for the people who value consent and freedom. Normies just keep consuming and enjoying away their own freedom and the freedom of others, by giving power in terms of money to the wrong people. We have seen it again and again with companies like FB, Netflix, Google and so on, which create a dystopia of not being in control of what is done with your data, and of spying on us everywhere we go on the Internet. The normies, who consume without worrying, are empowering them.

So yes, it is some gate keeping, but maybe that gate keeping is justified, in order to protect our freedom.


> They don't want to kill all advertising and commerce on the web in the belief that content creators should be forced to work for love instead of money.

In the heyday of the web, people weren't doing anything for money and it created good content. Where is the good content now? A billion useless "Best $product to buy in $current_year" articles with 20 affiliate links to drown the web in SEO spam. Commercialization is a race to the bottom. Good art and literature was created by people who had an urge to create and not by people who had bills to pay.


The good places of the web still exist. Nobody is stopping anyone from participating in and enjoying them.


If you pour one drop of piss into a pool of water, it becomes piss. If you pour one drop of water into a pool of piss, it does not become water.


> They don't want to kill all advertising and commerce on the web in the belief that content creators should be forced to work for love instead of money.

Real artists have day jobs.


> end users don't want to write their own stylesheets

They probably do, but not at today's complexity of CSS and not in constant defense of every website with atrocious over-design.


> The end user doesn't want Geminispace, they don't hate that web publishers get to control layout and design, or that sites can be more complex than even the early web allowed. They don't want to write their own clients or stylesheets, they don't want the web to only be strictly static documents, with "apps" quarantined elsewhere.

I do not agree. Many users do want it. The problem is that web browsers are not written for advanced users.

(Furthermore, there are other protocols for other things, such as IRC, NNTP, etc.)

> They don't want a different markup language.

The actual problem is that even if you use a different markup language, you cannot easily serve it and allow end user customizations to decide how to display it (possibly using a more efficient implementation than the HTML-based one), and you will be forced to serve HTML instead, making it more difficult to write an implementation that does support the other formats.

You could use <link rel="alternate"> to link to the source document, or you could have my idea of the "Interpreter" header, which also allows you to polyfill picture/audio/video formats in addition to document formats.
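
For the first idea, a small sketch of what that could look like in the page head (the types and paths are only an example):

    <link rel="alternate" type="text/markdown" href="/article.md">
    <link rel="alternate" type="application/json" href="/article.json">

A client that prefers one of the alternative formats could then fetch it directly instead of the HTML.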


>I do not agree. Many users do want it. The problem is that web browsers are not written for advanced users.

How many are 'many?' I'm not aware of anyone not a programmer or web developer who even cares about any of those things. Perhaps I should have been more precise and said "the average end user." I mean, even most people on HN who talk about Geminispace just complain about how restrictive it is. It's a niche within a niche within a niche.

And I think you're just proving my point here - all of these are problems that only a subset of tech people even have.


> And I think you're just proving my point here - all of these are problems that only a subset of tech people even have.

Yes, but can't you have many possible computer programs, that different users may prefer? Shouldn't you be able to make multiple kinds, whether a lot of users want it or only a few users? Unfortunately, the existing complexity and mess of WWW makes it difficult.


> As a user I want to be in control of the data that's on my machine. With a standard like HTML, it allows me, as a user, far FAR more control then a native app. I can use userstyle sheets. I can write and/or install extensions that look through or manipulate the content. None of this is possible if all I get is a rectangle of pixels.

Not only are you in a tiny minority of users who wish to do things like this, but most web applications out there go out of their way to stop you from doing those things and creators would almost certainly rather have a UI platform that did not allow you to do those things.


> Not only are you in a tiny minority of users who wish to do things like this, but most web applications out there go out of their way to stop you from doing those things and creators would almost certainly rather have a UI platform that did not allow you to do those things.

I am also one who wants to do things like this. Web browsers must be designed for advanced users who are assumed to know what they are doing better than the web page author.

It is worse. Even if I am an author I cannot easily use a format to allow end users better customization if they provide their own software to do so; they want to insist that you do it by yourself, regardless of what the end user wants.

However, it is possible to use mainly the existing HTML standard, to make something that would allow more user control.

Some possibilities include:

- ARIA mode. Use HTML and ARIA to display the document, using user-specified styles and behaviours, instead of using the CSS included in the document (with some exceptions, e.g. if it specifies a fixed-pitch font, then it can use a fixed-pitch font).

- Interpreter header. If it were implemented even in common web browsers (without requiring TLS), then it can allow to serve any file format and if the client software does not understand it, it can polyfill the implementation.

- Use of new HTML attributes, e.g. feature="usercss", feature="uploadname", rel="data", <html application>, etc. Existing implementations would ignore them, so it does not break compatibility.

- Use of existing HTML features, e.g. <link rel="alternate"> (to make available alternative file formats), etc. Clients that do not implement them can ignore them.

- Design web browsers for advanced users. Many Web APIs will be implemented differently (or not at all), e.g: when asking for a file to upload, also allow the user to override the remote file name, and may allow a system command line pipe (like popen) specified; when requesting microphone or camera access, the user specifies the source (again using popen, which may be used even if the user does not have a microphone/camera); etc.

- Other features in an improved web browser, e.g. key/mouse quoting mode, manual/auto recalculate mode (manual mode also prevents spying from unsubmitted forms and stops autocomplete from working), save/recall form data on local files (as a user command), HTTP basic/digest auth management (also as a user command), script overriding (also mentioned in an article written by the FSF), etc.

- Request/response header overriding option for user setting. This makes many other options unnecessary, because you can use this option instead, e.g. language setting (Accept-Language), tracking (DNT), JavaScripts and other features (Content-Security-Policy), cookies (Cookie, Set-Cookie), HTTPS-Everywhere (Strict-Transport-Security), etc.

- Meta-CSS, only available to the end user, which can make CSS selectors that select CSS selectors and properties and modify their behaviour.

- Improved error messages. Display all details of the error; do not hide things.

- Some properties may work differently, e.g. accessing the height of the view could return Infinity instead of the correct number (or the property could have a getter that throws an exception instead of returning a number).

- Structure the design of the web browser the opposite way from usual: the components (HTML, individual commands within HTML, CSS, HTTP, TLS, JavaScript, etc.) would be separate .so files (extensions) tied together by a small piece of C code that a user may modify and recompile, or rewrite entirely (changing the components too if wanted), to build the web browser the way they want.

- External services that provide information, and APIs for accessing the APIs of other services (including via command-line programs), which users may access and use.
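
To make the HTML-attribute ideas above concrete, here is a rough sketch; the feature= attributes and rel="data" are hypothetical proposals from this comment, not part of any standard, while rel="alternate" and the behaviour of ignoring unknown attributes are existing HTML:

    <!-- hypothetical: mark the author's stylesheet as optional, so user CSS may replace it -->
    <link rel="stylesheet" href="site.css" feature="usercss">
    <!-- existing HTML: offer the same content in an alternative format -->
    <link rel="alternate" type="text/gemini" href="article.gmi">
    <!-- hypothetical: point at the raw data behind the page so users can query it with their own tools -->
    <a rel="data" type="application/x-sqlite3" href="records.db">underlying data</a>

Because browsers ignore attributes and link relations they do not recognise, markup like this degrades gracefully in existing implementations.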


This is coming down to vendor control. My industry uses Adobe more than I like, and Adobe knows Photoshop is something people will try to pirate as long as it is a distributable application. Big business wants the next web so they get to say who loads their proprietary tools. Coming soon, you will be signing into all 'your' applications.


The only reason you can dig through the data now is because the browser gives you tools to do that. The primitive view source or the more advanced dev tools. A web where everything is rendered to canvas would still need to get some data and there is no reason why the browser would not give you the tools to inspect that data.


I think the point is that if the only API that's built into the browser is the canvas, then the browser itself can't know how you're actually structuring your data at all. So the most debugging you can get built in directly is telling you which lines and shapes are currently being rendered. Whereas currently, if you use the DOM, the browser can show you the full rendered tree, show you which elements have events attached, allow you to manipulate elements by putting them in specific states, or just let you delete and modify them in place. The same goes for CSS.

In practice, I suspect that if this route were to become more common, then frameworks would provide these sorts of tools directly. The browser still couldn't directly see the component structure of your code, but a framework might provide a browser extension that it itself can hook into, so that you can inspect it as it's running. The problem then is that each framework would have to do this separately, because there would be no single base component structure that the browser recognises.

Essentially, you'd go back to developing as it's done for desktop environments - Qt or GTK might provide their own debugging environments, but the operating system itself is generally running at a much lower level, and so can't help you much if you want to know why "foo" is being rendered instead of "bar".


Do we really want to continue having "a web" ? Remember, "the web" was never designed for what we do with it today. If you're thinking of overhauling "the web", I think it makes sense to start from first principles, and build something that meets today's needs with the least amount of unnecessary bullshit.

You have to start from the user's experience, because nothing else matters about a computer other than what you use it for. What do people actually want to do with a computer? It's going to be very hard to throw away all your assumptions. Don't assume that they want or need what we have today. They may not need the internet, or even a visual interface.

After you understand the problems the user wants solved, then you can go about building products that address those needs. You may think technology is clay that we can shape to fit a user's needs; but what if what they were best served by was metal, or glass, or paper? We must look through all technological mediums and components to provide the best solution.

Many of the developers today literally have never lived their lives in a world without certain inherent technological expectations and limitations. We need to open their minds and show them that they really can make literally anything with technology, and that it doesn't have to resemble anything that we have today. If you think "oh but we can't make change too radical, it wouldn't work", that's what they said before we ended up with the technology of today! The only box you're limited to is the one you put yourself in.


What feels unique about the web, what nothing else remotely resembles and what makes it so much more compelling to me than everything that came before, is URLs. The web is a network of online resources.

There's HTML, an OK, fairly adaptable hypermedia markup, and CSS, and scripts, all fair or better. But the premise turns everything else we do in computing on its head: servers send us resources, and we have an (imo pretty great) engine to render & execute our hypermedia.

I see endless torrents of anti-web attitude, from people who want more content-centric systems, from people who want applications. But almost never can the critics identify & mark out what makes the web better, different, & so flexible as to have risen into all-pervading ubiquity. By all means, consider first principles. I think you all have a lot of very important & enabling concepts you'll need to recreate along the way. The web's idea of URLs & resources is one I think we'll have a hard time replacing.


While there are some good ideas, there are also many bad ones. (URLs are one good idea.) It could be designed better. Perhaps I can mention this quotation (which wasn't about HTML, although some of the bad things happen with HTML for similar reasons):

The sad fact of the matter is that people play politics with standards to gain commercial advantage, and the result is that end users suffer the consequences. This is the case with character encoding for computer systems, and it is even more the case with HDTV.


It mostly works pretty great. It's immensely popular to hate on.


> Do we really want to continue having "a web"?

Speaking only for myself, I'm not at all invested in having a website. Yes, I want a publicly accessible site of my own on the internet with which to share documents, and I currently do so over HTTP, but I would be equally happy to share them over Gopher or even anonymous FTP.

However, I'd rather pay NearlyFreeSpeech.net for hosting than run my own VPS or self-host on a machine in my basement because I'm lazy, so I'm stuck with HTTP because they explicitly don't support anonymous FTP (and by implication don't support Gopher or Gemini).


I dislike FTP and think that it is not a very good protocol. (There is one advantage of FTP over HTTP, which is that it has directory listings, although you could also do that with HTTP, possibly with a new file format (e.g. application/httpdirlist) for directory listings.)

You can serve other file formats (including text/gemini) over HTTP. Although I could manage to make the web browser I have support text/gemini files over HTTP (and local files), this isn't commonly done, and the protocol does not support file format polyfills (although I have a proposal that could make it work).

However, that isn't good enough if you want to serve NNTP or IRC or Telnet or something else like that. If I want to serve discussion forums, I will want NNTP. If I want fast communications, I will want IRC. There may be other possible protocols too. We shouldn't force everything into HTTP(S); it doesn't fit properly. (I do have a NNTP server, as well as HTTP and Gopher.)


> the protocol does not support file format polyfills

It does however support file format negotiation, so you can fall back to a different format where it makes sense.


Yes, you can use the Accept header; however, there are some problems with this:

- The default values are often not very useful; they might not list all supported file formats, and may have wildcards for formats that are not supported, too.

- The Accept header (and other headers) are often misused, which also makes it inaccurate. And sometimes if it is a picture, video, etc, the Accept header might make it difficult to load directly.

- The Accept header cannot distinguish if JavaScripts are enabled or disabled.

- If the end user wishes to save to disk (whether using the browser's save function, or using an external program such as curl or wget) or view source of the original file, this will not work. (Of course there is also possibility that you might want to view/download the alternate file instead, so that still will need to be handled, too. But if you want to view the resulting HTML tree then you can use the web developer tools, but that doesn't do all of the things.)

- See https://wiki.whatwg.org/wiki/Why_not_conneg for some more comments. I do not agree with everything mentioned there, but there are many good points too (some of which are similar to the ones I mentioned above). (Another note about video codecs: there are differences other than the codec that an end user may want, such as the picture size.)

My idea is adding an Interpreter header; I wrote a document with my ideas of how I would intend it to work. (This can be used both for server-side and client-side interpretation, and also allows interpreter caching (and possibly also caching the internal representations) if the optional hash has been included in this header value.)

Also note that the HTML <link> command is also available in HTTP (as the Link header), and can be used with a response of any file format. (I have seen it used in one case for applying stylesheets to plain text files; I do not know what other uses work in which web browsers.) (I also have my own ideas about additional links, such as rel="data" to link to the underlying data of a web app in case the end user wants to use their own software (e.g. SQLite) to query it. These can be used even without changing the web browser, but the Interpreter header requires the web browser to support it.)
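
For reference, this is roughly what the existing negotiation mechanism looks like on the wire; the host and file names are placeholders, and the Interpreter header proposed above is not shown since it is not a standard:

    GET /article HTTP/1.1
    Host: example.com
    Accept: text/gemini, text/html;q=0.5

    HTTP/1.1 200 OK
    Content-Type: text/gemini
    Vary: Accept
    Link: <style.css>; rel="stylesheet"

The Vary: Accept response header tells caches that the representation depends on the client's Accept header, and the Link header is the HTTP form of <link> mentioned above (how widely individual relations like rel="stylesheet" are honoured varies by browser).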


You want a publicly accessible place for your public persona. You have got that. Why do you care about the protocol it uses? Why would you prefer a protocol that no one else uses? That no one has a client for?


The fact that I want to is reason enough for me. Why does that bother you? Am I somehow harming you?


> If you think "oh but we can't make change too radical, it wouldn't work", that's what they said before we ended up with the technology of today!

No, the technology of today developed extremely slowly for the most part, it evolved in tiny, tiny steps, sometimes taking centuries.


I think the post misunderstands the situation.

It's not 100% clear to me what the author's thesis is - but I think it's that the ever increasing complexity of web standards results in bad user experience.

I don't think that is true - or, as they say, correlation does not equal causation.

The web used to be smaller and the corporate world didn't know what to do with it. It was creative and original. Eventually corporate america figured out what to do with it, and now it gets value extracted in a very impersonal way - as if your fave underground punk band sold out.

Complexity didn't cause this. It might be a symptom of this, but if the complexity went away, the corporatization of the internet would still be there - the suits understand the internet now, and there is no stuffing that back in the bottle.

If necessary, it would be entirely possible to recreate Facebook in HTML 3.2. It's a social phenomenon, not a technical one.


I agree. There have been numerous posts along these lines which seem to misunderstand the issue.

The post's hypothesis is that [something is wrong with the Web] because [browser functionality]. But it's not browser functionality that is the issue here, it's the site author. The premise of the problem is wrong, thus the proposed solution makes no sense, and it will fail.

This is easy to demonstrate. Install a second browser. Turn off JavaScript completely. Disable cookies completely. You now have your minimal document viewer. Done. No need for a new protocol, no need to create anything.

Now feel free to view all your static-site destinations in this. It'll be rocking fast. Then use your regular browser for everything else.

I suspect you'll find that the browser is not the problem. The problem is that the sites you want to visit use JavaScript. You'll discover that your alternate browser is basically not used. Oh and that the sites it can use are equally fast in your main browser.

The browser is not slow. Sites _choose_ to be slow as the cost of other things they want (mostly advertising). Any site you think is slow in your browser would not be in your other Web.

Do new developers choose inappropriate tools to build simple stuff? Yes of course. Do they go with what they know best, yes of course. Do most senior developers do the same thing? Of course yes.

Is the solution a new protocol and a new browser? I fail to see how this would change anything.


This times a hundred. It's not that the complexity is the reason, it's simply the audience changing, whether it's the people creating the work or the people consuming it. Look at any online content platform like Medium or Quora or Clubhouse. Quality starts off high, but plummets as more people start finding/using the platform and more businesses see it as a way to make money. Or any entertainment medium. It's not unique to the web; film, television and video games all went through a similar evolution.

People miss the old days, without realising that those days only happened because no one saw the monetary potential in a medium, and the few people with a deep interest in the field (and enough resources to work for free) created stuff as a hobby.


The problems with the modern web are hardly technical. We have the technology to make it better.

There is a single major problem: advertising. It is the default monetization path for most of the web, and it has become an unstoppable force of sites that are hostile to the user, siphon as much user data as possible, and use every marketing tactic available to trick the user into clicking on ads or agreeing to being tracked.

It has spawned a multi-billion dollar market of shady data brokers that sell user data to the highest bidder, and built an industry of adtech giants.

It has required passing laws to protect the user, and even those were late, not strict enough, impossible to enforce, and only available in small regions of the world.

It is pervasive and offensive, and this has to change. Progress is very slow on this front, but we need new monetization options that are as easy to use for both the user and content creator, and respect the user and their privacy rights.

Say what you want about Brave Inc., the BAT[1] is the most compelling of such alternatives. It currently uses user-friendly ads, but those can be bypassed by funding the wallet instead.

I'm curious about other alternatives to advertising, not about another web framework.

[1]: https://basicattentiontoken.org/


ask for donations.

Make good work, people will pay to support it. Works for NPR.


In theory I love the idea of a clean start for the web. I agree with the idea of a separation between "document web" and "application web". The problem, though, is that if we were to do away with the web today and invent a replacement I am entirely convinced it would be utterly corrupted by corporate interests.

The "document web" would be something very much like Google AMP. Those documents will need to be indexed so Google will get to choose what it looks like, what it does and how it tracks you. The "application web"... well, Apple could probably graft native app functionality into a URL, like App Clips. Google would do the same for Android, probably. Microsoft similar. Those three companies control the OSes everyone uses, so they'd get to choose.

And so on and so forth. More and more it feels like the web we have today is a miracle even though it's broken in many ways. We're very lucky to have a platform as open as the web is and we take it for granted at our peril.


I feel the same way and share the same concerns. However people have already experienced the open, free web. I don’t think you can take that away anymore.


It just takes a generation. We’re already on our way there.


"But the early web wasn’t fun in many conventional ways - you couldn’t quite create art there, or use it as much more than a way of sharing documents."

This person needs to go look at some Geocities archives. People were trying to create cool-looking pages as soon as we had the IMG tag. People were using tables to organize a bunch of images in neat ways. People were doing weird little hypertext art things.

They were not using the technology in the "right" way. Nobody gave a shit about the right way. It took something like twenty fucking years for CSS to make it easy to vertically center shit the "right" way after persuading everyone that using tables for anything but tabular data was "wrong". If we waved a magic wand and suddenly everyone just had this theoretical Markdown-centric browser? People would immediately start looking for clever ways to abuse edge cases of its implementation to make their pages pretty. And people would start making enhancement requests, some of which the browser makers would inevitably implement, and... eventually we're right back where we are now.

I am not a fan of the modern corporate-dominated web, neither am I a fan of the modern world where more and more apps are horrible kludges of JS and HTML that have absolutely no care about the UI conventions of their host platforms, and are a couple orders of magnitude more resource-hungry than their equivalents that use native widgets and compiled languages. (Concrete example: Slack's 440 megs on disc, Discord's 370; Ripcord, a native app that talks to them both, is 40.) But thinking people will happily go back to a world with little to no control of the presentation layer after decades of struggle for more of that is a pipe dream. We all quit using Gopher the moment we had a web browser on our systems.


It's a beautiful thing. I remember the days you speak of. I also remember the leaps and bounds people took to bend myspace to their will. All for expression. It's been a beautiful and very human journey

I'm not sure we would have gotten where we are now without those limitations for people to conquer.

There's nothing in http that requires a browser. We could have released multiple app platforms by now. But there's a magic in html, css, and js that is taken completely for granted. I remain a fan.

...Edit to add that we have added these platforms. And they can't come close to what we have on the web.

Roku, Apple TV, Nvidia Shield. These are application platforms. And they're OK for what they are. Android and iOS are the most successful connected platforms we have, and they're impressive, but I still spend more time in my browser than anything else on my mobile device.


still not easy to vertically center shit, especially variable sized shit. don't forget <marquee> and finding ways to make text blink. and gaudy section separators. AND MUSIC. when you went to my geocities app you got the 'low ride er' mariachi theme. that was sweet. pissed off all of the kids in my class while i was developing it though because it would only get through the first few seconds before i was making changes.

> thinking people will go back to a world

it's not even going back. the main portal to the web is now apps on phones. the web will continue to evolve, but i can't imagine a scenario where it forks.

the anarcho liberal wild west days of the web are behind it, unfortunately. i lament its passing along with a whole host of others who grew up in the land of bbs, usenet, and the like.


> This person needs to go look at some Geocities archives.

or some early net-art featured later on things like rhizome.org

While reading, i was thinking: either this person is much younger than me or we live in two parallel universes :)

The great thing is: i still seem to agree with fundamentals… :)


I share your views on the topic. Early days of web were amazing from the perspective of people trying to express themselves despite the technological limitations. <img> and animated gifs, tables, CSS, not to mention Flash... Obviously romanticizing to an extent, but to me personally there was value in people striving to set their content and themselves apart - vs today when it's so much about "streamlining" experiences and conforming to current trends.

There is a fundamental conflict in human nature between the need for freedom of expression and need for structure - and a balance to be found. The latter won in Facebook vs MySpace, and while I liked the clean UI and structure to content that Facebook brought, today I would very much prefer to again see chaos of people's expression in MySpace pages than chaos of bland, ad-riddled, and structurally overpopulated Facebook profiles.

But in what I said above, the shift is actually between something else entirely - from web being individual, to being corporate. It seems to me that we got to where we are on the web solely because of hyper-capitalism seeping into it, like into all other pores of society. There is a string of reasons Slack is 440MB - it starts with it being pushed to use a combination of technologies that are deemed to ensure quickest iteration, time-to-market, interoperability with user tracking systems etc (in this case, Electron / JS for multi-platform coverage etc); then you have more and more people using those technologies because they are sought after; then companies want to use those technologies even more everywhere because it means you can grow your teams faster on the market. Btw, all exactly the same reasons as to why almost every "website" today is a React app. All the while, the more information you can collect on people, the more attractive you are, even if you have absolutely no need for it or way of using it in your product for now - so almost every website also comes with 50+ XHRs on load to every imaginable tracking service. All in service of the marked words above - "growth", "market", "faster", "user tracking" - since those are the ones that are exclusively rewarded (not even with actual money any more - with market valuation and other perverse and illusory constructs).

So it's not the technologies or the "web" themselves that are the problem at all - it's that hyper-capitalism hijacks them, requires targeting widest market, being quick to monetize on it in any way etc - which means experimentation and individual expression lose value, and any benefits for the user that don't obviously result in benefits for the corporation (e.g. user's disk space usage for banal Slack example) are cut off from consideration. It's only expected then that, when you look from individual perspective, the web is not individual-friendly anymore.


(The)Facebook used a sneaky trick to create scarcity by requiring you have an educational email address. It was an elite club, compared to the normies who had to use MySpace.

The force of FOMO that built up behind that wall, once removed, flooded the population.

You can say it was their style or whatever, but I think it was that one move.


... so why doesn’t Hacker News do any of that? Isn’t it ironic that it was built by a Silicon Valley VC?


Likely for the same reasons that many of the architects of social media UX design and recommendation algorithms don't let their kids use those apps.


It's an intentional design choice on the part of the author to only attract a certain kind of user (the kind that cares about information and nothing else) who likely conforms to pg's ideal culture fit of a 'good hacker,' and scare away everyone else.


Regular ass HTML and CSS works just as well as ever. Application frameworks solve problems that are convincing at hyper-giga Google scale (where an infinite number of participants and data models need to be consolidated) that aren't as important when I am trying to display words, pictures, and video to people across the world. For instance, if you're talking about advanced sharding, granular caches per data-field, or scaling microservices with the load on different parts of your application, then maybe you need something more complicated.

I like simple, and I like to be able to see changes I make in real time. React makes a lot of things more simple, so I use it often.


The web is far too big to invent all at once. Evolution is how it's going to go from now on. Good ideas will start out in niches.

But maybe Markdown is the document language you're looking for? And to get started writing docs and code together, something like ObservableHQ is a pretty good way to go.


(2020). Editorialized title ("A Clean Start for the Web").

Previously discussed in: https://news.ycombinator.com/item?id=24255541 (117 points, 21 months ago, 90 comments)


Thanks! Macroexpanded:

A Clean Start for the Web - https://news.ycombinator.com/item?id=24255541 - Aug 2020 (90 comments)

A Clean Start for the Web - https://news.ycombinator.com/item?id=24250252 - Aug 2020 (1 comment)

A Clean Start for the Web - https://news.ycombinator.com/item?id=24247362 - Aug 2020 (3 comments)


“One codebase, KHTML, split into WebKit and Blink”. That’s inaccurate. Blink is a fork of WebKit that was itself a fork of KHTML. And actually KHTML was really a super tiny project before WebKit.


> We hope that all this innovation is for the user, but often it isn’t. Modern websites seem to be as large, slow, and buggy as they’ve ever been. Our computers are barely getting faster and our internet connection speeds are stagnating (don’t even try to mention 5G). Webpage size growth is outpacing it all.

The author makes a good start at fleshing out this argument, but stops short. He goes on to talk about a gizmo that was added to his own work to track use of a feature, which then was abused by others in the company.

The problem is that the motivation and tools used by developers to figure out how to make their products better are the same motivations and tools used by advertisers to sell crap and snoop on users. Exactly the same tools, very different outcomes. One improves the experience while the other degrades it.

It's not clear how to separate the two, allowing only the "good" uses of tracking technologies while preventing the "bad." For its part, the essay doesn't really provide an answer so much as start talking about technologies.

But if you really want to improve the Web, it makes a lot of sense to really drill down into what makes the Web bad today. Is it really technological complexity, leading to rendering engine monoculture? Or is it something more sociological in nature?


I don't know, I want the web to be steered back to basics. The whole SPA thing should not exist; HTML is documents. Instead, native platforms should provide each of the widgets and UIs that SPAs download on each and every page load. There are already a lot of <input> types for things like dates and such. We should have more native support for all kinds of interactions on touchscreens too. After all, they are not that many - after so many years only a few kinds of touchscreen interactions have survived.

Then CSS needs to be trimmed down to basics too. Why is grid layout implemented in CSS instead of a <grid> tag? It doesn't make sense and is very hard to read.

Then there's the browser, which has remained fairly consistent and provides lots of functionality for navigation that is generally OK and well understood by people. SPAs destroy that by building another virtual machine layer on top of it (my pet peeve is ?force_cold_boot=1)

And then, why is WebRTC so complicated? Do we need to support all those codecs and NAT traversal rules forever? Video communication should be as simple as adding a <videocall> tag.

HTML should be readable by common people; they should be able to easily experiment with it. That's how the fun begins again.
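
Neither of those tags exists today; a purely hypothetical sketch of the kind of markup this comment is asking for might look like:

    <!-- hypothetical: layout expressed as markup instead of CSS grid properties -->
    <grid columns="2">
      <cell><img src="photo.jpg" alt="A photo"></cell>
      <cell><p>Caption text shown next to the photo.</p></cell>
    </grid>

    <!-- hypothetical: video calling as a built-in element instead of hand-rolled WebRTC plumbing -->
    <videocall peer="alice@example.com"></videocall>

The point of the sketch is readability: someone could type this by hand and guess what it does, which is roughly where <input type="date"> already is.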


IMHO, the human race has degraded in maturity in the last 20-30 years, and today we do not have a critical mass of technically adept, intellectually mature people who are also skilled, communicative debaters capable of making this change without the entire process derailing, and the result being far, far worse than what we have today.

Just as described in the article: as soon as the opportunity presents itself, the business suits start mucking with the implementations and strategies, and at that point either the technically mature debating communicators step in and end the off-goal meddling or the entire process and effort is lost.

We do not have it in us anymore. We are not the generation that went to the moon. We're the consumer marketing ruined follow-on generation with a diverse mind filled to the brim with web3/crypto/socio-political-religious-end-of-the-world panic.

Case in point: https://www.cnn.com/2018/06/13/health/falling-iq-scores-stud...


> First, you need a minimal, standardized markup language for sending documents around

> Then, you need a browser.

I can't not think of the Alan Kay talk "The computer revolution hasn't happened yet" from 25 years ago. Some quotes:

"HTML on the internet has gone back to the dark ages because it presupposes there should be a browser that understands its format."

"...ever more complex HTML formats, ever more intractable."

"You don't need a web browser".

And in fact, this was correct? The model that we have, now has a spec so enormous, that it's common knowledge that it's impossible to build a new browser from scratch (although maybe somebody can prove common knowledge wrong). And having document nodes that create a DOM tree might be useful for... something... but it has proven to be a gigantic obstacle in delivering content on the web. Almost everything we do today tries to ditch this because it's just so complicated, but all the frameworks that work around this give us magic and deal with it in the background. It's therefore still slow and it will always be slow.

So maybe to fix things we should try something other than what we already had that didn't work.


100% agree. This is the reference, for those who are interested -- watch for a few minutes starting at https://www.youtube.com/watch?v=oKg1hTOQXoY&t=1280s


We only need an application web. A blog post can be just a markdown file. Want to read it? Use a markdown viewer (from the application web). Want to build a more complex experience? Build an application. You can hardcode/bake the content into it (like we do today).

An "URL" could instead be something like an application+datasource pair. The applications are automatically hashed and signed and you can provably verify they haven't changed if you want to. When you request an application to the network, you'll download it from the closest peer (maybe from multiple peers at the same time!). Since it's hashed and signed you only have to trust the signer.

Applications should be written in a single language, no more HTML+CSS+JS. Yes, we can and probably still should have separation between layout, style, and logic, but it should all just be in the same language. In fact I've started exploring what that language could look like: https://flame.run/

Let's take the good parts of the current web and build a new foundation.


> A blog post can be just a markdown file. Want to read it? Use a markdown viewer (from the application web).

I had an idea that is a bit similar, which is an Interpreter header; if the MIME type is not understood by the client software, it can download an interpreter (i.e. a polyfill) (written in HTML or WebAssembly, although your own language might also be a possible alternative); if the client software does understand the format (or the end user already has their own implementation installed, possibly one that they wrote themself), then it will use that one instead.

> You can hardcode/bake the content into it (like we do today).

While you can, it may make sense to be a separate block so that you can also use the data separately.

> An "URL" could instead be something like an application+datasource pair. The applications are automatically hashed and signed and you can provably verify they haven't changed if you want to. When you request an application to the network, you'll download it from the closest peer (maybe from multiple peers at the same time!). Since it's hashed and signed you only have to trust the signer.

That is an interesting idea, and could work (although it would probably make the URL longer). It may also be useful to allow additional arguments, like command-line arguments/switches (so an implementation could be launched with command-line arguments, too).

This would easily allow an end user to substitute their own application implementation too (by changing the "application" part of the URL), which is also an advantage, and a rewrite system could be used to do this automatically. If it has a hash then you could also cache the application file if wanted (and, if you do not have a working internet connection, even operate on local files that the user gives permission to access), in addition to overriding it with your own implementation, too.

> In fact I've started exploring what that language could look like: https://flame.run/

I like this idea and a few days ago had started to make up the design of something called "VM3" (the name might or might not change in future), which has some similar goals and ideas. However, there are many differences too. VM3 is a binary format (designed to be suitable for both interpretation and JIT, and for static analysis; e.g. there is no way to treat a number from a general register as a return address or vice versa, other than using a static lookup table which is specifically designated as containing program addresses), and all I/O (including the equivalent of JavaScript's Date and Math.random) must be done using extensions. Extensions must be statically declared (they cannot be dynamically declared), and an implementation must allow the end user to manage and override them too; polyfills (both included and installed separately) are also possible. VM3 can be used with any protocols (HTTP(S), Gemini, IPFS, etc) and any storage media (CD, DVD, etc). (VM3 is also not limited to a specific kind of user interface; you can have command-line, GUI, pipes, etc.)

Someone else also wrote some ideas about TerseNet. Some ideas of VM3 are similar, and the capabilities of TerseNet could be implemented as a subset of VM3. Multiple implementation types are possible, and the possibility of static text, and of applications that are only launched by the user, also exists in both TerseNet and VM3.

However, unlike Flame, which seems to be your idea of effectively combining HTML+CSS+JS together, TerseNet and VM3 do something a bit different: parts of HTML (the documentation format) are one format, and the other parts of HTML+CSS+JS are the other format; they can easily be separated and work independently.


> For folks who just want to create a web page, who don’t want to enter an industry, there’s a baffling array of techniques, but all the simplest, probably-best ones are stigmatized. It’s easier to stumble into building your resume in React with GraphQL than it is to type some HTML in Notepad.

This can't be overstated. I'm a web-dev noob. I'm doing simple stuff; HTML/CSS/JavaScript is what I've decided to learn after months of confusion about this or that framework, until I found out they are all just JavaScript abstractions.


In a landscape consumed with pushing crypto scams and web3 nonsense it was nice just to see a small blurb from someone explaining basically how I feel.

I like the web for different things but they are in a bit of a state of conflict. Things like Wikipedia are virtually unaffected by webassembly while things like Twitter and Facebook might make good use of it.

The future has yet to be written but like the article I kind of just know we are in a transition period from what we have to ...something.


If there is a document web, it needs to have paid subscriptions baked in. Like a 404 page but for those who aren’t paying subscribers, and maybe even a way to show an article preview.

That’s the only way monetization would work, other than selling personal data or relying on goodwill. And like it or not, profitability is a necessary part of writing for the web, for many people and almost all organizations.


> Like a 404 page but for those who aren’t paying subscribers, ...

We got HTTP 402 and nobody used it because it was never standardized (I guess).
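
The status code itself is real - 402 Payment Required has been reserved in HTTP since early on - but its semantics were never defined, so everything in a hypothetical response like the one below (the headers, the body, the subscribe link) is made up:

    HTTP/1.1 402 Payment Required
    Content-Type: text/html

    <p>This article is for subscribers. <a href="/subscribe">See a preview and subscribe</a>.</p>

A standardized payment flow on top of 402 is exactly the piece that never materialized.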


Aaand an internal leaked restricted bug shows Firefox is testing its frontend to use Blink[1] (developed by Google). I thought abc.xyz, which is owned by Google, was an obnoxious thing to do, which meant the internet began (abc) and ended (xyz) with Google. But now it's clear that it really is just them playing god. Really wondering how long till the FTC breaks up Google. Every fucking engine is now a Chromium derivative or Google-made.

edit: added leak link.

edit 2: Turns out to be fake, confirmed by a Mozilla employee[2]. Stupid 4chan.

[1]https://boards.4channel.org/g/thread/86921913/

[2]https://old.reddit.com/r/firefox/comments/uovcdh/only_a_rumo...


> Aaand internal leaked restricted bug show firefox is testing its frontend to use blink

At this point, Mozilla clearly saying they are stopping Gecko or Gecko-based Firefox development might be the best thing that can happen to FF - that is, if it is enough to get others to band together and finally replace Mozilla with an organization that actually cares about the needs and wants of its users.


Citation?

(I haven’t heard anything about this, and from what I do know of Firefox I can’t see how anything even vaguely like what I think you’re describing would be technically feasible.)



https://old.reddit.com/r/firefox/comments/uovcdh/only_a_rumo...: multiple authenticated sources confirm this to be a fake. Beyond that, some details about the presentation of the page don’t match what you’d expect (covered in the comments), and the terminology in the comment looks quite wrong to me too (it’s a mixture of technically incorrect, suggesting technical unfamiliarity, and overly specific, suggesting technical familiarity).


Surprised to see HyperCard not mentioned here.

https://arstechnica.com/gadgets/2019/05/25-years-of-hypercar...

It was a delightful foundation for the document web!


I also had similar thoughts without the eloquence of this article. The one area I disagree with the author is:

> Rule #1 is don’t make a subset. If the replacement for the web is just whatever features were in Firefox 10 years ago, it’s not going to be a compelling vision.

If you don't want a subset, and you like markdown, then gemini seems like the answer. It is definitely gaining popularity but I'm not convinced it's enough to convert the general audience.

I think supporting everything but javascript for the "document browser" would automatically enable a ton of websites and make it easier for creators and consumers to use the tools and languages they already understand.

https://erock.lists.sh/browser-monopoly


> I think supporting everything but javascript for the "document browser" would automatically enable a ton of websites and make it easier for creators and consumers to use the tools and languages they already understand.

There are some merits for that, but I think that is both excessive and deficient at the same time.

> https://erock.lists.sh/browser-monopoly

I think that these are valid points, which I have some comments relating to.

> Support HTML 5, CSS 3, HTTP, TLS

You could add other file formats and protocols as well, such as Gemini, Gopher, and possibly Markdown.

> Maybe remove website specific styling altogether and instead design a consistent design that optimizes navigation and readability.

That is what I think too; even HTML with no CSS will be OK. Perhaps let the end user specify colours, fonts, etc.

> Allowing the user to query for information and have that data displayed without full page reloads (AJAX)

For data-oriented stuff, you could also have things like <a rel="data">, so you can just get the data and use your own software to display it.

> I want a minimal, modern set of browsing tools where I don't have to make any sacrifices between usability and compliance with the standards.

Yes, I think so, too. Splitting all of the components into separate pieces is one start, I suppose (although it is not good enough by itself); a C program can then be written to tie them together, and this can be changed as needed, including to add/remove components (this is like reversing core and extensions, so that e.g. HTML and HTTP become extensions instead of core). However, for half of the standards I don't really want compliance; they should deliberately be implemented in a better way than what they say.


It's strange, but as a - I suppose - "veteran" (sigh) Web guy, I just don't see the problem of building stuff(+) in HTML, CSS and maybe a sprinkling of JS + PHP or whatever else server-side if you need it.

(+) stuff for me = websites, not web applications.

For Web applications, maybe all this vue / react / whatever stuff is worthwhile, I don't know, I'm not qualified to know (although let's face it, developers possibly might have a slight tendency to maybe over complicate things...just sometimes...)

But I struggle to see what in a normal website experience isn't more than adequately provided by these basic tools.

Oh, and great content. None of any of this matters if you can't write...


Almost all of the companies with power on the web are invested in continuing with it as it is. Given that, the only way to change it so substantially would be to create some kind of grassroots movement for a new version. That's an uphill battle--how are you going to get ordinary people to care? How are you gonna get the average, urgh, "content creator" to ignore network effects and use your platform? What's the "killer app", especially considering it's meant to have fewer features than the original web?


The biggest challenge for decentralization is cost - who pays for the machines that run the code, irrespective of what the code is written in.

The reason we got centralized was simply who pays to power the chips!


Just stop building your web "apps" on the client side. Why should I pay to save on your server costs?

An app is an app, a website is a website, don't try to make them the same thing always.


This kind of initiative could take off only if it is backed by a big player - basically Apple, Google, or Microsoft.

WebAssembly is not a stand-alone application development package. It doesn't have many of the functionalities of Flash, Applets, Silverlight or even NaCl. It has no API of its own to interact with the display or keyboard and mouse, and none of the features you find in a VM such as multithreading or a memory allocator.


> Firefox wasn’t the #2 browser - that’s Safari, mainly because of the captive audience of iPhone and iPad users. But it was the most popular browser that people chose to use.

Maybe I'm wrong here, but I struggle to see how one can arrive at that conclusion. I imagine a large percentage of users actively chose not to use Firefox over Safari because of Safari's integration and their familiarity with it.


Also, wouldn't that still be Chrome? Even though Chrome comes preinstalled in some circumstances, the number of people who choose to use it is certainly far higher than the number who choose to use Firefox. Not to mention that Firefox is virtually in a tie with 5 other no-name browsers.


I definitely choose Safari even though on my computer I can use whatever browser I want. It's a bit presumptuous to assign a reason for Safari's popularity without evidence that people don't choose it. Not choosing a browser is also a choice. Safari does what they want, so they don't need Firefox.


We need to go back even further. Consider one of the things Markdown makes “easier”… making text italic or bold. Now ask, why do we have to resort to special syntax for this? It should be a simple function of the keyboard itself, no more complex than shift (or caps lock). Extend this to superscript, strike-thru, etc. and a good chunk of Markdown just evaporates.


One thing I don't get. We use the Web for apps because desktop and mobile apps aren't portable. But why don't we instead use Java or .NET apps, which can be portable?

It would also be simpler for the app developers, who would program against a simpler model than that of web apps.

And we can use the Web to deliver information i.e. documents.


If you want a web only capable of delivering static documents, you may be interested in Geminispace.

As far as why we use the web for "apps" (which seems to mean any site with any non-trivial interactivity provided by JavaScript, as opposed to any site emulating the function of a native application, of which there are few), we do so simply because the web as a platform already exists, is ubiquitous, and provides a single development and distribution model for text and interactivity, and that is more convenient than forcing text to be in one place and everything else to run in separate application spaces.

Running code on the web was conceived of as a possibility from the beginning; the premise that it was only ever supposed to support static documents but then got 'corrupted' into supporting "applications" is kind of a modern retrofiction.

Also, the premise that only static documents contain information, whereas applications don't, isn't correct. The vast majority of what are considered "web apps" are, functionally, just dynamic multimedia documents.

For everything else - actual web applications - WASM will probably be the solution.


I would say .html, .css and .js and the browser are becoming over-engineered.

HTTP and SMTP are fine however.

I have stopped coding .html and only use HTTP now from a native OpenGL client written in C.


I would agree, although HTTP and SMTP are not the only protocols; in many cases other protocols should do better (such as IRC, SSH, etc).

> I have stopped coding .html and only use HTTP now from a native OpenGL client written in C.

How is that working? Do you have any further details?


SSH is not a frontend protocol, but I use that for the backend.

IRC is replaceable with HTTP.

It's a 3D MMO: http://talk.binarytask.com/task?id=5959519327505901449


This is an idea whose time has come.


The road to hell is paved with good intentions and do-gooders.


this observation from 1997 always rings true. watch a few minutes starting at https://www.youtube.com/watch?v=oKg1hTOQXoY&t=1280s


Thanks for reposting this, dannyow! I'd been trying to find it forever.


I don't know if I really feel like there's anything wrong with the web. Sure it's harder to make slick-looking, needlessly infinitely scalable, socially addictive platforms, but nothing has undermined the ease of writing an HTML page. And it's inarguably easier to find free and excellent resources for learning to develop for the web. The problem I feel is a blocker for active participation of average citizens is the difficulty of starting/maintaining your own server. Domain name registrars and hosting services are idiotically opaque and difficult to use without giving in to ridiculous paywalls. Even Tim BL admits the domain registrar system was hugely flawed from the start in Weaving the Web. I'm just pissed because I was messing with CNAME and A records all day...


> It’s easier to stumble into building your resume in React with GraphQL than it is to type some HTML in Notepad.

It is untrue. You can write plain HTML (even without CSS) and it will work OK.

> Webpage size growth is outpacing it all.

It is true, unfortunately. They waste too much space by adding too many extra pictures, CSS, JavaScripts, videos, advertising, etc. You should not need all of that stuff.

> Not only is it nearly impossible to build a new browser from scratch, once you have one the ongoing cost of keeping up with standards requires a full team of experts.

Yes, this is the real problem. However, some of the standards simply should not be implemented, and some should be implemented differently than what the standards say, to give advanced end users better controls and improve efficiency and many other things.

> We hope that all this innovation is for the user, but often it isn’t.

That is true, it isn't. To make it for the user, design the software for advanced users who are assumed to know what they are doing, and assume that the end user customizes everything. Make documents without CSS; the end user can specify what colours/fonts they want. Make raw data files available; the end user might have programs to view and query them (possibly using SQLite).

> There is the “document web”, like blogs, news, Wikipedia, Twitter, Facebook.

I do not use Facebook, but I can comment about the others. There is NNTP, too. Wikipedia (and MediaWiki in general; not only Wikipedia) uses a lot of JavaScripts and CSS too; you can add your own, but removing the existing ones and replacing them with your own will be difficult. If you want to replace the audio/video player with your own, can you do it? What is needed is to make actual proper HTML, or an EncapsulatedHyperEncyclopedia format, perhaps.

> Basically CSS, which we now think of as a way for designers to add brand identity and tweak pixel-perfect details, was instead mostly a way of making plain documents readable and letting the readers of those documents customize how they looked.

I still often disable CSS, but that sometimes results in big icons and other things still wasting space. Maybe if disabling CSS also brought ARIA and other things into play, it might actually make an improvement.

One idea that I have is this: if your document only uses classless CSS, such that a web browser supporting the semantic HTML commands (and not the presentational ones) would display it suitably, then specify a feature="usercss" attribute on the <link> and/or <style> element that specifies the stylesheets, so that a web browser can understand to use it. This way, it will be up to the end user to set their own styles as they want, and they can effectively use them in place of the author's styles without breaking anything.

> Though it’s going to be a rough ride in the current web which has basically thrown away semantic HTML as an idea.

Semantic HTML can be a good idea, and still is sometimes used, even if it is often ignored (and sometimes not implemented in the client side, too) in favour of bad stuff instead.

> Rule #3 is make it better for everyone. There should be a perk for everyone in the ecosystem: people making pages, people reading them, and people making the technology for them to be readable.

Yes, it is true. To make pages better for readers, do not have any styles specified in the document. Let the end user specify their own colours/fonts/etc, and different implementations can render them as appropriate for the interface in use.

> I think this combination would bring speed back, in a huge way. You could get a page on the screen in a fraction of the time of the web. The memory consumption could be tiny. It would be incredibly accessible, by default. You could make great-looking default stylesheets and share alternative user stylesheets. With dramatically limited scope, you could port it to all kinds of devices.

Yes, these are the greater benefits.

> What could aggregation look like? If web pages were more like documents than applications, we wouldn’t need RSS - websites would have an index that points to documents and a ‘reader’ could aggregate actual webpages by default.

Yes, that will work, although you may want metadata fields to be available. (An implementation might then allow end users to specify SQL to query them; other things are possible too.)
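
A minimal sketch of such an index, using nothing but existing HTML; the file names, dates, and the data-published convention are made up for illustration:

    <!-- a plain index document pointing at the actual pages, standing in for an RSS feed -->
    <ul>
      <li><a href="post-2.html" data-published="2022-05-10">Second post</a></li>
      <li><a href="post-1.html" data-published="2022-04-02">First post</a></li>
    </ul>

A 'reader' could fetch this index, follow the links, and aggregate the pages themselves, with the data-* attributes carrying the metadata fields mentioned above.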

> We could link between the webs by using something like dat’s well-known file, or using the Accept header to create a browser that can accept HTML but prefers lightweight pages.

The documentation for dat's well-known file does not work (it just tries to redirect).

Using the Accept header is possible, but has its own issues. You might need to list one hundred different file formats to indicate all of them, you might want to download an arbitrary file regardless of Accept headers, etc.

> Application web 2.0

There are problems with the containers, web applications, etc., namely that they do not have the powers of UNIX, TRON, etc.

About the separation of the application web vs the document web, I think that they should not be joined too closely together nor separated too far apart.

I have my own design, which is VM3 (the name might or might not change in future), and we can then see what we will come up with. (It could be used for static document views only, executable codes with command-line or GUI or other interfaces, etc. The design is meant to improve portability and security, as well as user controls and capabilities.)


The digital equivalent of "we screwed up Earth, let's go to Mars and screw things up all over again!"


/me patiently waiting until Chrome is shipped with Dart VM again, so Flutter apps can run natively in the browser.



