I made a little no-JS shared drawing board a few years back that worked with MJPEG as the background of an image button (‘<input type="image" ...>’).
Clicking the image button submits the form with x and y click coordinates, and the server returns an HTTP 204 telling it to stay where it is, followed by pushing out the updated jpeg to all connected clients.
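The whole trick fits in a few lines. A minimal sketch (my reconstruction in Python, not the project's actual code; the paths and names are made up):

```
# Sketch of the image-button + 204 trick (hypothetical names throughout).
# Clicking the <input type="image"> submits pos.x/pos.y as a GET form;
# answering 204 No Content makes the browser stay on the current page.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

PAGE = b"""<html><body>
<form action="/click" method="get">
  <input type="image" name="pos" src="/board.jpg" alt="board">
</form>
</body></html>"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/click":
            q = parse_qs(url.query)
            # browsers encode the click coordinates as pos.x / pos.y
            print("click at", q.get("pos.x"), q.get("pos.y"))
            self.send_response(204)  # "stay where you are"
            self.end_headers()
        else:
            # serving the actual MJPEG at /board.jpg is omitted here
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE)

HTTPServer(("", 8000), Handler).serve_forever()
```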
It’s pretty fun, and I wanted to keep it online all the time, but it has a problem I haven’t sorted out: sometimes when a client disconnects the server doesn’t notice, and trying to push them a JPEG locks up the whole rendering system.
204 is a wonderful rabbit hole! You can use it to send messages to the server without refreshing the page and without JS. It was a pretty common trick in the early 2000s.
Send an actual error page rather than a 204. It's not perfect, but in cases where the server returning an error is very rare, it's acceptable. Since there is no state anyway, the user hitting back is fine.
Most JS apps in my experience fail to handle server errors with any sort of user feedback at all.
That problem should definitely be solvable. I used MJPEG to create my own streaming server (not mjpegstreamer), and even with HTTPS and reverse proxying there are ways to detect when to stop sending data, though finding all the edge cases isn’t fun.
I need to revisit and update that project sometime, but it was a very neat event-loop C implementation that reads frames from a cheap consumer webcam and, without any re-encoding, feeds them to connected sockets as an MJPEG stream (it does support re-encoding if the camera can’t produce JPEGs). It was a really fun exercise in minimalism and efficiency, as my goal was to stream multiple cameras from a single Raspberry Pi back when they were super slow.
The core of this is of course the response header "Content-Type: multipart/x-mixed-replace" which in theory lets the server push updated chunks of any content type (although apparently only images in Chrome since 2013 [1]).
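The framing itself is simple: one boundary line and one set of part headers per JPEG. A rough sketch of such an endpoint (Python, with a stand-in frame source):

```
# Sketch of an MJPEG push endpoint over multipart/x-mixed-replace.
# next_frame() is a stand-in; a real server would hand out the latest
# frame from the camera or renderer.
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def next_frame():
    with open("frame.jpg", "rb") as f:  # pre-encoded JPEG, no re-encoding
        return f.read()

class Stream(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        try:
            while True:
                jpg = next_frame()
                self.wfile.write(b"--frame\r\n"
                                 b"Content-Type: image/jpeg\r\n"
                                 b"Content-Length: %d\r\n\r\n" % len(jpg))
                self.wfile.write(jpg + b"\r\n")
                time.sleep(0.1)
        except (BrokenPipeError, ConnectionResetError):
            pass  # client went away; noticing this reliably is the hard part

ThreadingHTTPServer(("", 8080), Stream).serve_forever()
```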
While it's a crude but handy way to update some content after the initial load without JS, it was also widely used with JS before WebSockets, to push messages to clients with lower latency than XHR polling.
The big problem with this format (which all HTTP multipart uploads share to this day) is that you have to scan every byte for the delimiter!
"Content-Type: chunked" is much better because it gives you the size of each chunk upfront! But that requires .js and also was buggy in IE until version 7.
omg! I thought it would slowly create and send along an mjpeg video file. Server push animation is a 90's thing. I wrote something up about it in 2009, and posted a still-working demo, because of a thread here. https://pronoiac.org/misc/2009/10/server-push-animation/
So now we know that a #1 post on HN has ~450 concurrent visitors
Edit° (~1 hour later): So I've been checking every few minutes, and it's consistently been at ~450; this is a cool metric to have.
°This link was posted to HN ~4 hours ago. I first checked ~1.5 hours after it'd been posted, and it was #1 on the front page. It's still #1 as of this edit. The counter has consistently been between 445 and 455 the whole time.
I think it started falling over around 450. I'd been getting 500s from Nginx because of the worker limit, so I bumped that, but the resulting Nginx restart killed the count; it's currently at 217 as it recovers, and I probably lost tons of idle tabs.
The application has retained a three-day uptime. But I don't think the 450 number is entirely true, with Nginx putting an artificial cap on it; it would probably be higher if I hadn't hit that limit. Drats, upped the limit, hope it recovers.
I had a top post a few years ago, and I think in total I had around 60k visitors over a couple of days. Have a bunch of screenshots lying around somewhere for a blog post that I never ended up writing.
The dramatic fall was the Nginx config bump and restart. Lost all those sweet idlers, I'm sure. Hopefully it can go higher now with some luck, but we might be past the peak. Really curious how far it can manage.
Re-reading your comment, perhaps I misinterpreted it.
> So now we know that a #1 post on HN has ~450 concurrent visitors
I thought you were trying to get a measure of traffic to HN. In that case, the number of visitors to the site is only an approximation. But if you were talking about how much traffic HN directs to the #1 post, then the number is exactly correct.
He mentioned chunk-streaming text into an iframe, I think as a joke, but I remember actually implementing a decent chat web app exactly like that years ago, in the IE5 days or so.
A company I worked for in 2001 used chunked streaming to provide search results. The results were collected by performing live searches in parallel (server side) against multiple partners and then aggregated into a single stream of chunked content. Basically the HTTP connection was kept open until all searches were complete, which could take a few minutes, and the results were pushed to the browser incrementally as soon as they were available.
This worked surprisingly well. They started with almost pure HTML, with the results split into multiple tables to produce what looked like a single table of results. The only drawback was that the columns needed fixed widths so that they would align properly.
After a while they switched to a JavaScript-based solution with dynamic filters. They used the same backend streaming engine to output chunks of JavaScript code inside <script> tags, which called a global "addResult(...)" method to update the state. If the search was cached, it would first render HTML server side, which was then "hydrated" by the JavaScript code if needed.
Later it was replaced with a standard XHR polling mechanism.
It is interesting to look back at what we had to do that is trivial with today's technology.
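For the curious, the pattern is easy to reconstruct today. Something in this spirit (a WSGI-style sketch; search_backends() is a stand-in, and addResult() is the global from the comment above):

```
# Rough reconstruction of the chunked script-tag pattern: hold the
# response open and flush one <script> block per result as the parallel
# backend searches complete.
import json, time
from wsgiref.simple_server import make_server

def search_backends():
    for i in range(5):          # stand-in for the partner searches
        time.sleep(1)
        yield {"rank": i, "title": "result %d" % i}

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    yield b"<html><body><div id='results'></div>\n"
    for result in search_backends():
        # each chunk reaches the browser as soon as it is written, and
        # the page's global addResult(...) renders it incrementally
        yield b"<script>addResult(%s);</script>\n" % json.dumps(result).encode()
    yield b"</body></html>"

make_server("", 8001, app).serve_forever()
```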
Yeah you basically never close the response, right? Just keep appending! Chat is a good use case for it.
Maybe with CSS these days you could make only the last child of a div visible, and have constant updates with no need for JS.
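Something like this, I'd guess (a Python sketch of that CSS idea; I haven't tested it against real browser buffering):

```
# Never-ending response that appends one <div> per message; the CSS
# hides everything but the last child, so the visible line keeps
# "updating" with zero JS. (Browsers may buffer small writes, so real
# use probably needs padding or bigger messages.)
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

HEAD = b"""<html><head><style>
  #feed div { display: none; }
  #feed div:last-child { display: block; }
</style></head><body><div id="feed">
"""

class Feed(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(HEAD)
        n = 0
        try:
            while True:                 # never close: just keep appending
                n += 1
                self.wfile.write(b"<div>update %d</div>\n" % n)
                self.wfile.flush()
                time.sleep(1)
        except BrokenPipeError:
            pass

ThreadingHTTPServer(("", 8081), Feed).serve_forever()
```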
Anyway. I quite like this idea of no JS. I know he said it would be evil to serve ads using MJPEG, but I'd probably prefer it if my page had less JS on it.
> Anyway. I quite like this idea of no JS. I know he said it would be evil to serve ads using MJPEG, but I'd probably prefer it if my page had less JS on it.
Honestly I don't see how it's more evil; just because it doesn't use JS surely doesn't mean uBlock and the like can't still block it? For this to be evil, all ads, full stop, would have to be evil. As you say, I would much prefer this to ads that use JavaScript. If you can still show me ads on your news article when I haven't got JS enabled, I'm not going to be too bothered, because hopefully everything will load plenty fast. It also means the rest of the site can load while the ads are loading.
At FriendFeed, when we first introduced real-time comments and likes (pretty sure we were the first social network to do this), we did it with long-polling via an iFrame because web sockets did not exist. It actually worked pretty well!
Hey, that's mine! But yeah, it includes an alternate approach to the same problem: use HTTP chunked encoding and continually append HTML to the web page (which never quite finishes loading).
- dynamically generated animated GIFs
- Content-Type: multipart/x-mixed-replace: the way to animate images on the web since 1993, before animated GIFs
I have the feeling that as soon as the web got obsessed with HTTP(S)-only transactions between nodes, people reinvented/rediscovered/hack-reimplemented socket semantics on top of it.
It's hard for the web to become "obsessed with HTTP", because the web is HTTP.
Pedantry aside, as soon as web/HTTP became The Internet (protocol of choice), you are right that the rest was bound to happen.
However, exactly the fact that it wasn't too constrained and allowed a lot of messing around (including with HTML, compared to e.g. gopher, which was more semantic) is what made it "win" over all the other protocols, except maybe email (and even there, >50% of people read it with web clients).
I think GP's talking not just about server-client connections but also server-server connections? In the latter case HTTP(S) wasn't as popular as it is today until the late aughts.
I was speaking about the web as a network (not just the web as in HTTP webpages). There are other protocols such as FTP, SMTP, and direct socket connections over TCP and UDP, but many of those use cases were absorbed by REST-only (HTTP) transactions and now WebSockets.
That's exactly the misconception I was trying to correct, assuming you made it while understanding the difference.
That suite of protocols is a suite of internet protocols, and the "web" (from "world wide web", familiar from the www in website addresses) is a combination of HTML served over HTTP. If it has evolved to mean the internet, my apologies, but 10-20 years ago if you said "web", you meant HTTP:
https://en.m.wikipedia.org/wiki/World_Wide_Web
no... you are just being pedantic. "Web", the metaphor coming from a spider web and commonly understood as the interconnection formed by networks, could be perfectly well understood as inter+net too, and web+page as what HTTP was purposed for. All that text brings nothing to the conversation.
Why not an iframe with a meta refresh tag ('<meta http-equiv="refresh" content="1">' in the framed page)? That seems more standard than rendering text using a video stream. Sure, it's not "live", but an update every second or so would be good enough for this use case.
Seems like an interesting variant of SSE (server-sent events) without the JS code for retries. Both of these approaches are simpler than WebSockets, with the limitation that they're one-way (server push).
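For comparison, the SSE wire format itself is about as simple as it gets: "data:" lines separated by blank lines on a text/event-stream response. A server-side sketch (the retry/reconnect half lives in the browser's JS EventSource):

```
# Minimal SSE emitter: each "data: ...\n\n" record becomes one message
# event in an EventSource client.
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Events(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        n = 0
        try:
            while True:
                n += 1
                self.wfile.write(b"data: tick %d\n\n" % n)
                self.wfile.flush()
                time.sleep(1)
        except BrokenPipeError:
            pass

ThreadingHTTPServer(("", 8082), Events).serve_forever()
```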
This is great to know about. But on qutebrowser (Qt WebEngine) it doesn't update. When I tried it on Google Chrome, it worked as expected. So maybe not as universal as the author believes.
In the light of this post, I'd like to share that I made an almost "no js" live monitoring service a few days ago. Again with the help of Elixir (and Phoenix LiveView). It tracks everything that's posted on Hacker News and Reddit in "real-time", and distributes it to everyone listening to it. And it tracks the number of online users, too. Of course.
Apparently HLS (HTTP Live Streaming) requires JS on every platform aside from Apple's. I was looking at it for streaming video + captions, but since it requires JS to run, it didn't really achieve what I wanted.
Maybe you could build an in-memory video stream for each user and just serve it slowly, but then they would likely need to press play, or you'd rely on autoplay, and I've no idea how well that would play with buffering behaviour.
This solution maps quite closely to the idea that MJPEG is intended for "live" video. And also I find it adorable that it is just an img tag.
Almost every browser accepts, and sends, byte-range headers for video, which is pretty easy to map onto an endless stream: you don't have to return the actual length of the video, and if you don't, browsers will treat it as an endless stream.
Whilst you do have to trigger the play somehow, the range requests come in controllable chunks, so you can respond in kind without trashing the stream.
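If I'm reading this right, the server answers each Range request with a 206 and an unknown total length ("/*" is legal per RFC 7233). A sketch of that shape (video_bytes() is a stand-in, and I haven't verified every browser accepts this):

```
# Sketch of the open-ended byte-range idea: serve each requested window
# with 206 Partial Content and "total length unknown" ("*"), so the
# client keeps asking for the next window of an endless stream.
import re
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

WINDOW = 64 * 1024

def video_bytes(start, length):
    return b"\x00" * length  # stand-in for the live encoder output

class RangeStream(BaseHTTPRequestHandler):
    def do_GET(self):
        m = re.match(r"bytes=(\d+)-", self.headers.get("Range", ""))
        start = int(m.group(1)) if m else 0
        body = video_bytes(start, WINDOW)
        self.send_response(206)
        self.send_header("Content-Type", "video/webm")
        self.send_header("Accept-Ranges", "bytes")
        # "*" = complete length unknown (RFC 7233), i.e. open-ended
        self.send_header("Content-Range",
                         "bytes %d-%d/*" % (start, start + len(body) - 1))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

ThreadingHTTPServer(("", 8083), RangeStream).serve_forever()
```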
You can do this with the WebM container, but not with MP4. The reason is that the MP4 container requires the video frames to be properly segmented and indexed in a metadata part of the file, which can sit at the front or the end but cannot be created for an infinite stream. WebM (i.e. VP8/9) is not supported on Apple's platforms, so you also need an HLS + H.264 fallback. But in general it's doable with WebM and works pretty nicely, with low latency compared to segmented formats like HLS and DASH. It's impossible to cache on a CDN, of course, which is not the case with HLS.
I actually use fragmented MP4 streams as a method to live stream video directly to a browser's video tag. I can point any common browser directly at that stream and it will start playing immediately... I've written a small utility to proxy requests to an RTMP server and repackage the stream for HTTP clients with just a little bit of overhead.
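The repackaging step can be sketched with ffmpeg's fragmented-MP4 movflags (my guess at the approach, not the parent's actual utility; the RTMP URL is a placeholder):

```
# Relay an RTMP feed to HTTP clients as fragmented MP4: empty_moov puts
# the header up front and frag_keyframe emits self-contained fragments,
# so a <video> tag can start decoding mid-stream.
import subprocess
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Fmp4(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "video/mp4")
        self.end_headers()
        proc = subprocess.Popen(
            ["ffmpeg", "-i", "rtmp://localhost/live/stream",  # placeholder
             "-c", "copy",                                    # no re-encode
             "-movflags", "frag_keyframe+empty_moov",
             "-f", "mp4", "pipe:1"],
            stdout=subprocess.PIPE)
        try:
            while True:
                data = proc.stdout.read(64 * 1024)
                if not data:
                    break
                self.wfile.write(data)
        except BrokenPipeError:
            pass
        finally:
            proc.kill()

ThreadingHTTPServer(("", 8084), Fmp4).serve_forever()
```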
Not really. The browser needs to read the header to know where the video frames are in the file.
The table with pointers to all the frames cannot be made before all the frames are encoded and their sizes are known. Then you can shuffle the file and move the table to the start (known as fast-starting), in which case you can start viewing the video before it is completely downloaded.
This can't be used for live content, since in that case the encoded frames do not exist at the time you start viewing.
Does it have to know the information for all frames, or just the key frames? If it’s the latter, then you could encode so that the initial table has X key frames spread out over a few hours, then force a frame to exist at those locations when encoding the live stream.
You cannot use MP4 for live streaming. The file cannot be demuxed properly if the moov atom is incomplete: frames cannot be located, so in the end it is impossible to decode. With live streaming you never have the full file until the end of the event, which is why you cannot stream it. Video streaming from a complete file is possible of course, but that is nothing special nor related to this discussion.
HLS is one of those standards that I find absolutely maddening; it clearly only exists because a number of other things are completely broken. It also makes it much harder to capture or intercept the stream for your own purposes.
Browser/firewall refusing to accept anything other than HTTP? HTTP range requests broken? Connection keepalive broken? I know, we'll segment the video into blocks and send a list of the blocks to download as individual files!
MJPEG is a normal, albeit very simple, video stream. While you could certainly squeeze the counter into fewer bytes by using a more advanced codec with interframe coding, that would be overkill for this PoC.
The point I'm trying to make is more that video streaming is a standard technique with off-the-shelf solutions. The author presents MJPEG as some long-lost way to do this without JS, but you can just as easily throw in a <video> tag without JS and call it a day. Except video tags aren't as exciting.
If you did that, wouldn’t you need to keep generating frames regardless of whether they’d updated or not, versus this solution, which only sends a new JPEG down when the concurrent count changes?
https://github.com/donatj/imgboard