If you don't care about old browsers you could use the same connection to keep the state updated as you used to download the original state.
I built this for more-or-less standard downloading, only quicker. But, yeah, if you set a server up to feed it you could use it for lots of creative things.
I haven't looked at MXHR but here's roughly what Oboe does:
1. Create an XHR and listen to the XHR2 progress event.
2. Feed the response so far into the Clarinet.js SAX parser and scoop up all of its events.
3. From the SAX events, build up the actual JSON and maintain the path from the root to the current node.
4. Match that path (plus some other stuff) against the registered JSONPath specs.
5. Fire callbacks if they pass.
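Steps 3–5 can be sketched roughly like this. This is a toy, synchronous stand-in, not Oboe's actual code: the event names are made up for illustration, and the "pattern" is just a dot-separated path where Oboe itself accepts real JSONPath.

```javascript
// Maintain the path from the root while SAX-style events arrive, and fire
// callbacks whenever the current path matches a registered pattern.
function makeMatcher() {
  const path = [];
  const listeners = {};                 // pattern -> callback

  return {
    on(pattern, cb) { listeners[pattern] = cb; },
    // SAX-ish events in the order a streaming parser would emit them:
    key(name) { path.push(name); },     // saw `"name":`
    value(v) {                          // saw a leaf value
      const joined = path.join('.');
      if (listeners[joined]) listeners[joined](v);  // steps 4 and 5
      path.pop();                       // this key is consumed
    },
    closeObject() { path.pop(); }       // saw `}` – leave the parent key
  };
}

// Simulate {"person": {"name": "Ada"}} arriving over the wire:
const matcher = makeMatcher();
matcher.on('person.name', (name) => console.log('matched early:', name));
matcher.key('person');
matcher.key('name');
matcher.value('Ada');                   // fires before the document is done
matcher.closeObject();
```

The point is that the callback fires as soon as the leaf arrives, long before the closing brace of the outer object shows up on the wire.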
Interesting. After looking at MXHR more, it appears as though it was adapted from Digg.com[1] (aka DUI.Stream[2]), and then later adapted by Facebook[3].
Yours sounds more elegant in that it can handle JSON naturally, but I wonder if the other might be better suited for binary content (not sure in which context that would make sense, if any).
Either way, I find them all fascinating, and I've starred your project :) Will keep an eye on it.
I suppose you could make a binary equivalent if you needed to. You'd need to make some kind of binary matching language, maybe like Erlang's binary matching.
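For what that might look like in JS – purely hypothetical, nothing like this exists in Oboe – here's an Erlang-flavoured `<<Len:32, Payload:Len/binary, Rest/binary>>` match over a growing byte buffer:

```javascript
// Hypothetical sketch of a binary equivalent: fire a callback for each
// complete length-prefixed frame found in an incoming byte stream.
function frameMatcher(onFrame) {
  let buf = Buffer.alloc(0);
  return function write(chunk) {
    buf = Buffer.concat([buf, chunk]);
    while (buf.length >= 4) {                // enough bytes for the length?
      const len = buf.readUInt32BE(0);
      if (buf.length < 4 + len) break;       // frame still incomplete
      onFrame(buf.subarray(4, 4 + len));     // fire on each whole frame
      buf = buf.subarray(4 + len);           // keep the leftover bytes
    }
  };
}
```

As with the JSON case, callbacks fire per complete frame as chunks trickle in, rather than waiting for the whole response.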
Adding XML/XPATH support would be a natural extension.
Hadn't seen that before. Interesting link, thanks.
Very similar except JSON/JSONPath instead of HTML/CSS. Oboe runs fine in Node but I want to make the code a bit more standards-y, i.e. using Node's EventEmitters instead of the little pubsub I made for the browser.
Random tangent: no need to make your own pubsub by the way. I figure you might be interested in component[1]. Makes writing libraries like oboe even easier because all of the little stuff is packaged up for you already, like emitter[2]. And then tons of the little helpers in oboe could easily be their own components and useable by others. Check out the full list[3].
It would be really nice to have the same streaming interface (with EventEmitters) as Node, but I can just shim that on top of Oboe, I guess. It would be great to have the same pattern as in Node.
Btw what are you using for the Node side, JSONStream?
There's only a node side so far as I needed to write for some component tests. It'd work with anything that writes out valid JSON.
The client side works in Node right now as well as in the browser but it is a bit browser-y. It is on my home office Agile board to make it a bit more node-y.
There's a cool thing you might try where you download the historic messages, then continue to stream live ones over the same HTTP connection. As long as you don't care about old browsers (anything without XHR2 progress events), downloading and streaming on the same connection should work fine.
I'll have to test to be sure you still get streaming. It depends on how the browsers handle XHR2 progress events with regard to gzip'd HTTP. I /think/ it'll be fine but I need to check to be sure. E.g., with gzip on you still get progressive HTML rendering.
Gzip is a streaming format -- it's designed as a compressed format for communication streams. Browsers have no trouble with this. The Apache behavior you describe is probably related to buffering settings, which I think can be configured.
I suppose the example is a little artificial. It isn't really for using some of the JSON response while ignoring the rest (well, you can use it for that but it isn't the main use).
I got the idea for this project working on data vis. Not all of the data was visible and we wanted to display the first bit of data quicker without waiting for all of it to arrive. We could have just sent the visible bit but it was good to have some data ready in the off-screen section for when the user scrolled.
Before that I worked on a service where we were aggregating 6 or 7 services into a single JSON. Some of the services were quicker than others but because the AJAX lib we were using waited for the whole response they all had to go at the speed of the slowest component.
We could have done multiple requests but it was more elegant to serve a whole page's json in one call. Also, we cached the slow services so they were only sometimes slow.
> well, you can use it for that but it isn't the main use
Actually, you could use it for that, as long as the library actually finishes downloading the file on abort. If you need the contents of the file again, you just download it again, and since it will already be cached, that operation should be extremely fast.