
I realise this isn't the usual format for a story, but just to gawk: in this world of zero-downtime deploys, this is remarkable.

As the sun rises in the UK, this means half the business is shut down during business hours.


"And they're right- a boy who enjoys t-shirts and sports is at a tremendous social advantage over one prefers dresses and ballet"

Perhaps in some very conservative societies.

Where I live, I'm confident that a boy who enthusiastically engages in things which actually interest him is at a social advantage over one who halfheartedly pretends to enjoy whatever his parents think is normal.

Ballet, for sure. Maybe not dresses so much, at least not if he is doing the wearing, but having an interest in the clothes of his female peers certainly isn't any kind of social problem.


For small messages it will be slightly slower in terms of CPU time, because the parsing happens in JS rather than the browser's native JSON parser. But for small messages the time taken with modern JS engines should be less than a monitor refresh.

For small messages on fast networks there really isn't much time to save, so for this worst case for optimisation the aim is simply no significant performance impact. The biggest improvements come where the messages are very large, the network is slow, or the server can write out a stream.

The XHR2 spec says: "While the download is progressing, queue a task to fire a progress event named progress about every 50ms or for every byte received, whichever is least frequent."

Because of "whichever is least frequent", for messages that take less than 50ms from first byte to last, Oboe.js will get only one callback from the browser and will notify all listeners from inside the same frame of Javascript execution. This is exactly what we want, since it avoids rendering happening between the callbacks, which would be very bad for overall download time.
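To make that concrete, the raw mechanism underneath looks roughly like this (an untested sketch; '/data.json' is just a placeholder endpoint):

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/data.json', true);

  xhr.onprogress = function() {
     // responseText grows as bytes arrive. Per the spec this fires at
     // most every 50ms, so a response delivered in under 50ms produces
     // exactly one progress event
     console.log('received ' + xhr.responseText.length + ' chars so far');
  };

  xhr.onload = function() {
     console.log('complete:', JSON.parse(xhr.responseText));
  };

  xhr.send();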

I'm in two minds about whether Oboe should be dropped in to replace all JSON AJAX calls, because everything will be accessed over a slow network one day, or whether it should be reserved for very large, streamed REST responses.


Two different user agent strings for one version of one browser? That can change at any moment based on a list the site authors don't control? There's no way that'd ever confuse things!


Which is precisely why good web developers never trust user agents. Because they confuse things and cause bugs like this.
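E.g. the usual canvas check probes the capability itself instead of the UA string (a standard sketch):

  // true only if the browser can actually hand us a 2d context
  function hasCanvas() {
     var el = document.createElement('canvas');
     return !!(el.getContext && el.getContext('2d'));
  }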

It's lazy/incompetent practice, which shouldn't really be excusable for such a supposedly big advocate of web standards. MS are no shining example, but it's the pot calling the kettle black.

Definitely a Google bug, and a long-standing one.


There are several reasons why UA detection is relevant where feature detection fails. The first is that you can serve prebuilt js bundles containing only the logic relevant to the requesting browser: strictly smaller js files for every user, since you aren't serving irrelevant js (rough sketch below).

Some things aren't feature detectable, like actual weird bugs in behavior that need browser-specific workarounds.

In this case it looks like MS has the domain listed for their compatibility fallback mode, which doesn't just change the UA but actually changes the browser's behavior (IE9 in IE8 mode has no canvas support, so it correctly fails feature detection for canvas).
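For the first point, the serving side can be roughly this simple (an untested sketch; 'legacy.js', 'modern.js' and the MSIE regex are made-up placeholders):

  var http = require('http');
  var fs = require('fs');

  http.createServer(function(req, res) {
     var ua = req.headers['user-agent'] || '';

     // legacy.js would carry the polyfills and browser-specific
     // workarounds; modern.js ships without them
     var bundle = /MSIE [678]\./.test(ua) ? 'legacy.js' : 'modern.js';

     res.setHeader('content-type', 'application/javascript');
     fs.createReadStream(bundle).pipe(res);
  }).listen(8080);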


Just merged in support for reading any stream in Node:

https://github.com/jimhigson/oboe.js#reading-from-any-stream...

  var fs = require('fs');
  var oboe = require('oboe');

  oboe( fs.createReadStream( '/home/me/secretPlans.json' ) )
     .node('!.schemes.*', function(scheme){
        console.log('Aha! ' + scheme);
     })
     .node('!.plottings.*', function(deviousPlot){
        console.log('Hmmm! ' + deviousPlot);
     })
     .done(function(){
        console.log("*twiddles mustache*");
     });


Well, you could have a JSON document which is valid at the start and invalid at the end. It'd parse the first part ok and only throw an error when it reached the invalid part.
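With Oboe that surfaces as node callbacks for the valid prefix followed by a fail callback (a sketch; '/truncated.json' is a made-up URL serving broken JSON):

  oboe('/truncated.json')
     .node('!.*', function(item){
        // fires for everything parsed before the bad part
        console.log('parsed ok:', item);
     })
     .fail(function(errorReport){
        // fires once when the parser hits the invalid part
        console.log('gave up:', errorReport);
     });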

Nothing gets parsed more than once. SAX parsers already parse streams; they're just not used very much because they're a pain to program with.
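For a sense of why they're a pain, here's roughly what programming directly against a SAX JSON parser like clarinet looks like (a sketch from memory of its event API): you get flat open/key/value events and tracking where you are in the document is left entirely to you.

  var clarinet = require('clarinet');
  var parser = clarinet.parser();

  // every structural event arrives separately; rebuilding something
  // like "give me each element under .schemes" is your problem
  parser.onopenobject = function(firstKey){ console.log('object, first key: ' + firstKey); };
  parser.onkey        = function(key)     { console.log('key: ' + key); };
  parser.onopenarray  = function()        { console.log('array'); };
  parser.onvalue      = function(value)   { console.log('value: ' + value); };

  parser.write('{"schemes":["world domina');
  parser.write('tion","take over the moon"]}');
  parser.close();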


No, although that's a nice idea and something I'd like to look into.


Quick experiment says: gzip can be written out as a stream ok. Can't comment on Apache, but Node does it fine.

Firefox's XHR fires progress events for gzipped content, but Chrome's doesn't. Looking to see if I can find a way round it.



In my experience, you need to set flush to Z_SYNC_FLUSH for it to work on Chrome without manually flushing.


Do you know how to do that from Node?

Here's the little test service I wrote to stream out some gzipped content:

https://github.com/jimhigson/oboe.js/blob/master/test/stream...


You need to construct the stream with flush: require('zlib').Z_SYNC_FLUSH
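For example (an untested sketch; the payload and port are arbitrary):

  var http = require('http');
  var zlib = require('zlib');

  http.createServer(function(req, res) {
     res.setHeader('content-type', 'application/json');
     res.setHeader('content-encoding', 'gzip');

     // Z_SYNC_FLUSH emits a complete deflate block on every write, so
     // each chunk reaches the client immediately instead of pooling in
     // zlib's buffer until the response ends
     var gzip = zlib.createGzip({ flush: zlib.Z_SYNC_FLUSH });
     gzip.pipe(res);

     gzip.write('{"items":["first item"');
     setTimeout(function() {
        gzip.end(',"second item"]}');
     }, 1000);
  }).listen(8080);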


It should make most calls faster. The exceptions are small JSON files, or networks fast enough that there's no streaming effect (the whole file arrives almost at once).

For most sites there'll be some users for whom it makes things faster (mobile, slow internet) and others for whom it'll be about the same. If the network is unreliable it should help too, because when the connection drops you don't lose what you've already downloaded.

"makers" = just me :-)


It is built on top of a SAX parser.


