This amuses me. In my experience I haven't seen any porn site's player be faster or better than YouTube's; or maybe I have too little experience to make a judgment on this. I find fast-forwarding on YouTube more efficient than on any other video site I've used. Switching between full screen and the smaller player acts cranky at times, but other than that YouTube videos run just fine. Also, many players don't even allow fast-forwarding.
Hm, but wouldn't that be specific to current events rather than the basics, even if you go for a year's subscription? It would definitely give a lot of insight, though. I don't follow The Economist, so correct me if I'm wrong.
Not at this point. I am a developer and I run a consultancy with one other guy, so I am generally short on hours. But I will see how much I can grasp on my own and then maybe consider something.
When you are using a Ruby- or Python-based web framework, each request is blocking, i.e. a call like urllib.urlopen() waits for the response before the process can do anything else, which drives down the number of requests handled per second. Using EventMachine for Ruby or gevent for Python is one way to overcome this.
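A minimal sketch of the gevent side (Python 3 here, so urllib.request stands in for urllib.urlopen; the URLs and timeout are just placeholders): after monkey-patching, the ten fetches overlap on a single thread instead of running one after another.

    from gevent import monkey
    monkey.patch_all()  # patch sockets so blocking I/O yields to gevent's loop

    import gevent
    from urllib.request import urlopen

    urls = ["http://example.com/"] * 10  # placeholder URLs

    def fetch(url):
        # Looks blocking, but yields to the event loop while waiting on the socket.
        return urlopen(url).read()

    jobs = [gevent.spawn(fetch, u) for u in urls]
    gevent.joinall(jobs, timeout=10)
    print(sum(1 for j in jobs if j.successful()), "requests completed")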
In node.js there is no such thing as a blocking request, and everything is processed in a single thread. But you can always fork multiple processes using its cluster module: http://nodejs.org/docs/latest/api/cluster.html
edit: Oh, I was not trying to advocate anything. node.js is indeed just one choice, in case I didn't make myself clear when I said "My understanding is:"
Ruby and Python (not to mention basically every other language) have a plethora of threaded, forking, and evented models for concurrency. Thin/Rack/Sinatra, for example, behaves capably in a thread-per-request mode. Reactor systems are not the only choice, nor are they even preferable in many cases.
How did you find gevent and totally miss Twisted, which is linked on the front page of nodejs.org?
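For anyone who hasn't seen it, a minimal twisted.web server is only a few lines; a rough sketch (the port and response body are arbitrary), with all handlers running on the reactor's single thread:

    from twisted.web import resource, server
    from twisted.internet import reactor

    class Hello(resource.Resource):
        isLeaf = True  # handle every path with this one resource

        def render_GET(self, request):
            # Runs on the reactor thread, so it must not block.
            return b"hello from twisted.web\n"

    reactor.listenTCP(8080, server.Site(Hello()))  # port is arbitrary
    reactor.run()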
There are a handful of frameworks that assume every request is fully concurrent and non-blocking. For Python, Twisted-based stuff is the most prevalent: Nevow, Athena, and the stuff baked into twisted.web. However, WSGI itself doesn't say anything about the concurrency of individual requests, and it's totally possible for WSGI requests to be multithreaded, multiprocess, or otherwise concurrent.
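For example, here's a rough stdlib-only sketch of serving an ordinary (blocking) WSGI app thread-per-request simply by swapping the server class; the port is arbitrary:

    from socketserver import ThreadingMixIn
    from wsgiref.simple_server import WSGIServer, make_server

    def app(environ, start_response):
        # A plain WSGI app; it has no idea how it's being scheduled.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello\n"]

    class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
        # Thread-per-request variant of the stock single-threaded WSGIServer.
        daemon_threads = True

    make_server("", 8000, app, server_class=ThreadingWSGIServer).serve_forever()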
Yes, that's a valid point. But I have seen people who try to promote their product on support forums face a huge backlash. Although I suppose that if my product really has something valuable to offer, trying won't be worthless.
My suggestion was more about understanding generally whether there is potential demand, which is what I thought your question was about. I wouldn't promote your product there directly.
What you are saying is right, but most of the problems you talked about are people problems rather than anything to do with the "labs" tag. Otherwise, how different is the failure of two consecutive products by the same team under the same "labs" banner from their failure under two entirely different banners?