Great suggestion! I've started implementing that, and now I only have 30 running processes and 60 pipes for my chat bot. That's much more reasonable, but I still have a small issue: each plug-in FIFO blocks after each request! The plug-ins that perform web requests block everybody else from making a request to the same plug-in while the previous request is being processed... And some of those requests can take several seconds, or even time out!
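
In case it helps anyone picture it, here's roughly the shape of one plug-in's FIFO pair. The names, the weather plug-in, and the URL are all made up:

    # One request FIFO and one response FIFO per plug-in (hypothetical names).
    mkfifo weather.req weather.resp

    # Plug-in daemon: one request at a time, so a slow web fetch keeps every
    # later request to this plug-in waiting on the FIFO.
    while true; do
        read -r query < weather.req
        curl -s "https://example.com/weather?q=$query" > weather.resp
    done &

    # Bot side: both the write and the read block until the plug-in gets
    # around to this particular request.
    echo "london" > weather.req
    read -r reply < weather.resp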

This is turning out to be much more complicated than I had originally expected. Since the UNIX way is all about "using the right tool for each job", I kind of wish there were a tool for building scalable network programs... Oh well, until that comes along, I'm going to stick to Bash.




Can't you read the output (from the plugin) in the background?
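
Something like this, maybe? Reusing the made-up FIFO names from the parent, and pushing the whole round trip into a background job so the main loop isn't the thing that blocks:

    handle_reply() {
        # Whatever the bot does with a finished reply.
        printf 'reply: %s\n' "$1"
    }

    ask_weather() {
        local query=$1
        # The write and the read still block, but only inside this background
        # job, so the main loop can keep dispatching other messages.
        {
            echo "$query" > weather.req
            read -r reply < weather.resp
            handle_reply "$reply"
        } &
    }

    ask_weather "london"
    ask_weather "paris"   # doesn't wait for the first answer to come back

(Though I suppose the catch shows up as soon as two requests are in flight at once: nothing ties a reply back to the question that produced it, and the plug-in still serves them one at a time.)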


In bash? Good luck. Your main problem is that with two FIFOs per plugin daemon in your design, any long-running request in a plugin will hold up every other request to that plugin. Unless you go ahead and implement an async I/O stack or threaded architecture inside each plugin. (Starting to sound more like Node.js...)
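
For what it's worth, here's a sketch of what "async inside each plugin" starts to look like if you stay in bash. Made-up FIFO names again, plus a one-line "id query" request format so replies can be matched back up:

    # Plug-in daemon: fork a worker per request so one slow fetch doesn't hold
    # up the rest, and tag each reply with the request id.
    while true; do
        while read -r id query; do
            {
                reply=$(curl -s "https://example.com/weather?q=$query")
                # Workers race each other onto the reply FIFO; lines longer
                # than PIPE_BUF can interleave, and a multi-line reply breaks
                # the one-line protocol entirely.
                printf '%s %s\n' "$id" "$reply" > weather.resp
            } &
        done < weather.req
    done

And the bot side now needs its own loop to read tagged replies, match ids to whoever asked, and time out the ones that never come back. Which is more or less an event loop written by hand.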

He's using chipper sarcasm to illustrate the point that rewriting things "the Unix way" may produce a great troll fork on Github, but more up-front engineering is absolutely justified if you anticipate the need to scale.

More to the point, Unix pipelines in shell scripts are nifty, but they are absolutely not designed to scale the way a typical networked server process does.

I happen to agree with him; if the only point of hubot was to be a campfire bot with a plugin architecture, you could probably accomplish that in 5 lines of Ruby that would never see the light of day in a production environment.



