
I don't think this will make reuse much worse even in general programs, as long as there is a good division between expected errors (file not found) and unexpected ones (invalid operation code). In fact, there are a lot of ignorable errors in Unix which IMHO should raise a fatal signal instead, as this would substantially improve general software quality.

As an example: trying to close() an invalid FD is a non-fatal error which is very often ignored. But it is actually super dangerous, especially in multi-threaded apps: closing the wrong fd will harmlessly fail most of the time, but 1% of the time you'll close a logging socket, a database lock file, or some unrelated IPC connection. That's how you get the unreliable software everyone hates.
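To make the failure mode concrete, here is a minimal sketch in plain C (nothing Hubris-specific; the file paths are just placeholders). A stale copy of an fd, plus the kernel handing out the lowest free descriptor number on the next open(), turns a "harmless" second close() into closing an unrelated descriptor. In a real multi-threaded program the reopen happens on another thread, so it only bites intermittently:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int data_fd = open("/dev/zero", O_RDONLY);
        int stale = data_fd;        /* a copy kept somewhere else in the program */

        close(data_fd);             /* first close: fine */

        /* Elsewhere (in real code: on another thread) the program opens its
         * "log file"; the kernel returns the lowest free number, which is
         * exactly the one just released. */
        int log_fd = open("/dev/null", O_WRONLY);

        close(stale);               /* returns 0, i.e. "succeeds", but it just
                                     * closed log_fd, not the old data fd */

        if (write(log_fd, "hello\n", 6) < 0)
            perror("log write");    /* EBADF: logging is now silently broken */
        return 0;
    }

And since close(stale) reports success, nothing in the error path ever notices; the damage only shows up later, at a descriptor that had nothing to do with the original bug.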




I agree with you in general.

However, in your example it’s the kernel that is deciding the request (message) is bad. In Hubris it is the message receiver.

This is a bit contrived, but imagine you're receiving stringly typed data from an external source and sending it to a parsing task that either throws or messages you back with a list of some type t. Say it returns ints, and you as the client know that anything that isn't parsable as an int should be treated as 0, because you're summing the list. Somewhere else you call the same task, but there you want unparsable strings treated as 1, unless they failed to parse due to overflow (in which case you rethrow), because you're taking the product.
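Sketched in plain C rather than Hubris IPC (parse_int, sum_all and product_all are made-up names, and strtol stands in for the parsing task): the parser only reports what went wrong, and each call site decides what a failure means to it.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { PARSE_OK, PARSE_INVALID, PARSE_OVERFLOW } parse_status;

    /* The "server": reports what went wrong, but doesn't pick a policy. */
    static parse_status parse_int(const char *s, long *out)
    {
        char *end;
        errno = 0;
        long v = strtol(s, &end, 10);
        if (errno == ERANGE)
            return PARSE_OVERFLOW;
        if (end == s || *end != '\0')
            return PARSE_INVALID;
        *out = v;
        return PARSE_OK;
    }

    /* Summing client: anything unparsable just contributes 0. */
    static long sum_all(const char **items, int n)
    {
        long total = 0;
        for (int i = 0; i < n; i++) {
            long v = 0;                  /* default: treat failures as 0 */
            parse_int(items[i], &v);     /* status deliberately ignored */
            total += v;
        }
        return total;
    }

    /* Product client: unparsable means 1, but overflow is fatal here. */
    static long product_all(const char **items, int n)
    {
        long total = 1;
        for (int i = 0; i < n; i++) {
            long v = 1;                  /* default: treat "can't parse" as 1 */
            if (parse_int(items[i], &v) == PARSE_OVERFLOW)
                abort();                 /* the "rethrow on overflow" case */
            total *= v;
        }
        return total;
    }

    int main(void)
    {
        const char *xs[] = { "2", "oops", "3" };
        printf("sum = %ld, product = %ld\n",
               sum_all(xs, 3), product_all(xs, 3));  /* sum = 5, product = 6 */
        return 0;
    }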

In some situations it's natural for the client to know more than the server about how to handle errors. With this nuke-from-orbit model, there's some forced coupling between the client and the server (they have to mutually agree on what causes a REPLY_FAULT).



