
> when they can just as well leave the choice to somebody else, at exactly zero cost.

This is where the confusion lies. “Zero cost” here means “it’s a few lines of code and a few CPU cycles to handle it. What’s the big deal?”

But I’m not talking about a few lines of code or CPU cycles, I’m talking about developer time. Leaving the decision to someone else may take little to no time for you, today, but can incur a steep cost in someone else’s time tomorrow. Even worse is when you, with deep domain knowledge, tempt a far less knowledgeable dev into “handling” an error at a time and in a place where it is difficult or impossible to handle properly, wasting their time.




You are still not getting this. It costs exactly zero more developer time to type "throw x" than to type "abort()". It places exactly zero demand on anybody else.

If somebody decides they want to catch the exception and try something else, that is totally their choice, and they can devote as much or as little time to getting their idea working as they like. If they get it working, good. If they give up first, that is fine too.
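
To make the "zero extra cost" claim concrete, here is a minimal C++ sketch (the function names are made up for illustration): the author types one line either way, and only a caller who actually wants to recover spends any time on it.

    #include <cstdlib>
    #include <stdexcept>
    #include <string>

    // The author's choice costs one line either way (names are hypothetical):
    [[noreturn]] void fail_by_abort(const std::string&) {
        std::abort();                   // nobody downstream gets a say
    }

    [[noreturn]] void fail_by_throw(const std::string& msg) {
        throw std::runtime_error(msg);  // downstream may catch it, or not
    }

    // A caller who wants to try something else can, on their own time:
    bool parse_or_fallback(const std::string& input) {
        try {
            if (input.empty()) fail_by_throw("empty input");
            return true;
        } catch (const std::exception&) {
            return false;               // their choice, their effort
        }
    }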


> You are still not getting this. It costs exactly zero more developer time to type "throw x" than to type "abort()". It places exactly zero demand on anybody else.

OTOH, it could be that you're just not getting this: those exceptions that can't ultimately be handled will perhaps print a (more or less informative) error message to the screen and to a log file, but then they'll have to end in a (more or less controlled) shutdown of the app.

One form of shorthand for this that I could see myself using is... "just crash". So perhaps that was what was originally meant: maybe your "throw an exception, for god's sake!" is "just crash".
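
In C++ terms that is roughly what happens anyway: an uncaught exception ends in std::terminate(), and the most you can do at the top level is make the crash tidy. A rough sketch of the "more or less controlled" shutdown:

    #include <exception>
    #include <iostream>
    #include <stdexcept>

    int run() {
        // Somewhere deep in the app, an error nobody upstream can fix:
        throw std::runtime_error("corner case we chose not to polish");
    }

    int main() {
        // Without this try/catch the exception would reach std::terminate()
        // -- the uncontrolled flavor of "just crash".
        try {
            return run();
        } catch (const std::exception& e) {
            std::cerr << "fatal: " << e.what() << "\n";  // screen and/or log file
            return 1;
        }
    }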


> You are still not getting this.

I understand perfectly well what you’ve said, and I don’t disagree with any of it. The point I’m trying to make (badly, apparently) is not that.

My point (and I believe the point of the linked article, and, I mistakenly thought, my original comment) is that decisions by a few can have enormous impacts on the productivity of many, including design decisions like “should we bother polishing the errors from these corner cases at the expense of delaying new feature work, or just let people report a bug if they run into a corner case and get on with new features?” Rui chose to group a whole swath of errors in lld into the “not worth polishing” category so they could move on with more important work.

My apologies that my point somehow came across as “always crash your program, never throw exceptions!” Thanks for your patience and for taking the time to read generously and deeply and to really understand my point.


I apologize for my impatience. The original description really could plausibly be read to suggest he did not even check for bad input, and just assumed input was OK, maybe with assertions that would run in debug builds. It is true that not coding sanity checks at all is less work than arranging to abort or throw, and marginally faster at runtime.

Producing faulty output is a more likely response to unchecked faulty input than crashing. It would have been better for the author to mention that, if true.
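
For instance (a hypothetical, linker-flavored sketch, not Rui's actual lld code): an assert vanishes in release builds compiled with NDEBUG, so unchecked bad input flows straight through into faulty output, whereas the explicit check costs the author a couple of extra lines.

    #include <cassert>
    #include <cstdint>
    #include <stdexcept>

    // Input is supposed to be 4-byte aligned. With only an assert, a release
    // build (-DNDEBUG) happily emits a misaligned -- i.e. faulty -- value.
    std::uint32_t fixup(std::uint32_t address) {
        assert(address % 4 == 0);       // checked in debug builds only
        return address + 0x1000;        // garbage in, garbage out in release
    }

    // The slightly-more-work alternative: a check that always runs.
    std::uint32_t fixup_checked(std::uint32_t address) {
        if (address % 4 != 0)
            throw std::invalid_argument("misaligned address");
        return address + 0x1000;
    }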


Common Lisp's condition system arguably proves your point.



