
I would prefer:

    #define assert(x) ((void)(x))
Most of these would cause an immediate flood of hints and/or warnings, but an assert that did nothing except evaluate its argument could lurk for days, especially if the developers in question don't spelunk with a debugger very often.
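A minimal sketch of how it lurks (the withdraw() function and its invariant are hypothetical, just to illustrate the idea):

    /* Sabotaged definition, in place of including <assert.h>: */
    #define assert(x) ((void)(x))

    #include <stdio.h>

    /* Hypothetical example: the invariant is violated by the call below. */
    static int withdraw(int balance, int amount) {
        assert(balance >= amount);   /* expands to ((void)(balance >= amount)) */
        return balance - amount;     /* proceeds even though the check failed  */
    }

    int main(void) {
        printf("%d\n", withdraw(5, 10));  /* prints -5 instead of aborting */
        return 0;
    }

The code still compiles cleanly and the argument is still evaluated, so nothing about the build looks different; the only change is that a false condition no longer aborts.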



I don't get it. In (non-wildly-buggy) code, evaluating the argument to assert() is required to (a) succeed and (b) have no side effects, so what would such a definition accomplish?


Assumptions:

* asserts are used in a defensive programming style to state invariants

* the enemy developers will be reasoning about the code assuming that the condition of any assert which didn't fire was true

* when debugging, they'll read the code and prematurely dismiss valid theories as to why the bug exists based on what they read in the assertions


Sure, but I don't think this matches the problem statement -- if the product is within a week or two of shipping, it should be long past the point where developers are tracking down bugs by using asserts to exclude possible code paths.

If the problem was to slow down early development I'd absolutely agree with you.


An easier way to define the same thing:

    #define NDEBUG 1


No it isn't. That will discard the assert expression. What I wrote will still evaluate it.
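The difference only matters when an assert argument has a side effect, which well-behaved code avoids (as noted above), but which is exactly the kind of thing that slips in. A sketch, assuming a hypothetical side-effecting check():

    #include <stdio.h>

    /* Variant A: standard behaviour with NDEBUG defined before <assert.h>  */
    /* is included -- assert(x) expands to ((void)0), x is never evaluated. */

    /* Variant B: the sabotaged macro -- x is still evaluated.              */
    #define assert(x) ((void)(x))

    static int calls = 0;
    static int check(void) { calls++; return 1; }   /* side-effecting condition */

    int main(void) {
        assert(check() == 1);
        /* Variant A prints 0 (check() never ran); Variant B prints 1. */
        printf("%d\n", calls);
        return 0;
    }

So for side-effect-free code the sabotaged macro behaves identically to a disabled assert, but unlike NDEBUG it can't be detected by a change in behaviour from skipped evaluations.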


You're right. Sorry.



