
I still remember one of my favorite Linux bugs, which was due to this NULL behavior.

It was roughly this:

    read-from-p;
    if (p != NULL) write-to-p;

Since read-from-p invokes undefined behavior when p is NULL, gcc could (and did) legally optimize out the NULL check, so you could end up writing to NULL.
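
Concretely, the pattern was something like this (a minimal compilable sketch with made-up names, not the actual kernel code):

    int f(int *p) {
        int val = *p;       /* read-from-p: undefined behavior if p is NULL */
        if (p != NULL)      /* gcc may conclude p != NULL from the load
                               above and delete this check */
            *p = val + 1;   /* write-to-p: with the check gone, this can
                               end up writing to NULL */
        return val;
    }
GCC's -fno-delete-null-pointer-checks disables this particular optimization (IIRC the Linux kernel builds with it for exactly this reason), but the real fix is to check before dereferencing.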

[edit] I noticed that this class of bug is actually mentioned in the Regehr article that is linked.




I had a similar bug. Using a macro-based tracing framework, I had something like:

    #define TRACE_FOO(foo_ptr)  TRACE_INT((foo_ptr)->x)

    ...

    TRACE_ENTER(TRACE_INT(x) TRACE_FOO(foo_ptr));
    if (!foo_ptr) return NULL;

    /* ... use foo_ptr ... */
The NULL check was optimized out. The dereference of foo_ptr was hidden behind TRACE_FOO, which made it even harder to spot. I spent hours on that one :-)
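
In a self-contained form it was roughly the following (the TRACE_* macros here are simplified, hypothetical stand-ins for the real framework):

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical tracing macros: TRACE_INT evaluates (and logs) its
       argument, so TRACE_FOO dereferences its pointer argument. */
    #define TRACE_INT(v)  printf("trace: %d\n", (v))
    #define TRACE_FOO(p)  TRACE_INT((p)->x)

    struct foo { int x; };

    int *get_x(struct foo *foo_ptr) {
        TRACE_FOO(foo_ptr);          /* hidden dereference of foo_ptr */
        if (!foo_ptr) return NULL;   /* the compiler may delete this
                                        check, as above */
        return &foo_ptr->x;
    }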


If the compiler can optimize that away, what's the proper method for checking p?


Check p before using it. The issue is when you have code like the following, where the behavior of the first statement is only defined when the condition in the second is false:

    int x = *p;
    if (p == NULL) {
      // p can't be NULL without having triggered
      // undefined behavior in the first line, so
      // this code is removed by the compiler.
      return;
    }
    // ...
The fix is to stop dereferencing p before you know whether it's NULL or not:

    if (p == NULL) {
      // Moving the check before the dereference
      // avoids undefined behavior and resolves
      // the issue.
      return;
    }
    int x = *p;
    // ...



