Hacker News

It's not just the C spec you've got to watch. I saw a wonderful bug last week where the author hadn't spotted that write(2) returns an ssize_t, not a size_t (or didn't appreciate the difference), so was gleefully checking an unsigned variable for a -1 error result.
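A minimal sketch of that sort of bug (hypothetical names, and a deliberately invalid fd so write(2) fails):

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          const char *msg = "hello\n";

          /* BUG: write(2) returns ssize_t, but we store it in a size_t */
          size_t n = write(-1, msg, strlen(msg));  /* bad fd, so write returns -1 */

          /* n is now SIZE_MAX; an unsigned value is never < 0, so this never fires */
          if (n < 0)
                  printf("error path taken\n");
          else
                  printf("error path missed, n = %zu\n", n);

          return 0;
  }

With -Wextra, gcc flags the `n < 0` comparison as always false, which is exactly the warning people tend to scroll past.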



How did the bug manifest itself? You can store 0xffffffff(ffffffff) in a 32(64)-bit unsigned integer, or a 32(64)-bit signed integer. In the one case you'll get UINT_MAX, in the other -1, but they compare as equal, because the -1 is converted to unsigned for the comparison. If you have -Wextra turned on, gcc will give a warning, though.

Here's some sample C code tested on a 32-bit Linux system:

  #include <stdio.h>
  
  int main(int argc, char *argv[])
  {
          unsigned int val1 = 0xffffffff;
  
          printf("val1 == -1: %d\n", val1 == -1);
  
          return 0;
  }
The result:

  val1 == -1: 1


It works for == -1, but not for < 0, which is a common way to check for error returns from UNIX calls.

Any decent compiler should warn that such a check is always false, but people don't always pay attention to that stuff...
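Extending the snippet above with the `< 0` case makes the difference visible (same setup, one extra comparison):

  #include <stdio.h>

  int main(void)
  {
          unsigned int val1 = 0xffffffff;

          /* -1 is converted to UINT_MAX for the comparison, so this is true */
          printf("val1 == -1: %d\n", val1 == -1);

          /* an unsigned value is never negative, so this is always false */
          printf("val1 < 0: %d\n", val1 < 0);

          return 0;
  }

The first line prints 1 and the second prints 0; with -Wextra, gcc warns that the second comparison is always false.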


C is my absolute favorite language, and as such, I learned a long time ago to pay very close attention to compiler warnings and Valgrind memory errors.


Sadly, a lot of people have never learned this valuable lesson, and happily build their code with hundreds or even thousands of warnings.


Got it in one :-)



