Yeah, I fell victim to that too. Also, having a timer counting down caused me to freeze up. I got about half the questions right immediately, and the other half after the time had ticked down so that my mind could relax and pay attention. I think I legitimately only got 2 wrong, and for one of those I could argue I found a different bug than the one they were looking for.
Completely agree. I only actually got three wrong, but because I kept clicking the wrong spot, the score says I only got 3 right. It's pretty frustrating.
The test is also pretty easy. I only know C and some Java, and I figured out most of the problems. The standout questions for me were the memset and free ones.
Also, many quiz items are far removed from anything C++-specific. It's stuff like duplicated expressions, or a variable being subtracted from itself, or a variable assigned to itself.
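For instance (a made-up snippet, not taken from the quiz; the "intended" variables in the comments are just guesses at what an author might have meant):

int suspicious(int width, int height, int offset, int new_offset, int x0, int x1)
{
    if (width != width)          /* compared against itself; presumably "height" was meant */
        return -1;
    offset = offset;             /* self-assignment; probably meant "offset = new_offset" */
    return (x1 - x1) * offset;   /* always zero; likely meant "(x1 - x0)" */
}

You don't need to know a word of C++ to feel that something is off in each of those lines.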
Startup idea - Generalize this in a gamified form as a vehicle to assess the caliber of programmers for potential employers. (Hiring is expensive - recruiter bonuses are 5 figures in tech, plus the opportunity cost of lost business if you don't have the capacity to handle growth).
But these are real bugs from real software, which in this case happen to be taken from a corpus of C++ code. The game specifically claims it's not about knowledge of C++!
I've gotten used to seeing walls of warnings/messages when compiling things from source.
It's almost a case of boy-who-cried-wolf at this point. You get lots of warnings, but it works!
And they're not always due to the programmer being negligent. I've had many cases where something compiles cleanly on one box, but throws warnings on another because of a slightly different version of a compiler or library.
When I compile something and it runs through the entire process with not a single warning, it's truly a "...whoa" moment.
Wow. When I worked as a developer, one thing I would always insist on (or push for, when I wasn't senior enough to insist on things) was -Wall and -Werror. If your code has a warning, there is a reason for it and you should address it. Sometimes the warning is benign, in which case you address it with a thoughtful comment to the effect of "silencing warning X: it's not a factor here because Y" (there's a small sketch after this list). But at least address them! Reasons to be so anal about warnings:
1. Broken windows theory. A code base full of warnings signals that nobody cares about anything. People will get better about finding and caring about the big things when they are forced to also care about the little things.
2. When your project has a "wall of warnings" there are real bugs hiding in there. I guarantee it and would bet money on it. Here's your opportunity to fix them and you don't even need QA to point them out. Your compiler is literally telling you.
3. When things compile on one box but not another, that's a big red flag that you have something platform-specific or library version-specific that WILL bite you some time later. unsigned-signed and sizeof(int) mismatches are good examples of this.
The whole "it compiles and works therefore it's done" mentality is some serious "second-year programming class" level shit, but it's everywhere. Good projects, where people care and have disciplined attitudes, discourage it.
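To make the "benign warning plus comment" point concrete, here's roughly what I mean (made-up callback; the warning in question is -Wunused-parameter, which -Wextra enables):

#include <stdio.h>

/* The callback signature is dictated by the caller, but this handler has no
   use for the user_data pointer. Discard it explicitly and say why, instead
   of leaving an unused-parameter warning sitting in the build output. */
static void on_tick(int count, void *user_data)
{
    (void)user_data;   /* unused by design; see comment above */
    printf("tick %d\n", count);
}

int main(void)
{
    on_tick(1, NULL);
    return 0;
}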
Adding -Werror to your build system sucks, because future compilers introduce new warnings, which makes it very difficult to rebuild old revisions with newer compilers, for instance when bisecting a bug.
Add it to your local build with "./configure CFLAGS=-Werror" or similar? That's great!
Also, no team below you in the stack can ever mark anything deprecated, and no compiler team can ever test your project with a new compiler, because either one will break your build.
Deprecation annotations are incredibly helpful, many thanks.
This is off the top of my head and likely clang specific:
-Werror // Treat all warnings as errors
-Wno-error=deprecated // Except deprecated warnings
-Wno-error=deprecated-implementation // And these ones too
The deprecated-implementation warning flag should likely be made part of the deprecated warning group, but it's not yet. See the clang source for a helpful TODO? comment.
Oh, that's interesting. What would be the method for handling this? Branch from a known-good commit, fix the warning, rebase all the later commits on top, then bisect?
Consider adding -Wextra. It adds a lot of warnings that often point at real bugs, but it may throw more false positives than -Wall (example below).
If you are willing to put up with more false positives and/or are willing to disable certain warnings, consider -Weverything.
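For example (made-up struct): as far as I know, recent GCC and Clang stay silent on this under plain -Wall, but -Wall -Wextra flags the missing initializer, which is sometimes a real bug and sometimes a deliberate zero-init, i.e. exactly the false-positive trade-off above:

/* cc -Wall -c cfg.c          -> silent (I believe)
   cc -Wall -Wextra -c cfg.c  -> warns about the missing 'depth' initializer */
struct config {
    int width;
    int height;
    int depth;
};

struct config default_cfg = { 640, 480 };   /* 'depth' silently becomes 0 */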
Yeah. Unfortunately this is very true and relevant. Many projects have a huge number of warnings - not all of them indicate real errors - which makes it hard to spot new, non-obvious bugs when they are introduced.
This "game" only tells half the story about static code analysis. Sure it's very impressive that all those bugs have been found and they are indeed hard to spot. However what's actually important would be the percentage of false-positives.
I'd bet a week's salary (seriously) that Coverity (yeah yeah, it's expensive, whatever) will catch > 90% of these with maybe a 10% false-positive rate, and Code Analysis in VS would hover around the 80% mark with maybe 15% false positives (both for C++; for ANSI C, move those up to 95% and 90% respectively).
ccc-analyzer/c++-analyzer for clang/llvm would probably hit the 90-ish mark for ANSI C too.
Do I get extra points for finding an exploit in their game?
Just disable JavaScript and reload the page, which will halt the countdown and display the solution. Then, when you've read what you need to know, re-enable JavaScript, reload the page again, and click the place where the error is. Then click the "Next Question" button, disable JavaScript again, and so on...
Wow, I found that surprisingly difficult. I noticed that when I read code, I optimize my attention for bugs _I_ make, which are often of a different nature than the bugs that this static analyzer finds. Clearly, I have to work on my bug awareness.
Interesting, but some of the questions are somewhat shallow or ambiguous. For example, referencing a variable x0 where y0 should have been used, etc.
Although I do have to say, it's certainly kind of alarming, if not amusing, to see trivial bugs that slipped through the cracks in so many massively popular pieces of software.
Well, there's L1 norms, Lagrange point L1, an L1 visa, etc. In any of those contexts, it's an excellent variable name.
There's also the case of short little utilities. For something like strncmp (or at least my slightly naive version), I'd rather see code like
int strncmp(const char* s1, const char* s2, int n)
{
    for (; n > 0; s1++, s2++, n--) {
        if (*s1 < *s2) {
            return -1;
        } else if (*s1 > *s2) {
            return 1;
        } else if (*s1 == '\0') {
            /* both strings ended (equal so far); stop before
               walking past the terminators */
            return 0;
        }
    }
    return 0;
}
than have code where someone attempted to come up with long "meaningful" names for each variable. There isn't really meaning here beyond "pointer_to_string1" and "pointer_to_string2", and longer names here do nothing but add visual clutter.
I think l1 is considered uniquely onerous because "l" and "1" render similarly in many fonts. That is, the complaint is not about x1, x2, x3, etc., it's about that specific combination.
That's a luxury! Try finding a bug in an obfuscated binary, without access to the source code or even header files, that implements anti-debugger techniques to crash or behave differently upon first detection of a debugger.
What does that have to do with static analysis of C++ source code? Of course that's difficult, but the quiz is also purposefully made more difficult than it would be in real life.
On the other hand the test is significantly easier than it would be in real life, because you know that there's a bug somewhere in the few lines it's showing you.
The timer on the page put me under more stress than I'd like to admit. Maybe this is quite a good simulation of code review reality though, where you try to cover lots of code in a small amount of time.
Many of the bugs the quiz wants me to find would be pointed out by my IDE (but I'm not taking the quiz in my IDE...)
Also, lots of this code is terrible. Simply adhering to a reasonable style could highlight or eliminate bugs. For example, "variable > other_value - 1"; let's get that "- 1" outta there and use >= instead (small sketch below).
Anyway, although I'd suggest these would be helped by a modern IDE and some style guides, I'll concede that since this is open source code, it's probably older than modern IDEs...
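Something like this (made-up names; a minimal sketch of that rewrite):

#include <stddef.h>

int needs_grow(size_t count, size_t capacity)
{
    /* before: "count > capacity - 1" -- the "- 1" obscures the boundary,
       and if capacity is 0 the unsigned subtraction wraps to SIZE_MAX */

    /* after: says what it means */
    return count >= capacity;
}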
The whole point of this is that static analysis can find bugs that people have missed. If your IDE finds most of these, it just means that its static analysis is mostly as good as the Intel one.
"This code is terrible" is not a good argument. Yes, it's buggy code. However, we know most programmers write bugs. This is an argument in favor of static analysis (and yes, your IDE is a form of that).
These are mostly mistypings caused by carelessness which occasionally play oddly with C/C++ syntax, not the kind of logical bugs which plague most software.
I don't understand. These are real bugs from actual software. Not sure if the test is randomized; I got stuff like a != a and multiplications by (y - y). Also C++-specific stuff like improper use of sizeof and references. How are these not the kind of logical bugs that plague most software?
They are from actual software, but that doesn't make them less trivial – assignment instead of equality, unbalanced parentheses which invoke the comma operator, mistyped variable names in repetitive expressions which don't stand out due to the idiotic indentation, not knowing that arrays decay to pointers when passed to functions (example below).
Those things can indeed cause a lot of harm but they are far less frequent in codebases not written by monkeys, compared to issues like undefined behaviour due to misunderstanding the language semantics, integer overflow, not checking whether a call has returned an error, etc.
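The decay-to-pointer one, for example, usually looks something like this (made-up function; the shape matches a couple of the quiz items):

#include <string.h>

/* Bug: inside the function, buf has decayed to char*, so sizeof(buf) is
   sizeof(char*) (4 or 8), not 64 -- only the first few bytes get zeroed.
   Recent compilers even have a dedicated warning for sizeof applied to an
   array parameter. */
void clear_buffer(char buf[64])
{
    memset(buf, 0, sizeof(buf));
}

/* What was presumably intended: pass the size explicitly. */
void clear_buffer_fixed(char *buf, size_t size)
{
    memset(buf, 0, size);
}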
Again, I don't understand the argument. Actual software is full of "trivial" bugs. That they are trivial doesn't seem to stop programmers from writing them! I don't know about you, but in my experience I find plenty of inexperienced programmers (and oddly, even some supposedly experienced ones) who write poorly indented software with repetitive expressions. Sometimes the verbosity of the programming language -- or the lack of a good static checker -- even encourages this.
Saying "don't write code with trivial errors" is almost like saying "don't write bugs". We all know how well that works. We must deal with the fact trivial bugs occur, and that they occur often; anything else is like hiding your head in the sand.
This is an argument in favor of static code analyzers.
My argument does not advocate neglecting these issues in any way. It's simply that such bugs are the low-hanging fruit and are far less frequent than much subtler issues which can have the same adverse effect.
"Test does not support mobile devices. It is very easy to miss with finger. We are working on new version of tests with better mobile devices support, new problems to solve etc. However, it is not implemented yet."
Since you have to select the bug in the code with the mouse, it's unusable on touchscreen mobile devices.
I got 13 points (not counting as incorrect those where I knew the answer, but the system wouldn't accept the token I clicked on) and I don't think it was that hard. My objection is that if you run static analysis on code like this:
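(A made-up sketch of the kind of repetitive, hand-unrolled code I mean:)

void scale_channels(float *dst, const float *src, const float *gain)
{
    dst[0] = src[0] * gain[0];
    dst[1] = src[1] * gain[1];
    dst[2] = src[2] * gain[2];
    dst[3] = src[3] * gain[3];
    dst[4] = src[4] * gain[4];
    dst[5] = src[5] * gain[4];   /* a copy-paste slip like this hides in plain sight */
    dst[6] = src[6] * gain[6];
    dst[7] = src[7] * gain[7];
}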
You are doing it wrong. You first need to figure out how to write code in a more readable fashion and not allow ugly things like the above in the codebase.
This looks like legacy code from before compilers had good loop unrolling. If people have large code bases like these to maintain, they'll probably be happy to find the bugs now, and worry about rewriting it all later.
I wouldn't be surprised if this code is actually auto generated from a macro, higher level script, or something. I do driver work and this kind of thing is very common in that space.
10/15. I guess that's not bad for someone who never used C++ before.
This test gives the user a slightly unfair advantage, because you know where to look. If I was given the whole of the source file for any of these examples, I would probably never have found any bugs.
It is good to be reminded of the horrors of C++. I'm happy to be working with C# and have Resharper enabled. It easily catches all of the bugs in this test.
Cool, I didn't notice they are doing C# as well. It would be very interesting to try it. I wonder what it costs, though - pricing isn't shown on their site.
Still, in reality, knowing that your code has a bug is just one part of the story. Having a huge legacy code base usually means you've got to fix the bugs as you go about your business implementing features and fixing other bugs. You cannot simply spend a year fixing bugs no one actually cares about (because the software was working well enough before).
So in this drive-by scenario an IDE-integrated tool is much more useful than some offline report-generating tool. You see a warning live in your editor, you fix the issue, you move on. Very low effort, and effective in the long run.
e.g. in "if a.length != a.length" the "correct" token is the first "a". Should really be anything in the whole expression.