At university I worked with small unmanned aircraft. We had a crash due to a piece of code with this form:
    int landing_flag;
    ...
    if( landing_flag ) {
        do_landing();
    }
It was C code that pre-dated a boolean type. A single corrupt data packet on the wireless link set landing_flag to 2345923 (some arbitrary large value), and thus the landing routine was triggered mid-flight.
We changed every instance of if( flag ) to if( flag == specific_flag_value ) to ensure that particular bug didn't rear its ugly head again. I still do that now.
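A minimal self-contained sketch of the pattern (the magic value and routines below are invented for illustration, not the real flight code):

    #include <stdio.h>

    #define LANDING_REQUESTED 0x5A5A5A5AU  /* hypothetical magic value */

    static unsigned landing_flag;

    static void do_landing(void) { puts("landing"); }

    static void check_landing(void)
    {
        /* was: if( landing_flag ) -- any nonzero garbage triggered it */
        if( landing_flag == LANDING_REQUESTED ) {
            do_landing();
        }
    }

    int main(void)
    {
        landing_flag = 2345923;            /* the corrupt value from the crash */
        check_landing();                   /* no longer lands */
        landing_flag = LANDING_REQUESTED;  /* a deliberate command */
        check_landing();                   /* prints "landing" */
        return 0;
    }

Of the ~4 billion values a corrupted 32-bit flag could take, only one exact bit pattern can now trigger the landing.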
So, you changed a somewhat reproducible bug into one that has about a four billion times lower chance of occurring?
Good luck to the poor chap who will have to figure out what happened when that bug hits.
Also, you introduced a new error condition: a packet that should set a value of 1 but arrives corrupted as a value of 2 will not initiate the landing routine.
The right thing to do, IMO, is to prevent corrupt data packets from doing such stuff. Checksum the packets or, better yet, checksum and encrypt them. That prevents the enemy from taking over your plane.
Finally, I do not see how 'no proper bool' is relevant here. If the packet contained a single bit indicating the value of the flag, that bit could still get corrupted.
> So, you changed a somewhat reproducible bug into one that has about a four billion times lower chance of occurring?
Yes. You seem to imply that's a bad thing?
> Also, you introduced a new error condition: a corrupt packet that should set a value of 1, but arrives as a value of 2 will not initiate the landing routine.
A corrupt packet should not do anything, so that's good behavior, not an error. We do not want the landing routine to be accidentally triggered in flight; missing a valid packet is much better than triggering on an invalid one. (It's a UDP-based protocol, so the entire system is designed to handle missed packets: the ground station re-sends commands until a positive acknowledgement is received from the aircraft.)
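A hedged sketch of that ground-station loop; the transport calls, names, and timeout values are stand-ins, since the real link code isn't shown:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Stand-ins for the real UDP transport (hypothetical, not shown here). */
    bool send_udp(const uint8_t *buf, size_t len);
    bool recv_ack_timeout(uint8_t *cmd_id, int timeout_ms);

    /* Re-send a command until the aircraft positively acknowledges it.
     * A dropped command and a dropped ack cost the same: one more round trip. */
    bool send_command_reliably(uint8_t cmd_id, const uint8_t *buf, size_t len,
                               int max_retries)
    {
        for (int i = 0; i < max_retries; i++) {
            send_udp(buf, len);  /* may vanish: it's UDP */
            uint8_t acked;
            if (recv_ack_timeout(&acked, 200) && acked == cmd_id)
                return true;     /* positive acknowledgement received */
        }
        return false;            /* give up and surface the failure */
    }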
> The right thing to do, IMO, is to prevent corrupt data packets from doing such stuff. Checksum the packets or, better yet, checksum and encrypt them. That prevents the enemy from taking over your plane.
Exactly right. We were already using a checksum in the datalink, and the corrupted packet that caused the crash passed the checksum as valid! During the post-crash analysis, I discovered that it was using an 8-bit XOR checksum implemented years earlier. 8-bit XOR is fine for detecting single-bit errors, but poor at detecting burst errors -- it failed to detect ~12% of highly corrupted packets. After that incident I updated the system to use a significantly more robust checksum.
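To make that failure mode concrete, here's a small self-contained demo: an 8-bit XOR checksum is blind to a burst error whose flipped bits cancel out per bit position, while a CRC catches it. (CRC-16-CCITT is used purely for illustration; the post doesn't say which checksum actually replaced the XOR.)

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* The weak scheme: XOR of all bytes. Two corrupted bytes whose
     * flipped bits line up in the same positions cancel out exactly. */
    static uint8_t xor8(const uint8_t *buf, size_t len)
    {
        uint8_t sum = 0;
        while (len--)
            sum ^= *buf++;
        return sum;
    }

    /* CRC-16-CCITT, bitwise form: detects all bursts up to 16 bits long. */
    static uint16_t crc16_ccitt(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0xFFFF;
        while (len--) {
            crc ^= (uint16_t)(*buf++) << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    int main(void)
    {
        uint8_t good[] = { 0x01, 0x02, 0x03, 0x04 };
        uint8_t bad[]  = { 0xFF, 0xFC, 0x03, 0x04 };  /* burst hit bytes 0-1 */

        /* xor8: both packets sum to 0x04, so the corruption is invisible. */
        printf("xor8:  good=%02x bad=%02x\n",
               xor8(good, sizeof good), xor8(bad, sizeof bad));

        /* crc16: the two packets produce different values, so it's caught. */
        printf("crc16: good=%04x bad=%04x\n",
               crc16_ccitt(good, sizeof good), crc16_ccitt(bad, sizeof bad));
        return 0;
    }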
It was a bad thing, the way you described it. Now that I know you also fixed the root cause of the problem, I can see it as an additional line of defense.
I think I wouldn't add it, though. Time is better spent on tooling that checks that the variable never gets an incorrect value.
Like you said, an additional line of defense. For a web application there's no need. For a flight-critical application where a failure means you just lost a few hundred thousand dollars' worth of hardware, I'll take every measure possible.
Eh, I can't say that I have any love for the ternary if operator. It is an ugly construct made necessary by the other flow-control constructs being statements. Scala lets you just use if/else like that; the result is much clearer at the cost of four or so characters.
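For what it's worth, here's the contrast in C (the names and values are invented for illustration):

    #include <stdbool.h>

    int pick_delay(bool link_ok)
    {
        /* if/else is a statement in C, so it cannot produce a value;
         * the ternary is C's only conditional expression. In Scala,
         * if/else is itself an expression, so no extra operator is needed. */
        const int retry_delay_ms = link_ok ? 100 : 1000;
        return retry_delay_ms;
    }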
If I were writing something, and I really wanted to make it clear that I was testing a boolean to be true, I might write that. Then I can be absolutely certain the person reading it in 5 years won't misread it.
Actually, the more I think about it, the more I'd be inclined to do that in the false case (i.e. if(aBoolean == false) rather than if(!aBoolean)), because I worry that it's too easy to skip over the '!' - and I personally read that as "not aBoolean" rather than "aBoolean is false"...
I'll bet it compiles to the same thing anyway, so it's just about readability at this point.
People have different preferences when it comes to coding styles. Some are subjective (the balance between clarity and simplicity), and some are idiotic, such as
!Boolean.FALSE.equals(aBoolean), or if (((((a == true) == true) == true) == true) == true).
The point is that using (a == true) instead of a is idiotic. Now, there are good semantic reasons to use (a == true) in some languages, but we're not talking about that; we're talking purely about style. It's no different from writing (a && true) or (i + 0) or (f * 1.0) when a, i and f would suffice. Not only is it longer, it forces you to stop and think: was there a good reason why this was being done? Can a hold truthy values that the programmer is trying to exclude?

Otherwise, it's a sign of a programmer who doesn't know what he's doing - I've spent a decent amount of time teaching and grading, and you see this a lot in first-year school assignments, rarely in high-quality production code. In almost all cases, it's a holdover from someone going in their head "if a is true" (because "if a" doesn't read well in human languages) and writing that out, not some kind of conscious effort to make the code more readable.
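One of those 'good semantic reasons' in C, as a quick self-contained example: with a plain int, (a) and (a == true) can disagree, which is exactly the kind of thing a reviewer has to stop and rule out:

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        int a = 2;                          /* truthy, but not equal to 1 */
        if (a)
            puts("if (a): taken");          /* runs: any nonzero is true */
        if (a == true)
            puts("if (a == true): taken");  /* never runs: true is 1 */
        return 0;
    }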
> In almost all cases, it's a holdover from someone going in their head "if a is true" (because "if a" doesn't read well in human languages) and writing that out,
It's interesting you admit that people are doing it "the long way" because that's how it naturally flows out of their head.
Doesn't it make sense that it would naturally flow into their head the same way?
What's the goal here - write very tight, concise code that fits some arbitrary standard of "correct"?
Or to write code that flows out of and into people's heads easily?
I was describing first-year students who are reading code as though it's written in a foreign language, not anyone with some degree of competence. For obvious reasons, professionals should not be internally verbalizing as they write or read code, any more than a fluent French speaker should be internally translating from English as they speak French.
Potentially, I can construct an independent statement to adequately convey my innervation to your statement of factual basis, that while being grammatically correct, purposefully adds needless complexity to the original statement of intent I am adequately attempting to convey.
If it's somehow clearer to say if (x == true) instead of if (x), why isn't it also clearer to say if ((x == true) == true) instead of merely if (x == true)? And so on?
There's a grey zone, and then there's being obtuse. We're not talking about fancy tricks here - the clearest way to retrieve the truth value of a boolean variable is to refer to it directly, not to compare it for equality against a constant.
Read my original comment. It would very much depend on context, and, as I said, I'd be more inclined to do it in the false case.
It's funny, you still seem to think there is a "right and wrong" here, and you can't see that coding style is just like writing a poem - each individual will do things a little differently.
What context does it depend on? Where is it reasonable to say (x == true) instead of x?
Edit: your original post said "If I were writing something, and I really wanted to make it clear that I was testing a boolean to be true, I might write that." What does this even mean? Simply writing x isn't really, really clear, so you write (x == true) to make it extra clear? if (x) is testing a boolean to be true, and it's crystal clear. What else can it mean?
I personally think it depends on the complexity of the current function and the complexity of the variable x. For example, if x has been read and written a bunch of times in the function already, it might help to be very explicit that you mean "if x is false".
The clearest way I can think of to write that, and the one hopefully least prone to misinterpretation, is if(x == false) rather than if(!x).
Like I said, these are just my personal opinions and an expression of how I code (and likely of things that I find difficult or often misread when reading other people's code).