Schneier on Security: Software Problems with a Breath Alcohol Detector (schneier.com)
89 points by hko on May 14, 2009 | 19 comments



The analysis sounds suspect to me.

> 2. Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed. Then the fourth reading is averaged with the new average, and so on. There is no comment or note detailing a reason for this calculation, which would cause the first reading to have more weight than successive readings.

The code is computing an exponentially weighted mean. Read that last sentence of the quote again: the analysis has it backwards. The last sample carries more weight, not the first.

Now, type "uptime" at your unix prompt. Those last three values are computed the same way and have been for decades. (There are three different weighting factors used in them instead of the 1:1 implied in the text here.)

The exponentially weighted mean is useful when you care more about the most recent values and when processor resources are highly constrained. It may be what was intended, or maybe not. Generally you would use a weighting factor so that the earlier samples don't fade into oblivion as fast as these do, but I'm not going to take the word of someone who can't correctly describe the algorithm in his report.
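To make the weighting concrete, here's a minimal sketch (my own, not from the device's source) of the scheme the report describes:

```c
#include <assert.h>

/* Running average as the report describes it: start with the first
 * reading, then fold each new reading in at 1:1.  The newest reading
 * always ends up with weight 1/2, while the first reading's weight
 * decays to 1/2^(n-1) -- the opposite of what the analysis claims. */
static double running_average(const double *r, int n)
{
    double avg = r[0];
    for (int i = 1; i < n; i++)
        avg = (avg + r[i]) / 2.0;
    return avg;
}
```

Feed it an outlier of 8 among zeros: as the first of four readings it fades to 1.0, but as the last reading it still contributes 4.0.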

And the bit about turning off the illegal opcode interrupt... the premise is that some sort of failure would alter the program memory in such a way that one of the opcodes became illegal, yet the program would continue to function but produce erroneous results. I'd say the probability of this is vanishingly small; in fact, given the density of valid opcodes on microprocessors, it's much smaller than the probability of an instruction mutating into a legal opcode that somehow lets the program keep running while producing erroneous results.

I guess I should conclude with my doubts about point 3. Just because the A/D reads 12 bits doesn't mean you have 12 bits of data. If the 8 low bits are noise, there is no information loss in dividing by 256. You have to understand the machine to know whether this is a problem.
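A sketch of that point, assuming (as the report apparently alleges) the code divides the 12-bit conversion by 256:

```c
#include <assert.h>
#include <stdint.h>

/* If the low 8 bits of a 12-bit A/D result are pure noise, shifting
 * right by 8 (i.e. dividing by 256) discards nothing meaningful and
 * leaves the 4 usable high bits: a value from 0 to 15. */
static uint8_t usable_bits(uint16_t adc12)
{
    return (uint8_t)(adc12 >> 8);
}
```

Whether those 8 bits really are noise is exactly the "understand the machine" question.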


> And the bit about turning off the illegal opcode interrupt

I agree with your other analysis, but I have to disagree here. One of the basic tenets of embedded systems design is that ALL interrupts should be handled, even if the handler is just to say "Hey I processed an interrupt that should never fire."

Given that a system in the wild can be subjected to conditions never encountered in an office or a lab (e.g., a police radio transmitting 2 inches away while this thing is measuring someone's Blood Alcohol Content) you simply can't predict how the processor will behave. That's why you make sure that things like the Watchdog and Illegal Instruction interrupts and resets are properly handled.
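A runnable sketch of that tenet, simulated here with a function-pointer vector table (the names and vectors are illustrative, not from the Alcotest source):

```c
#include <assert.h>

/* Every vector, including ones that "should never fire", gets a real
 * handler that records the fault; none are disabled or left dangling.
 * On real hardware the fault handlers would log and force a reset. */
enum { IRQ_TIMER, IRQ_ILLEGAL_OPCODE, IRQ_WATCHDOG, IRQ_COUNT };

static int last_fault = -1;

static void timer_isr(void)    { /* normal periodic work */ }
static void illegal_isr(void)  { last_fault = IRQ_ILLEGAL_OPCODE; /* then reset */ }
static void watchdog_isr(void) { last_fault = IRQ_WATCHDOG;       /* then reset */ }

static void (*const vectors[IRQ_COUNT])(void) = {
    [IRQ_TIMER]          = timer_isr,
    [IRQ_ILLEGAL_OPCODE] = illegal_isr,
    [IRQ_WATCHDOG]       = watchdog_isr,
};
```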

That this device took these shortcuts would immediately cause me to suspect the rest of the design. If I were the auditor (I would love to get into this kind of work, BTW), I'd start digging deeper right away.


While I am for software transparency when it matters, I do have to say I'm much less excited about the software being gone over by lawyers (or programmers in the pay of lawyers).

Give me a page of C code, and I can find ten faults. I can complain about variable naming schemes, I can complain about indentation I don't like. I can complain even if it's actually my preferred indentation, because you won't even know better. I can always complain about architecture because there are always pros and cons, which means I can play up the cons and ignore the pros, along with ignoring the fact that we have to use some architecture and "perfect" was never on the table. I can say this other architecture should have been used instead, and get you involved in a battle of pro and con analysis that is a perfectly good engineering discussion but will sound like dissembling on the witness stand. I can complain about the bug fix for an issue that I can't imagine how it comes up, but came up in testing. I can, basically, complain all day long, even if it is literally the best C code ever written.

Now, I grant that history suggests I'm unlikely to encounter the best C code ever written in one of these contexts, but my point is that since nothing can survive lawyer scrutiny, lawyer scrutiny is actually information-free.

The solution to "Not having any software standards" can't be "Requiring absolute, unattainable perfection", because the only sane response to that is to stop writing software.

The real question is not "Is this software perfect?", but "Did it function correctly?" And I'd be a lot more comfortable if somebody took the source code and the hardware and actually showed a case where it is wrong by physically producing that case, and not just theorizing about how the software might go wrong.

All that said, for all I know this thing's a pile of crap. Certainly if I had to lay money, that's the way I'd bet. My point is more that this writeup doesn't meet my standard for determining that it's a pile of crap, and as much fun as it may be to pile on law enforcement, if the price is letting the "lawyerly-perfection" standard pass without comment, that price is not worth paying!


Bill of Rights, Sixth Amendment:

"In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial...and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him..."


The real question IMHO is neither "Is this software perfect" nor "Did it function correctly?", it is "is the software quality sufficient for the risks associated with failure?"

What are the bad events resulting from failure? I see mainly people being wrongly punished (and suing) and people wrongly not being removed from their cars when actually drunk.

On the other hand, the source reports "incomplete verification of design, and incomplete “white box” and “black box” testing". That sounds like the "well it mostly works I guess" level. To me, that's insufficient.


> The exponentially weighted mean ... may be what was intended, or maybe not

I can't think of any reason this would be intended. The proper method would be to compute the mean and variance, to check whether the sensor is behaving erratically.

While turning off the watchdog isn't a bona fide error, it definitely shows a lack of robustness in checking design assumptions.

> Just because the A/D reads 12 bits doesn't mean you have 12 bits of data

If 8 LSBs are junk, I'd say the analog system was also negligently designed.


There are some nice things about the filter in question. It's very memory efficient: you just need to keep an accumulator with the weighted sum of samples. You get an output with each input sample, so it's a good fit for continuous signals. You can pick the coefficients so that the math is shift and add, something handy with old and slow CPUs.
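A sketch of that shift-and-add variant; the coefficient 1/8 is chosen here purely for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* EWMA with alpha = 1/2^K: acc += (x - acc) / 2^K.
 * One subtract, one shift, one add; no multiply or divide,
 * and only the accumulator needs to be stored. */
#define EWMA_SHIFT 3  /* alpha = 1/8 */

static int32_t ewma_update(int32_t acc, int32_t x)
{
    return acc + ((x - acc) >> EWMA_SHIFT);
}
```

With a 1:1 weight, as the report describes, the shift amount is 1, which is why the most recent sample dominates so heavily.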

I don't know why they throw away so much of the ADC resolution. I suspect that the sensors are the weak link, not the analog conditioning circuitry or the ADC. Still, 4 bits is a pretty crummy sensor.

I see some irony in that if they'd written it in assembly language instead of C, the outside analysis probably wouldn't have found nearly as many of the bugs and other issues to complain about.


A simple digital low pass filter certainly has uses, I just don't think any of them are applicable here. Averaging several values per reading certainly wouldn't tax the resources of even the smallest micros.

I was including the sensor in the 'analog system'. If full range is 0.2% BAC (optimistic), that's a step of over 0.01%! I'd expect more from a sensor that's supposed to perform "beyond a reasonable doubt".

I don't know if ASM would have really hindered analysis. Certainly it's easier to deliberately hide malicious code with ASM, but naive code is generally readable and I doubt the audit firm would have given up. Chances are an ASM programmer would have paid better attention to detail, too ;).


You are not the only one who found this analysis suspect. The Supreme Court of New Jersey did, too, when it decided the issue in March 2008. "Subject to certain conditions, the Court holds that the Alcotest is scientifically reliable and that its results are admissible in drunk driving prosecutions. State v. Chun 194 NJ 54 3-17-08" http://ezinearticles.com/?NJ-Supreme-Court-Holds-New-Alcotes...


If the readings are, in fact, averaged in the way described by the article, the device is useless and should not be allowed as evidence since such a scheme permits a single outlier reading to produce an erroneous result.


Computer Operating Properly interrupt: anyone else think this sounds like a National Semiconductor processor device? ISTR they had an interrupt labelled COP.

When was the last decade National made a processor, anyway?


Motorola called their watchdog module a COP in their 8-bit lines.

National still makes MPUs (http://www.national.com/appinfo/mcu/) and IP (http://www.national.com/analog/compactrisc/architecture)

I've never seen a natsemi micro in the wild, however.


> This is an excellent lesson in the security problems inherent in trusting proprietary software

This sentence made my day, given who is using it and how often it's being used.


This should be applied to SaaS as well: something needs to be done about GPL parasites hiding "in the cloud".

If you're building on top of GPL, users of your cloud software should be able to download your code, examine it and modify/deploy on their own servers.


Wait, this article had nothing to do with the GPL. The GPL is almost entirely unrelated to the notion of code audits and transparency. Lots of code licenses and business models allow for (or even promote) that approach.

Why did you bring the GPL into this?


You are right, I got carried away a bit here.

The article reminded me of reddit storing my password in plain text, which wouldn't go unnoticed if their code was open. [yes, I know they eventually released their code, which was very kind of them]


> something needs to be done about GPL parasites hiding "in the cloud".

Companies and individuals that disregard the specifics of the licenses they are bound by are demonstrating a general lack of care and attention. We already have a solution to this problem (which is ironic given the license in question): the free market. As consumers become aware of the alternatives, they'll move towards companies and products that are more open.


It's not just about consumers, it's about programmers and their incentives as well. Torvalds' famous tit-for-tat doesn't work with SaaS - they get all the "tit" and keep "tat" to themselves.


Take a look at the AGPLv3. It's designed to fight exactly what you're describing: it 'closes' the hole in the cloud.



