GP is probably referring to the coefficient of variation, sigma/mu (standard deviation divided by mean), which normalises out, for example, the unit of measurement.
However, the 7 here is basically (x - mu)/sigma, so it is already normalised in that sense anyway.
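For concreteness, a minimal sketch (with made-up numbers) of the two normalisations side by side:

    # Minimal sketch with made-up data: coefficient of variation vs. z-scores.
    import numpy as np

    x = np.array([9.8, 10.1, 10.4, 9.9, 10.3])  # hypothetical measurements

    mu = x.mean()
    sigma = x.std(ddof=1)

    cv = sigma / mu       # coefficient of variation: dimensionless, unit-independent
    z = (x - mu) / sigma  # z-scores: distances from the mean in units of sigma

    print(f"CV = {cv:.3f}")
    print("z-scores:", np.round(z, 2))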
No, I think the problem (in principle) is that "standard deviation" has a special meaning for Gaussian distributions, which extend to infinity in both directions. A quantity with a fixed range most likely has an asymmetric distribution, so one would expect an asymmetric error bar as well. But when sigma is much smaller than the value, it's often not a big concern.
A good example is efficiency measurements. I can't count how often I have seen students say something like: Our detector is 99%+-3% efficient. Obviously a detector can't be 102% efficient.
> "standard deviation" has a special meaning for Gaussian distributions,
I have a master's degree in statistics and this is the first I'm hearing about it.
> Our detector is 99%+-3% efficient. Obviously a detector can't be 102% efficient.
In the absence of any other context I'd guess that they're using an approximation to a confidence interval that might be perfectly fine if the estimated value were nearer the center of the allowable range.
Well, special in two senses: First, in the canonical formula for Gaussians, sigma appears directly. Second, for the case at hand, the confidence levels associated with 1 sigma, 2 sigma, etc. in physics match exactly the area under a Gaussian curve integrated over +- that many sigma around the mean. That's where that connection actually comes from, and a physicist will always think: within 1 sigma? That's about 68%.
Hearing 99+-3% is a very strong indication that the person used an incorrect way to determine the uncertainty, most likely by taking the square root of the counts. But you are right: if the efficiency were around 50%, that approximation would not be so bad.
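For reference, the sigma-to-coverage correspondence above can be checked by integrating a Gaussian over mu +- n*sigma; a quick sketch using scipy:

    # Coverage of +-n sigma for a Gaussian: area under the density within mu +- n*sigma.
    from scipy.stats import norm

    for n in (1, 2, 3):
        coverage = norm.cdf(n) - norm.cdf(-n)  # P(|X - mu| <= n*sigma)
        print(f"+-{n} sigma: {coverage:.4f}")  # approx. 0.6827, 0.9545, 0.9973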
What's wrong with saying "Our detector is 99%+-3% efficient," if they are giving the output of some procedure that constructs valid confidence intervals? The confidence intervals will trap the true value 95% of the time (or whatever the confidence level is). If the procedure does what it promises to do, I don't see the problem.
Because 99+3=102 is not a valid upper interval bound. You cannot have >100% efficiency for a detector. Also, the interval cannot be centered on the estimate that close to the boundary. So maybe 99 +1/-3 is a valid range (but I would be very suspicious if the bound included 100%).
I agree 102% is not a possible value for the efficiency of the detector. But if the confidence interval traps the true value of the efficiency 95% of the time upon repeated sampling, what's the problem? That's all that's required for a confidence interval to be valid. Some CI constructions do in general give intervals that include impossible parameter values, but if they contain the true value 95% of the time, there's no issue. The coverage guarantee is all that matters.
(One should not confuse a CI with a range of plausible values, in other words.)
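The coverage claim is straightforward to check empirically. A rough simulation sketch (all numbers hypothetical), here for the textbook normal-approximation ("Wald") interval, whose edges can spill outside [0, 1] near the boundary:

    # Rough sketch (hypothetical numbers): does a given interval construction
    # really trap the true efficiency ~95% of the time under repeated sampling?
    import numpy as np

    rng = np.random.default_rng(0)
    true_eff, n_events, n_repeats = 0.99, 1000, 20_000

    passed = rng.binomial(n_events, true_eff, size=n_repeats)
    p_hat = passed / n_events
    half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n_events)  # symmetric half-width

    covered = (p_hat - half <= true_eff) & (true_eff <= p_hat + half)
    print(f"empirical coverage: {covered.mean():.3f}  (nominal: 0.95)")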
OK, true, in that sense it's fine. However, in 100% of the cases I have observed so far (and there were far too many), it means that the person giving such a result used sqrt(counts) as the error estimate, and that's not correct -- not only for the upper bound, but also for the lower bound.
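For comparison, a sketch of what the sqrt(counts) shortcut gives next to an exact binomial (Clopper-Pearson) interval, with counts made up to reproduce the 99% +- 3% example (the +-3% treated as a plain symmetric error bar, the exact interval at 95%, purely for illustration):

    # Hypothetical counts chosen to reproduce "99% +- 3%": 1089 events pass out of 1100.
    # Compare the sqrt(counts) error with an exact binomial (Clopper-Pearson)
    # interval, which is asymmetric and stays within [0, 1].
    from scipy.stats import beta

    n, k = 1100, 1089
    eff = k / n                   # ~0.99
    naive_err = (k ** 0.5) / n    # sqrt(counts) error, ~0.03

    alpha = 0.05  # 95% confidence level
    lower = beta.ppf(alpha / 2, k, n - k + 1)
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k)

    print(f"naive: {eff:.3f} +- {naive_err:.3f}")   # upper edge lands above 1
    print(f"exact: [{lower:.3f}, {upper:.3f}]")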