Hacker News

tolerance should actually go down since the errors help cancel each other out.

reference: https://people.umass.edu/phys286/Propagating_uncertainty.pdf

disclaimer: it will be a relatively small effect for just two resistors

aleph's comment is also correct. the bounds they quote are a "worst-case" bound that is good enough for real-world applications. typically, you won't be connecting a large enough number of resistors in series for this technicality to be worth the additional work it causes.




Note that tolerance and uncertainty are different. Tolerance is a contract provided by the seller that a given resistor is within a specific range. Uncertainty is due to your imprecise measuring device (as they all are in practice).

You could take a 33k Ohm resistor with 5% tolerance, and measure it at 33,100 +/- 200 Ohm. At that point, the tolerance provides no further value to you.


It’s not nearly that simple:) Component values change with environmental factors like temperature and humidity. Resistors that have a 1% rating don’t change as much over a range of temperatures as 5% or 10% components do. This is typically accomplished by making the 1% resistors using different materials and construction techniques than the looser-tolerance parts. Just taking a single measurement is not enough.


If values are normally distributed, random errors accumulate with the square root of the number of components. Four components in series have 2x the absolute uncertainty overall, but if you divide that doubled uncertainty by four times the resistance, it's half the percentage uncertainty as before. (I avoid using the word "tolerance" because someone will argue whether it really works this way)

In reality, some manufacturers may measure some components, and the ones within 1% get labeled as 1%, then it may be that when you're buying 5% components that all of them are at least 1% off, and the math goes out the window since it isn't a normal distribution.
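The sqrt(N) claim in the first paragraph is easy to check numerically. A quick Monte Carlo sketch (my own, with illustrative numbers: hypothetical 10k parts, sigma chosen so essentially all samples land inside a 5% band):

```python
import random
import statistics

NOMINAL = 10_000             # hypothetical 10k ohm part
SIGMA = NOMINAL * 0.05 / 3   # ~all samples inside the 5% band
TRIALS = 20_000

def rel_spread(n):
    """Relative std-dev of n resistors in series, normally distributed values."""
    sums = [sum(random.gauss(NOMINAL, SIGMA) for _ in range(n))
            for _ in range(TRIALS)]
    return statistics.pstdev(sums) / (n * NOMINAL)

for n in (1, 4, 16):
    # spread shrinks roughly as 1/sqrt(n): ~1.67%, ~0.83%, ~0.42%
    print(f"{n:2d} in series: {rel_spread(n):.3%} relative spread")
```

Note this only works for zero-mean, independent errors; as the second paragraph says, a binned or systematically skewed population breaks the assumption.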


I wonder about the effect of different wiring patterns. For example you can combine N^2 resistors into N parallel strips of N resistors in series.

I expect that in this case the uncertainty would decrease
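That expectation checks out in a quick simulation (mine, with made-up numbers: nominal 100 ohm, sigma 5 ohm). The series sums and the parallel combination each contribute roughly a 1/sqrt(N) factor, so the N x N grid's relative spread falls off roughly as 1/N:

```python
import random
import statistics

NOMINAL, SIGMA, TRIALS = 100.0, 5.0, 5_000

def grid_resistance(n):
    """n parallel strips of n resistors in series (n*n parts total)."""
    strips = [sum(random.gauss(NOMINAL, SIGMA) for _ in range(n))
              for _ in range(n)]
    return 1.0 / sum(1.0 / s for s in strips)

for n in (1, 2, 4, 8):
    values = [grid_resistance(n) for _ in range(TRIALS)]
    rel = statistics.pstdev(values) / statistics.mean(values)
    print(f"{n}x{n} grid: {rel:.3%}")   # roughly 5%/n
```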


In the article's example, I'd prefer 2 resistors in parallel. That way the result is less dramatic if 1 resistor were to be knocked off the board / fail.

E.g. 1 resistor slightly above the desired value, and a much higher value in parallel to fine-tune the combination. Or ~210% and ~190% of the desired value in parallel.

That said: it's been a long time since I used a 10% tolerance resistor. Or where a 1% tolerance part didn't suffice. And 1% tolerance SMT resistors cost almost nothing these days.
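A quick sanity check of that fine-tuning arithmetic (illustrative numbers, assuming a hypothetical 10k target):

```python
def parallel(*resistors):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

target = 10_000.0

# ~210% and ~190% of the desired value in parallel:
combo = parallel(2.1 * target, 1.9 * target)
print(combo)   # ~9975, i.e. within 0.25% of the target
# If either resistor fails open, you're left with ~2x the target
# instead of an open circuit.
```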


This might be why pretty much all LED lightbulbs/fixtures have two resistors in parallel. They're used for the driver chip's control pin, which sets the current to deliver via some specific resistance value.

It's always a small and a large resistor. The higher this control resistance, the lower the driving current.

Cut off the high-value resistor to increase the resistance a bit. In my experience this often almost halves the driving current, while costing up to 30% of the light output (yes, I measured).

Not only are most modern lights too bright to start with anyway, this also fixes the intentional overdriving of the LEDs for planned obsolescence. The light will last pretty much forever now.


Iterating either of

f(x) = 3/(1/x + 1/110 + 1/90)

g(x) = 1/(1/(3x) + 1/(3·110) + 1/(3·90))

Seems to show that 100 is a stable attractor.

So I will postulate without much evidence that if you link N^2 resistors with average resistance h in a way that would theoretically give you a resistor with resistance h, you get an error that is O(1/N)
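For what it's worth, the attractor can be solved in closed form rather than just observed numerically. A fixed point x of f satisfies 1/x + 1/110 + 1/90 = 3/x, i.e. 1/110 + 1/90 = 2/x, so x is the harmonic mean of 110 and 90: exactly 99.0, close to (though not exactly) 100. A quick check:

```python
def f(x):
    """The iterated map from above: 3 / (1/x + 1/110 + 1/90)."""
    return 3.0 / (1.0 / x + 1.0 / 110.0 + 1.0 / 90.0)

x = 100.0
for _ in range(100):
    x = f(x)
print(x)   # converges to the harmonic mean of 110 and 90, i.e. 99.0
```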


> tolerance should actually go down since the errors help cancel each other out.

Complete nonsense. The tolerance doesn't go down, it's now +/- 2x, because component tolerance is the allowed variability, by definition, worst case, not some distribution you have to rely on luck for.

Why do they use allowed variability? Because determinism is the whole point of engineering, and no EE will rely on luck for their design to work or not. They'll understand that, during a production run, they will see combinations of the worst-case values, and they will make sure their design can tolerate them, regardless.

Statistically you're correct, but statistics don't come into play for individual devices, which need to work, or they cost more to debug than produce.


The total tolerance is not +/- 2x, because the denominator of the calculation also increases. You can add as many 5% resistors in series as you want and the worst case tolerance will remain 5%. (Though the likely result will improve due to errors canceling.)

For example, say you're adding two 10k resistors in series to get 20k, and both are in fact 5% over, so 10,500 each. The sum is then 21000, which is 5% over 20k.
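The same arithmetic, spelled out (same numbers as the example above):

```python
nominal = 10_000
worst_each = nominal + nominal * 5 // 100   # one resistor at +5% -> 10_500
worst = 2 * worst_each                      # both at +5%, in series

print(worst)                                       # 21000
print(100 * (worst - 2 * nominal) / (2 * nominal)) # 5.0 -> still 5% over
```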


> Statistically you're correct,

The Central Limit Theorem (which says if we add a bunch of random numbers together they'll converge on a bell curve) only guarantees that you'll get a normal distribution. It doesn't say where the mean of the distribution will be.

Correct me if I'm wrong, but if your resistor factory has a constant skew making all the resistances higher than their nominal value, a bunch of 6.8K + 6.8K resistors will not on average approximate a 13.6K resistor. It will start converging on something much higher than that.

Tolerances don't guarantee any properties of the statistical distribution of parts. As others have said, oftentimes it can even be a bimodal distribution because of product binning; one production line can be made to make different tolerances of resistors. An exactly 6.8K resistor gets sold as 1% tolerance while a 7K gets sold as 5%.
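A toy simulation of that binning scenario (my numbers are invented: suppose binning removed everything within 1% and the line happened to skew high, so every "5%" part sits between +1% and +5%):

```python
import random
import statistics

NOMINAL, TRIALS = 6_800, 50_000

def binned_high_part():
    """A '5%' part after the within-1% units were binned off, line skewed high."""
    return NOMINAL * (1 + random.uniform(0.01, 0.05))

sums = [binned_high_part() + binned_high_part() for _ in range(TRIALS)]
print(statistics.mean(sums))   # ~14,008, i.e. ~3% above the nominal 13,600
```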


> Tolerances don't guarantee any properties of the statistical distribution of parts.

That's incorrect. They, by definition, guarantee the maximum deviation from nominal. That is a property of the distribution. Zero "good" parts will be outside of the tolerance.

> It will start converging on something much higher than that.

Yes, and that's why tolerance is used, and manufacturer distributions are ignored. Nobody designs circuits around a distribution, which requires luck. You guarantee functionality by a tolerance, worst case, not a part distribution.


> The Central Limit Theorem (which says if we add a bunch of random numbers together they'll converge on a bell curve) only guarantees that you'll get a normal distribution. It doesn't say where the mean of the distribution will be.

That's kind of overstating and understating the issue at the same time. If you have a skewed distribution you might not be able to use the central limit theorem at all.


>If you have a skewed distribution you might not be able to use the central limit theorem at all.

The CLT only requires finite variance. Skew can be infinite and you still get convergence to normality ... eventually. Finite skew gives you 1/sqrt(N) convergence.
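For a concrete illustration of that rate (a sketch, not from the comment): a sum of N exponential draws has theoretical skewness 2/sqrt(N), so the sum's skew decays toward the symmetric bell curve as N grows:

```python
import random

def sample_skewness(xs):
    """Biased (population) sample skewness: m3 / m2^(3/2)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

for n in (1, 4, 16, 64):
    sums = [sum(random.expovariate(1.0) for _ in range(n))
            for _ in range(20_000)]
    print(n, round(sample_skewness(sums), 2))   # roughly 2/sqrt(n)
```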


If you're going to say "Complete nonsense." you shouldn't get the calculation wrong in your next sentence.


Very true, I was writing in absolute values, not % (magnitude is where my day job is). My point still stands: it is complete nonsense that tolerance goes down.


They said it "should" go down, but that another comment saying the worst case is the same is "also correct".

I do not see any "complete nonsense" here. I suppose they should have used a different word from "tolerance" for the expected value, but that's pretty nitpicky!


I'm sorry, but it's incorrect, as stated. It's a false statement that has no relation to reality, with the context provided.

Staying the same, as a percentage, is not "going down". If you add two things with error together, the absolute tolerance adds. The relative tolerance (percentage) may stay the same, or even reduce if you mix in a better tolerance part, but, as stated, it's incorrect.

It's a common misunderstanding, and misapplication of statistics, as some of the other comments show. You can't use population statistics for low sample sizes with any meaning, which is why tolerance exists: the statistics are not useful, only the absolutes are, when selecting components in a deterministic application. In my career, I’ve seen this exact misunderstanding cause many millions of dollars in loss, in single production runs.


It only stays the same if you have the worst luck.

> You can't use population statistics for low sample sizes with any meaning

Yes you can. I can say a die roll should not be 2, but at the same time I had better not depend on that. Or more practically, I can make plans that depend on a dry day as long as I properly consider the chance of rain.

> In my career, I’ve seen this exact misunderstanding cause many millions of dollars in loss, in single production runs.

Sounds like they calculated the probabilities incorrectly. Especially because more precise electrical components are cheap. Pretending probability doesn't exist is one way to avoid that mistake, but it's not more correct like you seem to think.


I've repeatedly used a certain word in what I wrote, since it has incredible meaning in the manufacturing and engineering world, which is the context we're speaking within. It's a word that determines the feasibility of a design in mass production, and a metric for whether an engineer is competent or not: determinism. That is the goal of a good design.

> It only stays the same if you have the worst luck.

And, you will get that "worst luck" thousands of times in production, so you must accommodate it. Worse, as others have said, the distributions are not normal. Most of the << 5% devices are removed from the population, and sold at a premium. There's a good chance your components will be close to +5% or -5%.

> Yes you can. I can say a die roll should...

No you cannot. Not in the context we're discussing. If you make an intentional decision to rely on luck, you're intentionally deciding to burn some money by scrapping a certain percentage of your product. Which is why nobody makes that decision. It would be ridiculous because you know the worst case, so you can accommodate it in your design. You don't build something within the failure region (population statistics). You don't build something at the failure point (tolerance); you make the result of the tolerance negligible in your design.

> Sounds like they calculated the probabilities incorrectly.

Or, you could look at it as being a poorly engineered system that couldn't accommodate the components they selected, where changing the values of some same-priced periphery components would have eliminated it completely.

Relying on luck for a device to operate is almost never a compromise made. If that is a concern, then there's IQC or early testing to filter out those parts/modules, to make sure the final device is working with a known tolerance that the design was intentionally made around.

Your perspective is very foreign to the engineering/manufacturing world, where determinism is the goal, since non-determinism is so expensive.


> If you make an intentional decision to rely on luck, you're intentionally deciding to burn some money by scrapping a certain percentage of your product. Which is why nobody makes that decision.

Now this is complete nonsense. Lots of production processes do that. It depends on the cost of better tooling and components, and the cost of testing.

And... the actual probabilities! You're right that you can't assume a normal distribution. But that wouldn't matter if this was such a strict rule because normal distributions would be forbidden too.

Determinism is a good goal but it's not an absolute goal and it's not mandatory. You are exaggerating its importance when you declare any other analysis as "complete nonsense".

> since non-determinism is so expensive.

But your post gives off some pretty strong implications that you need a 0% defect rate, and that's not realistic either. There's a point where decreasing defects costs more than filtering them. This is true for anything, including resistors. It's just that high-quality resistors happen to be very cheap.


> Lots of production processes do that.

Please remain within the context we're speaking in: final design not components. When manufacturing a component, like a resistor or chip, you do almost always have a normal distribution. You're making things with sand, metal, etc. Some bits of crystal will have defects, maybe you ended up with the 0.01% in your 99.99% purity source materials, etc. You test and bin those components so they fall within certain tolerances, so the customer sees a deterministic component. You control the distribution the customer sees as much as possible.

Someone selecting components for their design will use the tolerance of the component as the parameter of that design. You DO NOT intentionally choose a part with a tolerance wider than your design can accommodate. As I said, if you can't source a component within the tolerance you need, you force that tolerance through IQC, so that your final design is guaranteed to work, because it's always cheaper to test a component than to test something that you paid to assemble with bad parts. You design based on a tolerance, not a distribution.

> Determinism is a good goal but it's not an absolute goal and it's not mandatory.

As I said, choosing to not be deterministic, by choosing a tolerance your design can't accommodate, is rare, because it's baking malfunction and waste into the design. That is sometimes done (as I said), but it's very rare, and absofuckinglutely never done with resistor tolerance selection.

> But your post gives off some pretty string implications that you need a 0% defect rate, and that's not realistic either.

> There's a point where decreasing defects costs more than filtering them.

No, defects are not intentional, by definition. There will always be defects. A tolerance is something that you can rely on, when choosing a component, because you are guaranteed to only get loss from actual defects, with the defects being bad components outside the tolerance. If you make a design that can't accommodate a tolerance that you intentionally choose, it is not a defect, it's part of the design.

0% defect has nothing to do with what I'm saying. I'm saying intentionally choosing tolerances that your design can't accommodate is very very rare, and almost always followed by IQC to bin the parts, to make sure the tolerance remains within the operating parameters of the design.

I feel like this has led us in circles. I suggest re-reading the thread.

Maybe you could give an example where you see this being intentionally done (besides Wish.com merchandise, where I would question if it's intentional).



