
I also thought that was interesting. Also, wouldn't the tolerance be doubled when you add them in series? Or does it still average out to +/- 5%?



Fun fact: afaik component values are often distributed in a bimodal way, because +/-5% often means the +/-1% parts have already been sorted out to sell as a different, more expensive batch. At least it used to be that way; I wonder if it's still worth doing in production. So I guess one could also measure to average things out, otherwise the relative errors will stay the same.


If you can measure them with that precision, would it make sense to sell them with that accuracy too? So if you tried to manufacture a resistor at 68kΩ +/- 20%, and it actually ended up at 66kΩ +/- 1%, couldn't you now sell it as an E192 product, which according to TFA is more expensive?

Selling with different tolerances only makes sense to me if the product can't be reliably measured to have a tighter tolerance, perhaps if the low-quality ones are expected to vary over their life or if it's too expensive to test each one individually and you have to rely on sampling the manufacturing process to guess what the tolerances in each batch should be.


Resistors with worse tolerances may be made out of cheaper, less refined wire, whose resistance varies more with temperature. The tolerance on the resistance has to hold over a temperature range. For more reading, look up "constantan".


Most resistors don't use wire, but a film of carbon (cheaper, usually the E12 / 5% tolerance parts) or metal (E24, or 1% and tighter tolerances) deposited on a non-conducting body. Wire means winding into a coil, which means increased inductance.

I suspect in most cases the tolerances are a direct result of the fabrication process. That is: process X, within such & such parameters, produces parts with Y tolerance. But there could be some trimming involved (like a laser burning off material until the component has the correct value). Or the parts are measured & then binned / marked accordingly.

Actual wire is used for power resistors, like rated for 5W+ dissipation. Inductance rarely matters for their applications.


Accuracy depends on the technology used. Carbon comp tends to have less accuracy than carbon film. And it's not true that higher accuracy is always better.

Some accurate resistors are essentially wound coils: they have high inductance and will also emit and pick up magnetic interference. Stuff like that often matters a lot.


Thanks, always good to remember that the tolerance of a resistor is not just a manufacturing number but also defined over the specified temperature range.


Depends on where in the production line they are being tested. If they are tested after they've had their color bands applied, then you wouldn't be able to sell one as a 66kΩ, since the markings would be for a 68kΩ.


The issue is probably volume. Very few applications need a resistor that's exactly 66kΩ, but a lot of applications need resistors that are in the ballpark of 68kΩ (but nobody would really notice if some 56kΩ resistors slipped in there).

For every finely tuned resonance circuit there are a thousand status LEDs where nobody cares if one product ships with a brighter or dimmer LED.


Unless the components are expensive, that proposition seems dubious. It's much more economical to take a process that produces everything within 12% centered on the desired value and sell it as ±20%. 100% inspection is generally to be avoided in mass production, except in cases where the process cannot reach that capability, chip manufacturing being the classic example. For parts that cost a fraction of a penny, nobody is inspecting to find the jewels in the rough.


Actually it seems to really be the case that multimodal distributions are rather the result of batches not sharing a common mean, i.e. an effect of systematic error [1]. I guess it really is a myth (we did low-cost RF designs back in 2005 and had some real issues with frequencies not aligning due to component spread, and I really remember that bimodality problem, but I guess Occam's razor should have told me that it makes no economic sense).

[1] https://www.eevblog.com/2011/11/14/eevblog-216-gaussian-resi...


Yup, forever the reason for the trim pot.


I'm not sure where the line is, but at some point things like temperature matter, so a loose-tolerance resistor that happens to pass the tests still can't be sold as a tight-tolerance part.


> Also, wouldn't the tolerance be doubled when you add them in series? Or does it still average out to +/- 5%?

Neither.

Let R_{1, ideal}, R_{2, ideal} be the "ideal" resistances; both with the same tolerance t (in your example t = 0.05).

This means that the real resistances R_{1, real}, R_{2, real} satisfy

(1-t) R_{1, ideal} ≤ R_{1, real} ≤ (1+t) R_{1, ideal}

(1-t) R_{2, ideal} ≤ R_{2, real} ≤ (1+t) R_{2, ideal}

Adding these inequalities yields

(1-t) (R_{1, ideal} + R_{2, ideal}) ≤ R_{1, real} + R_{2, real} ≤ (1+t) (R_{1, ideal} + R_{2, ideal})

So connecting two resistors with identical tolerance in series simply keeps the tolerance identical.
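A quick numerical sanity check, as a sketch in Python (the 10 kΩ value is arbitrary):

    # Worst-case check: two 10 kOhm resistors at +/-5%, connected in series.
    nominal, t = 10_000, 0.05
    extremes = [(1 - t) * nominal, (1 + t) * nominal]
    totals = [r1 + r2 for r1 in extremes for r2 in extremes]
    print(min(totals), max(totals))                    # 19000.0 21000.0
    print(min(totals) / 20_000, max(totals) / 20_000)  # 0.95 1.05, i.e. still +/-5%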


tolerance should actually go down since the errors help cancel each other out.

reference: https://people.umass.edu/phys286/Propagating_uncertainty.pdf

disclaimer: it will be a relatively small effect for just two resistors

aleph's comment is also correct. the bounds they quote are a "worst-case" bound that is useful enough for real world applications. typically, you won't be connecting enough resistors in series for this technicality to be worth the additional work it causes.
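for illustration, a quick monte carlo sketch in python (assuming independent, uniformly distributed errors, which real parts don't necessarily have):

    import numpy as np

    rng = np.random.default_rng(0)
    nominal, tol, samples = 10_000, 0.05, 100_000

    for n in (1, 2, 4, 16):
        # n resistors in series, each drawn uniformly within +/-5% of nominal
        parts = rng.uniform((1 - tol) * nominal, (1 + tol) * nominal, (samples, n))
        totals = parts.sum(axis=1)
        # relative spread shrinks roughly as 1/sqrt(n)
        print(n, round(totals.std() / totals.mean(), 4))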


Note that tolerance and uncertainty are different. Tolerance is a contract provided by the seller that a given resistor is within a specific range. Uncertainty is due to your imprecise measuring device (as they all are in practice).

You could take a 33k Ohm resistor with 5% tolerance, and measure it at 33,100 +/- 200 Ohm. At that point, the tolerance provides no further value to you.


It’s not nearly that simple :) Component values change with environmental factors like temperature and humidity. Resistors that have a 1% rating don’t change as much over a range of temperatures as 5% or 10% components do. This is typically accomplished by making the 1% resistors using different materials and construction techniques than the looser-tolerance parts. Just taking a single measurement is not enough.


If values are normally distributed, random errors accumulate with the square root of the number of components. Four components in series have 2x the absolute uncertainty overall, but if you divide that doubled uncertainty by four times the resistance, it's half the percentage uncertainty as before. (I avoid using the word "tolerance" because someone will argue whether it really works this way.)

In reality, some manufacturers may measure some components, and the ones within 1% get labeled as 1%, then it may be that when you're buying 5% components that all of them are at least 1% off, and the math goes out the window since it isn't a normal distribution.
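To illustrate that last point, here's a rough sketch comparing a plain normal spread with a "binned" pool where everything within 1% has been pulled out (all numbers made up):

    import numpy as np

    rng = np.random.default_rng(0)
    nominal, samples, n_series = 10_000, 200_000, 4

    # a plain normal spread, roughly within +/-5% of nominal
    plain = rng.normal(nominal, 0.02 * nominal, samples)

    # a "binned" pool: same process, but everything within +/-1% was removed
    # and sold as 1% parts, leaving a bimodal distribution with a hollow middle
    binned = plain[np.abs(plain - nominal) > 0.01 * nominal]

    for name, pool in (("plain", plain), ("binned", binned)):
        # sum n_series randomly chosen parts and look at the relative spread
        totals = rng.choice(pool, (samples, n_series)).sum(axis=1)
        print(name, round(totals.std() / totals.mean(), 4))

(Here the averaging still helps, just from a worse starting spread; a systematic offset across the whole batch would not average out at all.)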


I wonder about the effect of different wiring patterns. For example, you can combine N^2 resistors in N parallel strips of N resistors in series.

I expect that in this case the uncertainty would decrease.
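A rough Monte Carlo sketch of that arrangement (independent, uniform errors assumed; the values are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    nominal, tol, trials = 100.0, 0.05, 50_000

    for n in (1, 2, 4, 8):
        # N parallel strips, each strip being N resistors in series
        parts = rng.uniform((1 - tol) * nominal, (1 + tol) * nominal, (trials, n, n))
        strips = parts.sum(axis=2)                # series: resistances add
        grid = 1.0 / (1.0 / strips).sum(axis=1)   # parallel: conductances add
        # relative spread of the N x N grid shrinks roughly as 1/N
        print(n, round(grid.std() / grid.mean(), 5))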


In the article's example, I'd prefer 2 resistors in parallel. That way the result is less dramatic if one resistor were to be knocked off the board or fail.

E.g. one resistor slightly above the desired value, and a much higher value in parallel to fine-tune the combination. Or ~210% and ~190% of the desired value in parallel.

That said: it's been a long time since I used a 10% tolerance resistor. Or where a 1% tolerance part didn't suffice. And 1% tolerance SMT resistors cost almost nothing these days.
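For the record, a quick sketch of those two parallel schemes with made-up values (target 10 kΩ):

    def parallel(r1, r2):
        # two resistors in parallel: conductances add
        return 1.0 / (1.0 / r1 + 1.0 / r2)

    target = 10_000
    # one part slightly above target, plus a much larger one to trim it down
    print(parallel(10_500, 220_000))             # ~10022 ohms
    # or ~210% and ~190% of the target in parallel
    print(parallel(2.1 * target, 1.9 * target))  # 9975 ohms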


This might be why pretty much all LED lightbulbs/fixtures have two resistors in parallel, used on the driver chip's control pin that sets the current to deliver via some specific resistance value.

It's always a small and a large resistor: the higher this control resistance, the lower the driving current.

Cut off the high-value resistor to increase the resistance a bit. In my experience this often almost halves the driving current, and cuts up to 30% of the light output (yes, I measured).

Not only are most modern lights too bright to start with anyway, this also fixes the intentional overdriving of the LEDs for planned obsolescence. The light will last pretty much forever now.
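A rough sketch of the arithmetic, with hypothetical resistor values and assuming the driver sets its output current inversely proportional to this control resistance (the exact relation depends on the chip):

    def parallel(r1, r2):
        return 1.0 / (1.0 / r1 + 1.0 / r2)

    # hypothetical values: a smaller and a larger current-set resistor in parallel
    r_small, r_large = 5_100, 6_200
    r_before = parallel(r_small, r_large)   # ~2798 ohms
    r_after = r_small                       # large resistor cut off the board

    # assumed (chip-dependent) behaviour: I_out proportional to 1 / R_set
    print("current ratio after/before:", r_before / r_after)   # ~0.55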


Iterating either of

f(x) = 3/(1/x + 1/110 + 1/90)

g(x) = 1/(1/(3*x) + 1/(3*110) + 1/(3*90))

Seems to show that 100 is a stable attractor.

So I will postulate without much evidence that if you link N^2 resistors with average resistance h in a way that would theoretically give you a resistor with resistance h you get an error that is O(1/N)


> tolerance should actually go down since the errors help cancel each other out.

Complete nonsense. The tolerance doesn't go down, it's now +/- 2x, because component tolerance is the allowed variability, by definition, worst case, not some distribution you have to rely on luck for.

Why do they use allowed variability? Because determinism is the whole point of engineering, and no EE will rely on luck for their design to work or not. They'll understand that, during a production run, they will see the combinations of the worst case value, and they will make sure their design can tolerate it, regardless.

Statistically you're correct, but statistics don't come into play for individual devices, which need to work, or they cost more to debug than produce.


The total tolerance is not +/- 2x, because the denominator of the calculation also increases. You can add as many 5% resistors in series as you want and the worst case tolerance will remain 5%. (Though the likely result will improve due to errors canceling.)

For example, say you're adding two 10k resistors in series to get 20k, and both are in fact 5% over, so 10,500 each. The sum is then 21000, which is 5% over 20k.


> Statistically you're correct,

The Central Limit Theorem (which says if we add a bunch of random numbers together they'll converge on a bell curve) only guarantees that you'll get a normal distribution. It doesn't say where the mean of the distribution will be.

Correct me if I'm wrong, but if your resistor factory has a constant skew making all the resistances higher than their nominal value, a bunch of 6.8K + 6.8K resistors will not on average approximate a 13.6K resistor. It will start converging on something much higher than that.

Tolerances don't guarantee any properties of the statistical distribution of parts. As others have said, oftentimes it can even be a bimodal distribution because of product binning; one production line can be made to make different tolerances of resistors. An exactly 6.8K resistor gets sold as 1% tolerance while a 7K gets sold as 5%.


> Tolerances don't guarantee any properties of the statistical distribution of parts.

That's incorrect. They, by definition, guarantee the maximum deviation from nominal. That is a property of the distribution. Zero "good" parts will be outside of the tolerance.

> It will start converging on something much higher than that.

Yes, and that's why tolerance is used, and manufacturer distributions are ignored. Nobody designs circuits around a distribution, which requires luck. You guarantee functionality by a tolerance, worst case, not a part distribution.


> The Central Limit Theorem (which says if we add a bunch of random numbers together they'll converge on a bell curve) only guarantees that you'll get a normal distribution. It doesn't say where the mean of the distribution will be.

That's kind of overstating and understating the issue at the same time. If you have a skewed distribution you might not be able to use the central limit theorem at all.


>If you have a skewed distribution you might not be able to use the central limit theorem at all.

The CLT only requires finite variance. Skew can be infinite and you still get convergence to normality ... eventually. Finite skew gives you 1/sqrt(N) convergence.
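A quick sketch of that convergence, using an exponential distribution (heavily skewed) purely as an illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_skewness(x):
        # third standardized moment
        return ((x - x.mean()) ** 3).mean() / x.std() ** 3

    for n in (1, 4, 25, 100):
        # sums of n draws from a heavily skewed (exponential) distribution
        sums = rng.exponential(1.0, (50_000, n)).sum(axis=1)
        # skewness of the sum falls roughly as 1/sqrt(n)
        print(n, round(sample_skewness(sums), 2))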


If you're going to say "Complete nonsense." you shouldn't get the calculation wrong in your next sentence.


Very true, I was writing as absolute value, not % (magnitude is where my day job is). My point still stands: it is complete nonsense that tolerance goes down.


They said it "should" go down, but that another comment saying the worst case is the same is "also correct".

I do not see any "complete nonsense" here. I suppose they should have used a different word from "tolerance" for the expected value, but that's pretty nitpicky!


I'm sorry, but it's incorrect, as stated. It's a false statement that has no relation to reality, with the context provided.

Staying the same, as a percentage, is not "going down". If you add two things with error together, the absolute tolerance adds. The relative tolerance (percentage) may stay the same, or even reduce if you mix in a better tolerance part, but, as stated, it's incorrect.

It's a common misunderstanding, and misapplication of statistics, as some of the other comments show. You can't use population statistics for low sample sizes with any meaning, which is why tolerance exists: the statistics are not useful, only the absolutes are, when selecting components in a deterministic application. In my career, I’ve seen this exact misunderstanding cause many millions of dollars in loss, in single production runs.


It only stays the same if you have the worst luck.

> You can't use population statistics for low sample sizes with any meaning

Yes you can. I can say a die roll should not be 2, but at the same time I had better not depend on that. Or more practically, I can make plans that depend on a dry day as long as I properly consider the chance of rain.

> In my career, I’ve seen this exact misunderstanding cause many millions of dollars in loss, in single production runs.

Sounds like they calculated the probabilities incorrectly. Especially because more precise electrical components are cheap. Pretending probability doesn't exist is one way to avoid that mistake, but it's not more correct like you seem to think.


I've repeatedly used a certain word in what I wrote, since it has incredible meaning in the manufacturing and engineering world, which is the context we're speaking within. It's a word that determines the feasibility of a design in mass production, and a metric for whether an engineer is competent or not: determinism. That is the goal of a good design.

> It only stays the same if you have the worst luck.

And, you will get that "worst luck" thousands of times in production, so you must accommodate it. Worse still, as others have said, the distributions are not normal. Most of the << 5% devices are removed from the population and sold at a premium. There's a good chance your components will be close to +5% or -5%.

> Yes you can. I can say a die roll should...

No you cannot. Not in the context we're discussing. If you make an intentional decision to rely on luck, you're intentionally deciding to burn some money by scrapping a certain percentage of your product. Which is why nobody makes that decision. It would be ridiculous because you know the worst case, so you can accommodate it in your design. You don't build something within the failure point (population statistics). You don't build something at the failure point (tolerance), you make the result of the tolerance negligible in your design.

> Sounds like they calculated the probabilities incorrectly.

Or, you could look at it as being a poorly engineered system that couldn't accommodate the components they selected, where changing the values of some same-priced peripheral components would have eliminated it completely.

Relying on luck for a device to operate is almost never a compromise made. If that is a concern, then there's IQC or early testing to filter out those parts/modules, to make sure the final device is working with a known tolerance that the design was intentionally made around.

Your perspective is very foreign to the engineering/manufacturing world, where determinism is the goal, since non-determinism is so expensive.


> If you make an intentional decision to rely on luck, you're intentionally deciding to burn some money by scrapping a certain percentage of your product. Which is why nobody makes that decision.

Now this is complete nonsense. Lots of production processes do that. It depends on the cost of better tooling and components, and the cost of testing.

And... the actual probabilities! You're right that you can't assume a normal distribution. But that wouldn't matter if this was such a strict rule because normal distributions would be forbidden too.

Determinism is a good goal but it's not an absolute goal and it's not mandatory. You are exaggerating its importance when you declare any other analysis as "complete nonsense".

> since non-determinism is so expensive.

But your post gives off some pretty strong implications that you need a 0% defect rate, and that's not realistic either. There's a point where decreasing defects costs more than filtering them. This is true for anything, including resistors. It's just that high quality resistors happen to be very cheap.


> Lots of production processes do that.

Please remain within the context we're speaking in: final design not components. When manufacturing a component, like a resistor or chip, you do almost always have a normal distribution. You're making things with sand, metal, etc. Some bits of crystal will have defects, maybe you ended up with the 0.01% in your 99.99% purity source materials, etc. You test and bin those components so they fall within certain tolerances, so the customer sees a deterministic component. You control the distribution the customer sees as much as possible.

Someone selecting components for their design will use the tolerance of the component as the parameter of that design. You DO NOT intentionally choose a part with a tolerance wider than your design can accommodate. As I said, if you can't source a component within the tolerance you need, you force that tolerance through IQC, so that your final design is guaranteed to work, because it's always cheaper to test a component than to test something that you paid to assemble with bad parts. You design based on a tolerance, not a distribution.

> Determinism is a good goal but it's not an absolute goal and it's not mandatory.

As I said, choosing to not be deterministic, by choosing a tolerance your design can't accommodate, is rare, because it's baking malfunction and waste into the design. That is sometimes done (as I said), but it's very rare, and absofuckinglutely never done with resistor tolerance selection.

> But your post gives off some pretty strong implications that you need a 0% defect rate, and that's not realistic either.

> There's a point where decreasing defects costs more than filtering them.

No, defects are not intentional, by definition. There will always be defects. A tolerance is something that you can rely on, when choosing a component, because you are guaranteed to only get loss from actual defects, with the defects being bad components outside the tolerance. If you make a design that can't accommodate a tolerance that you intentionally choose, it is not a defect, it's part of the design.

0% defect has nothing to do with what I'm saying. I'm saying intentionally choosing tolerances that your design can't accommodate is very very rare, and almost always followed by IQC to bin the parts, to make sure the tolerance remains within the operating parameters of the design.

I feel like this has led us in circles. I suggest re-reading the thread.

Maybe you could give an example where you see this being intentionally done (besides Wish.com merchandise, where I would question if it's intentional).


Nope, still averages to +/- 5%.

To give an example, let's say you've got two resistors of 100 Ohm +/- 5%. That means each is actually 95-105 Ohm. Two of them is 190-210 Ohm. Still only a 5% variance from 200 Ohm.


Can you assume that the +/-5% error isn't uniformly distributed? If so, the tolerance in practice may well end up even smaller.


There's a fundamental misunderstanding here.

Tolerance is a specification/contractual value - it's the "maximum allowable error". It's not the error of a specific part, it's the "good enough" value. If you need 100 +/- 5%, any value between 95 and 105 is good enough.

Using two components might cancel out some of the error, as you describe. On average, most of the widgets you make by using 2 resistors instead of one may be closer to nominal, but any total value between 95 and 105 would still be acceptable, since the tolerance is specified at 5%.

To change the tolerance you need to have the engineer(s) change the spec.



