
[SOLVED] Designing for reliability: 2x voltage rating


Jester
When selecting devices such as capacitors, diodes, FETs, etc., I was taught to select a part with twice the voltage rating to ensure reliability.

In some cases a part with, say, a 1.8x rating is significantly less expensive than the next step up. For example, consider a 4.7 µF input capacitor for a buck regulator with a nominal input voltage of 28 V DC.

The 50 V part is priced at $0.28 (qty 1) and $0.06 (qty 1,000): https://www.digikey.com/product-search/en?keywords=587-2994-1-ND

The 80 V part is priced at $1.03 (qty 1) and $0.41 (qty 1,000): https://www.digikey.com/product-detail/en/GRM32ER71K475KE14L/490-9972-1-ND/5026477

I'm inclined to use the 50 V part. The buck regulator is rated 40 V max, so using the 80 V part seems like overkill. Does anyone disagree?
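
A quick back-of-envelope on the margins involved (a minimal sketch in Python; the 40 V worst case is assumed to be the regulator's absolute-maximum rating mentioned above, and the prices are the qty-1,000 figures):

```python
# Quick derating arithmetic for the two candidate capacitors.
# Assumption: worst-case bus voltage equals the regulator's 40 V abs-max.

v_nominal = 28.0   # nominal input, V DC
v_worst = 40.0     # regulator's absolute-maximum input, V

for v_rated, price_qty1000 in [(50.0, 0.06), (80.0, 0.41)]:
    print(f"{v_rated:.0f} V part: {v_rated / v_nominal:.2f}x nominal, "
          f"{v_rated / v_worst:.2f}x worst-case, ${price_qty1000:.2f} @ 1k")
# 50 V: 1.79x nominal but only 1.25x worst-case
# 80 V: 2.86x nominal, 2.00x worst-case
```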
 

When selecting devices such as capacitors, diodes, FETs, etc., I was taught to select a part with twice the voltage rating to ensure reliability.

I doubt that it's a reasonable general guideline.

In the specific case of high-εr (class 2) ceramic capacitors, a factor-of-two voltage rating is often chosen because of the C/V (DC bias) characteristic rather than for reliability reasons. Determine the actual capacitance at 28 V DC from the datasheet and decide whether it's sufficient.
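
To make that check concrete, here is a minimal sketch. The bias-curve numbers below are invented placeholders, not values from the GRM32 datasheet; substitute the manufacturer's actual C/V data:

```python
# Estimate the effective capacitance of a class 2 (high-er) MLCC under DC bias.
# The bias curve below is a made-up placeholder; use the C/V data from the
# manufacturer's datasheet or online characterization tool instead.

import numpy as np

nominal_c_uf = 4.7

# (bias voltage V, remaining capacitance as a fraction of nominal) -- hypothetical
bias_curve = [
    (0.0, 1.00),
    (10.0, 0.85),
    (20.0, 0.65),
    (30.0, 0.50),
    (40.0, 0.38),
    (50.0, 0.30),
]

def effective_capacitance(v_bias, curve, c_nominal):
    """Linearly interpolate the fraction-remaining curve at v_bias."""
    volts = [v for v, _ in curve]
    fracs = [f for _, f in curve]
    return c_nominal * np.interp(v_bias, volts, fracs)

c_at_28v = effective_capacitance(28.0, bias_curve, nominal_c_uf)
print(f"Effective capacitance at 28 V: {c_at_28v:.2f} uF")
# If the regulator needs, say, >= 2.2 uF at its input, compare c_at_28v
# against that requirement rather than the 4.7 uF nameplate value.
```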
 
I too doubt it is a reasonable guideline.
In some cases it might be totally impractical or unnecessary.

With capacitors I would usually be more concerned about operating temperature and ripple current rating than about specifying voltage ratings vastly beyond anything the part will ever see.

The provenance of the actual parts themselves is becoming a real concern these days too. There are those counterfeit "Dragon Brand" parts that have caused circuit designers and production people real grief.
 
Old-timey rules of thumb were based on the materials systems, the quality, and the reliability qualification methods of the day.

Now you have better materials quality, but you also have vendors designing every bit of slack out of the product, pushing those materials and construction harder toward zero input cost. And the industry has gotten more "sophisticated" in the sense of applying "use models" to qualification and design-for-reliability (for example, saying that a cell phone spends only 5% of its service life actually transmitting, so your PA needs to survive 6 months of full-power operation and not 10 years - and, oh, by the way, that's at 40C and not 70C since it's in someone's pants pocket). The "padding" that used to flow from the hard-core MIL / industrial product family members has largely been burnt away by commercial pressures.

You have to peer really closely at the reliability info, if you can find it, to be sure that you're not pushing real hard on something that has no margin left, or on presumptions about life cycle and environment that suit somebody else but not you.
 
A very long time ago, as a youthful student of electronics, I attended a training course that was to teach us about calculating MTBF, so beloved by the military and aerospace people at that time.

This worthy fellow stood up before us all with a perfectly straight face and stated quite categorically that a 1K static RAM has twice the reliability of a 2K static RAM. He went on to explain that the larger circuit has twice as many failure points, so it must be twice as likely to fail.

So we all did these B.S. exercises, counting the numbers of various types of components in a circuit and multiplying each by a statistical reliability factor, and came up with an overall reliability figure supposedly accurate to about three significant figures.
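
For anyone who escaped such a course, the parts-count arithmetic boils down to summing per-component failure rates and inverting. A minimal sketch, with every failure rate invented for illustration rather than taken from any handbook:

```python
# Toy parts-count reliability estimate: sum per-part failure rates (lambda,
# in failures per 10^6 hours) and invert to get MTBF. The lambda values are
# invented for illustration; real parts-count methods (e.g. MIL-HDBK-217)
# apply quality, stress, and environment factors to tabulated base rates.

failure_rate_per_1e6_h = {   # hypothetical base failure rates
    "resistor": 0.002,
    "ceramic_cap": 0.010,
    "diode": 0.020,
    "ic": 0.100,
}

bill_of_materials = {"resistor": 40, "ceramic_cap": 25, "diode": 6, "ic": 3}

total_lambda = sum(qty * failure_rate_per_1e6_h[part]
                   for part, qty in bill_of_materials.items())  # per 10^6 h
mtbf_hours = 1e6 / total_lambda

print(f"Total failure rate: {total_lambda:.3f} per 1e6 hours")
print(f"MTBF: {mtbf_hours:,.0f} hours")
# Quoting this to three significant figures implies a precision the inputs
# never had -- which is exactly the point being mocked above.
```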

Dick Freebird has nailed the problem, in that electronic failures are ALL mechanical in nature, which comes down to materials science and quality of assembly.

We have come a very long way from the paper-and-wax insulated capacitors, solid carbon resistors, cotton-covered copper wire, and transformers assembled with wooden wedges typical of the 1930s.

Today it's more about penny-pinching, and outright fraud and deception in the supply chain.
 
I wouldn't say that all failures are mechanical, unless you take a mechanistic view of things like charge trapping, gate oxide wearout, and metal electromigration (all things that proceed at atomic scales when you get to the bottom of it all). But the same macro forces are at play, whether it's metal fatigue or oxide charging. People are no damn good and their products are no better. Present company excepted, naturally.

It might be true that a 1K SRAM is twice as reliable as a 2K -if- certain assumptions are indeed true - such as, the rate of failures is dominated by memory array wearout, the foundry processing is identical, geometries are unchanged, and so on - these latter two, at least, hardly being true across memory size generations. Probably the exercise was only for illustration of methods. But you know what they say about can, do, teach.

I don't know if the guy bothered to make the point that a system that needed 64K (woot! woot!) of RAM would see not much difference between (64) 1K and (32) 2K packaged parts reliability-wise, inward-looking. But the much greater number of solder joints probably dwarfs all the other actors, and this is one primary motivator for integration in "HiRel" electronics (another being, a 6-transistor part costs you $150 and a 100K-transistor part costs you $500, because the costs of qualifying short-run product are mostly incompressible human inputs amortized over hardly any units).
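
To put rough numbers on that trade-off (a minimal sketch; the failure rates, pin count, and the instructor's 2x assumption are all invented for illustration):

```python
# Back-of-envelope series-system comparison: 64 x 1K parts vs 32 x 2K parts,
# including solder joints. All failure rates are invented for illustration.

lam_1k = 0.05          # failures per 1e6 h per 1K SRAM die (hypothetical)
lam_2k = 2 * lam_1k    # the instructor's assumption: double the array, double the rate
lam_joint = 0.01       # per solder joint (hypothetical)
pins = 16              # joints per package, assuming a hypothetical DIP-16

lam_64x1k = 64 * (lam_1k + pins * lam_joint)
lam_32x2k = 32 * (lam_2k + pins * lam_joint)

print(f"64 x 1K system: {lam_64x1k:.2f} per 1e6 h")
print(f"32 x 2K system: {lam_32x2k:.2f} per 1e6 h")
# The die contributions are identical by construction (64*lam = 32*2*lam);
# the halved solder-joint count is what actually separates the two systems.
```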

But anyway, paying close attention to even one niche example will drive home the only worthy generalization - that generalizing and "similarity" arguments are the first refuge of the incompetent. Trust no one. Then at least you will not be disappointed. The surprise, however, will "probably" (heh) come regardless.
 
Reliability is a fascinating subject.

As a confirmed petrol head, I was astonished when EFI started to appear on mass-produced vehicles around the early 1970s. These things were so incredibly complicated, with their four-bit microprocessors and TTL logic - way beyond the ability to fix at the side of the road, even by a qualified mechanic, when the engine suddenly died and left you stranded.

Even today I am amazed at how reliable modern electronics are, given the ever-increasing complexity and component density.

If the old theory that adding up the numbers of resistors, capacitors, and silicon junctions gives an accurate estimate of mean time to failure were true, it seems we are truly beating the odds in a really spectacular way.
 