
What exactly are guardbands?


c_cook

Hi Everyone,

In much of my reading and work I have come across the term 'guardband,' referring to something that is used to mitigate undesirable effects in a design. In my case, I see guardbands mentioned as something used to reduce the effects of electromigration at the cost of timing and area, but no one ever really says what that guardband is. Unfortunately, I have not been able to find an actual definition or explanation of exactly what a guardband is; instead, I have found conflicting or vague descriptions.

Some guesses I have from my research of the topic are:

1. An actual physical feature implemented in layout to isolate and protect parts of the circuit.

2. Another term for Design Rules, e.g., minimum wire width, wire spacing, etc.

3. A general term for any method, technology, or feature used to mitigate or correct undesirable effects in the design.

I have a feeling that it really depends on the context and is used to describe many different things.

Thanks,
 

1. Sometimes. In an analog context, guardband could be interpreted as shielding, isolation, separation, etc. It is physical.
2. Never.
3. Mostly. In a digital context, guardband usually means timing margin.
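To make the "timing margin" sense concrete, here is a toy Python sketch of how a guardband on a critical path tightens the clock-period requirement. All numbers and names are invented for illustration, not taken from any real sign-off flow:

```python
def required_period(logic_delay_ns, setup_ns, guardband_ns):
    """Minimum clock period once a timing guardband (margin) is added.

    The guardband absorbs unmodeled effects (on-chip variation, aging,
    jitter) at the cost of performance: a larger margin forces a slower
    clock. Purely illustrative numbers and names.
    """
    return logic_delay_ns + setup_ns + guardband_ns

# Without a guardband this path "just fits" at 1 GHz...
print(required_period(0.95, 0.05, 0.0))   # 1.00 ns -> 1 GHz
# ...but a 100 ps guardband costs roughly 9% of frequency.
print(required_period(0.95, 0.05, 0.10))  # 1.10 ns -> ~909 MHz
```

This is the performance-vs-reliability tradeoff in its simplest form: every picosecond of margin is a picosecond you cannot spend on logic.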
 

I have never heard of "guardbands" used as a means
of improving electromigration. Guardrings in I/O
(and less commonly, core) circuitry are for latchup
and snapback suppression. If you wade into any
foundry PDK document set you ought to find material
on ESD and latchup rules which will prescribe such
structures' design details.

Guardbanding limits is a common exercise in test
development, especially for products that see a lot
of test insertions. This has nothing to do with chip
physical design, only test-flow economics.
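As a rough illustration of that test-limit guardbanding (the values are invented; a real flow would derive the offset from tester gauge R&R data, not a single number):

```python
def guardbanded_limits(lo, hi, meas_uncertainty):
    """Tighten test limits inward by the tester's measurement uncertainty.

    A part that measures inside the guardbanded limits is then inside the
    datasheet spec with high confidence, even if the tester reads a bit
    off. Illustrative sketch only.
    """
    if hi - lo <= 2 * meas_uncertainty:
        raise ValueError("uncertainty eats the whole spec window")
    return lo + meas_uncertainty, hi - meas_uncertainty

# Datasheet spec 1.70-1.90 V, tester accurate to +/-0.02 V:
print(guardbanded_limits(1.70, 1.90, 0.02))  # test against 1.72-1.88 V
```

The economics follow directly: tighter test limits reject some good parts (yield loss) in exchange for never shipping a part that is out of spec.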
 

This is a very good question!

Very often, in our life and work, people use various terms and words whose meaning is assumed to be obvious or well known, when in fact it is far from being well known and obvious.
Often, when someone asks, "What does that exactly mean?", most people are not able to explain it clearly, or do not even know or understand what lies behind these terms and concepts.

Sometimes, one needs to be courageous, to ask such questions.

So, thank you for asking.

"Guardbanding" is a very general term or concept, meaning - given some uncertainties and variations in real life (design, process, manufacturing and design tools, fabs, ...) - come up and enforce some rules that would guarantee certain design characteristics (functionality, yield, reliability, etc. etc.), in spite of these unknowns and variations.

Guardbanding is a very useful, practical, engineering thing, and is applicable to many different areas.

Guardbanding usually involves some tradeoffs - like reliability vs performance, or power (leakage) vs performance, or yield vs area, or power vs area, and so on.

This is very closely related to (but much broader than) a concept that has gained a lot of popularity in IC design in the last 10-20 years, called "design for manufacturing" (DFM).

I will give a simple example.
In IC design, the "nominal" process has some spread - some variations in device characteristics, BEOL parameters, etc. - I am talking about global variations here, such as wafer-to-wafer, lot-to-lot, tool-to-tool, etc.
That variations affect both devices (SPICE model parameters), and interconnects (metal / dielectric thicknesses, misalignments, etc.).
To account for these global variations, and to make sure the design (IC) would work for any reasonable variation of the nominal process (say, within 3 sigma window) - and this is called guardbanding against process variations - the foundries came up with so called "corners" - process conditions located at some distance from the nominal, usually at 3 sigma, but sometimes 1.5 sigma, or something else).
Designers have to run simulation / verification for different corners or for combinations of different corners, to make sure their designs are still functional and are within the specs for performance and other parameters (yield, reliability,...).
For devices, these corners are usually called "slow", "typical", "fast", and they are independent for N type and P type MOSFETs - so they are called SS (slow-slow - for nMOS and pMOS), FF (fast-fast), SF (slow-fast), etc. (slow is for high Vt = low performance and low leakage, fast is for low Vt = high performance and high leakage).
For BEOL, these corners may be called like RC worst, RC best, typical, C worst, C best, etc. - for different assumptions on variation of metal thickness, dielectric thickness, and other process conditions, with respect to their affect on resistance, capacitance, and RC delay.

In reality, there are more conditions for the corners - including voltage and temperature - that's why these corners are usually called PVT (Process / Voltage / Temperature).
In advanced technologies, people have to verify many hundreds of different corners...
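A quick sketch of why the corner count explodes so fast; the axis values below are made up, and in practice they come from the foundry PDK and the product's operating range:

```python
from itertools import product

# Illustrative PVT corner enumeration. Every axis multiplies the number
# of sign-off runs, which is why advanced nodes end up with hundreds.
process = ["SS", "TT", "FF", "SF", "FS"]   # device corners
voltage = [0.72, 0.80, 0.88]               # Vdd nominal +/- 10% (invented)
temperature = [-40, 25, 125]               # degrees C (invented)

corners = [(p, v, t) for p, v, t in product(process, voltage, temperature)]
print(len(corners))  # 5 * 3 * 3 = 45 runs, from just three small axes
```

Add BEOL corners, aging, and multiple operating modes as further axes in the same product, and "many hundreds" follows immediately.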

Another example - suppose there are two n-wells sitting in a p-well (or p-substrate), and we want to find out the minimal allowable distance between the well edges.
The tradeoff here is between area (the smaller the distance, the better) and leakage between these two n-wells (the smaller the distance, the worse, because the depletion regions of the two p-n junctions merge, and the potential barrier between the wells finally disappears as we move them closer to each other).
Suppose we made and measured a series of test structures, with different distances between the wells, from very large to very small.
We plot the dependence of the leakage as a function of the distance.
Then we draw a horizontal line at a leakage level that is acceptable.
We find what distance corresponds to the maximum acceptable leakage.
Then we increase this distance by 10%, or 20%, or 0.1 um, or whatever - here the knowledge of the problem is very important, and knowledge of process control - and declare this to be a design rule for distance between the wells.
The increase is done to account for possible unknowns - like mask misalignment, or doping level variation (in n-well or p-substrate), or something else.
This would ensure that the leakage does not exceed the allowed level no matter what process variations are happening in the fab.
This increase of the distance is called "guardbanding".
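The well-spacing procedure above can be sketched in a few lines of Python. All the numbers here are invented; real data would come from measured silicon test structures, and the margin from knowledge of the process control:

```python
# Turning measured leakage-vs-spacing data into a guardbanded design rule.
spacing_um = [0.2, 0.4, 0.6, 0.8, 1.0, 1.5]       # well-to-well distance
leakage_na = [900.0, 120.0, 18.0, 4.0, 1.2, 0.3]  # measured leakage (nA)

LIMIT_NA = 5.0    # maximum acceptable leakage (horizontal line on the plot)
MARGIN_UM = 0.1   # guardband for misalignment, doping spread, ...

# Smallest measured spacing that still meets the leakage limit:
min_ok = min(s for s, i in zip(spacing_um, leakage_na) if i <= LIMIT_NA)
design_rule = min_ok + MARGIN_UM
print(f"{design_rule:.2f} um")  # 0.8 measured + 0.1 guardband -> 0.90 um
```

The guardband is exactly the `MARGIN_UM` term: the published design rule sits deliberately beyond the distance where the data says the leakage is already acceptable.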

Yet another example, from non-technical life - you have an appointment (dental, lunch, girlfriend, customer, ...), and it normally takes 10 min to drive from your home to the meeting place.
But you set off 15 or 20 min before the start of the meeting, instead of 10 min, to make sure you are not late due to unforeseen circumstances - traffic, road accidents, etc.
This is also guardbanding, but normally not called so for non-technical situations.

I have not seen any book or other publication that would explain the concept of guardbanding similar to my explanation above.

Max
----
 

Suppose you have an input that is high-impedance and low-voltage (e.g., an electrometer input) that we want to connect to an op-amp input. The PCB traces are congested, and there may be leakage currents that will upset the measurements. So we draw a trace (this is also called a guard band) fully encircling the input trace, and this guard is driven by a unity-gain amplifier (the first amplifier the signal meets). Because the guard and the signal trace are at the same potential, the leakage error is absent.
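A toy Ohm's-law sketch of why the driven guard works; the voltages and leakage resistance below are illustrative, not from any particular board:

```python
def leakage_current_na(v_node, v_neighbor, r_leak_gohm):
    """Leakage current into a high-impedance node across a board leakage path.

    The driven guard works because driving the surrounding copper to the
    node's own potential makes the voltage across the leakage path, and
    hence the current, essentially zero. Illustrative values only.
    """
    return (v_neighbor - v_node) / r_leak_gohm  # nA, since V / GOhm = nA

# A 5 V trace leaking into a 10 mV electrometer input across 100 GOhm:
print(leakage_current_na(0.010, 5.0, 100.0))   # ~0.05 nA of error current
# Same path, but the neighbor is now a driven guard at the node potential:
print(leakage_current_na(0.010, 0.010, 100.0)) # 0.0 - the error vanishes
```

0.05 nA is an enormous error at electrometer current levels, which is why the guard is driven rather than simply grounded: a grounded guard would still leave 10 mV across the leakage path.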
 

