
Realistic Monte Carlo setup


komax

Hi All,

I'm wondering which Monte Carlo setup most accurately represents the statistical distribution seen in silicon in mass production.

There are two cases that I'm considering:
Case-1: Global Corner + Local MC
In this case I vary the corner manually (i.e. Typ, Fast, Slow, FS, SF) and run local MC at each of them.

Case-2: Global MC + Local MC (total MC)
In this case both the global corner and the local mismatch are varied statistically by the Monte Carlo engine.

If I were to look at the 3-sigma statistical distribution, which case would you say more realistically represents silicon?

If I want to be very safe, Case-1 would give me the worst results, but I want to argue that Case-2 is in fact the one that more accurately represents the silicon distribution, and that covering Case-1 is over-design. Does anyone agree with me on this?
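
To make the comparison concrete, here is a toy Python sketch of the two setups (not a real PDK flow). It assumes a single performance metric that depends linearly on one Gaussian global process parameter and one Gaussian local-mismatch parameter; all sigmas and coefficients are made up for illustration.

[CODE]
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

SIGMA_GLOBAL = 1.0   # 1-sigma of the die-wide (global) process variation
SIGMA_LOCAL = 0.3    # 1-sigma of the per-device local mismatch

def metric(global_shift, local_shift):
    # Hypothetical performance metric (say, an offset in mV); arbitrary coefficients.
    return 10.0 + 2.0 * global_shift + 5.0 * local_shift

# Case-1: pin the global parameter at hard corners, run local MC at each.
case1 = {}
for name, g in (("SS", -3.0 * SIGMA_GLOBAL), ("TT", 0.0), ("FF", 3.0 * SIGMA_GLOBAL)):
    case1[name] = metric(g, rng.normal(0.0, SIGMA_LOCAL, N))

# Case-2: total MC, global and local both sampled statistically.
case2 = metric(rng.normal(0.0, SIGMA_GLOBAL, N), rng.normal(0.0, SIGMA_LOCAL, N))

for name, r in case1.items():
    print(f"Case-1 {name}: mean = {r.mean():6.2f}, std = {r.std():5.2f}")
print(f"Case-2 total: mean = {case2.mean():6.2f}, std = {case2.std():5.2f}")
[/CODE]

Under these toy assumptions, Case-1 at the SS/FF corners shifts the mean to the 3-sigma global extreme with only the local spread on top, while Case-2 spreads its samples across the full joint distribution.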
 

MC simulations are generally based on the NOMINAL value of each process variable. MC draws statistical samples around this nominal value according to its own algorithm and gives you insight into how much statistical deviation may occur (and, of course, how circuit performance is affected by it).
Corners are another story...
 

Corners are another story...

Well, yes and no. Foundries recommend a mix of global and local MC, just like the OP mentioned. In my experience, Case-1 is the preferred method these days. Based on the terminology, I am almost certain that the OP is dealing with TSMC.

The idea here is that some process variation shifts affect the entire wafer or the entire die, so you make those a global corner. Other process variation shifts are more localized, affecting nearby transistors differently. If you let the MC engine explore both the corners and the local effects, it has too many parameters to play with at the same time. You may get good results from it, sure, but what foundries realized is that this analysis is too pessimistic because it ignores the fact that the parameters are correlated. For instance, a certain shift in the process makes the PMOS side stronger and the NMOS side weaker, but not both stronger (or both weaker) at the same time. Full global MC will, however, generate samples where both are weak and where both are strong. These are 'bad' samples; the engine could instead have spent its time exploring only the strong/weak scenarios.
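
To put numbers on that correlation argument, here is a small Python sketch; reducing NMOS/PMOS "strength" to one normalized Gaussian each and using a -0.8 anti-correlation are inventions for the example, not foundry data.

[CODE]
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Independent global sampling: NMOS and PMOS strength drawn separately.
indep = rng.normal(size=(N, 2))

# Correlated global sampling: a shift that strengthens the PMOS tends to
# weaken the NMOS (and vice versa). The -0.8 value is made up.
cov = [[1.0, -0.8],
       [-0.8, 1.0]]
corr = rng.multivariate_normal([0.0, 0.0], cov, size=N)

def both_extreme(samples, k=2.0):
    # Fraction of samples where NMOS and PMOS are both more than k-sigma
    # strong or both more than k-sigma weak -- the 'bad' samples above.
    n, p = samples[:, 0], samples[:, 1]
    return np.mean((np.abs(n) > k) & (np.abs(p) > k) & (np.sign(n) == np.sign(p)))

print("both-strong/both-weak tail, independent:", both_extreme(indep))
print("both-strong/both-weak tail, correlated :", both_extreme(corr))
[/CODE]

With independent draws a noticeable fraction of the samples land in the both-strong/both-weak quadrants; with the anti-correlated model they essentially vanish, which is the extra pessimism described above.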
 

Is it true that if we try Case 2 with a large number of samples, we should arrive at results that are similar to Case 1?

From what I have seen dealing with multiple foundries, some only provide mismatch/MC models for the typical corner, and the corner model files do not have mismatch/MC models at all.
So in order to get an accurate result we used to run a large number of samples to cover all possible cases.
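
As a rough guide to what "a large number of samples" means for a 3-sigma target, here is a back-of-the-envelope Python check; it treats each MC run as an independent pass/fail trial, which is of course a simplification.

[CODE]
import math

# One-sided 3-sigma failure probability for a Gaussian metric, about 0.00135.
p_fail = 0.5 * (1.0 - math.erf(3.0 / math.sqrt(2.0)))

for n in (100, 1_000, 3_000, 10_000, 50_000):
    expected = n * p_fail
    # Chance that n samples show zero failures even though the true
    # failure rate sits exactly at the 3-sigma level.
    p_zero = (1.0 - p_fail) ** n
    print(f"N = {n:6d}: expected failures ~ {expected:5.1f}, "
          f"P(zero failures seen) = {p_zero:.3f}")
[/CODE]

At N = 1000 there is still roughly a 26% chance of observing no failures at all from a design that only just meets 3-sigma, which is why runs of several thousand samples are common when a 3-sigma claim has to be backed up.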
 


I could see Case 2 generating a wider distribution than Case 1, i.e., more pessimistic results. My understanding is that this is exactly what foundries are trying to avoid when they recommend Case 1. I would like to hear from someone with an analog/MS background; MC is something I rarely use on my digital designs...
 

My experience is that running MC on the worst process corner gives more pessimistic results compared to running Case 2 above.
 

Foundries and their PDKs are not to be trusted.

If you want to discover how you're being led, try a simple simulation of the key PCM devices that are used for WAT. Turn on process and mismatch, set up the WAT testbenches, and see how many times your one transistor (of each type you care about) fails the WAT limits for things like VT0, IDsat, and leakage under WAT conditions.

My own foundry CAD group used to hose us with variance limits that produced failing-WAT results in about 20% of the runs for one parameter or another. This comes from chest-bumping between the fab guys (who want to ship anything that comes out round and right side up, and want no tightening of tolerances, thank you very much) and the design managers (who are jumped-up pu$$y engineers unwilling to go to the mat and represent the designers' burden of producing always-passing circuits from non-passing components).

In my little design group we rebelled by performing the WAT-simulation test and bypassing any MC iteration which produced a WAT fail. We managed to sell this to the customer and to program management, as we kept the N=100 statistics but could show they were based on WAT-compliant attributes.
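
A minimal Python sketch of that screening idea, with made-up device attributes, WAT limits, and a placeholder circuit metric (none of the numbers come from a real foundry): flag the MC samples whose PCM-style parameters violate the WAT limits and report statistics only over the compliant ones.

[CODE]
import numpy as np

rng = np.random.default_rng(2)
N = 100

# Hypothetical per-sample device attributes, as MC would perturb them.
vt0   = rng.normal(0.45, 0.03, N)    # V       (invented nominal and sigma)
idsat = rng.normal(600.0, 60.0, N)   # uA/um   (invented nominal and sigma)

# Hypothetical WAT limits for those attributes.
wat_ok = (vt0 > 0.38) & (vt0 < 0.52) & (idsat > 480.0) & (idsat < 720.0)

# Placeholder circuit metric "simulated" for each MC sample.
offset_mv = 1.0 + 40.0 * (vt0 - 0.45) + rng.normal(0.0, 0.5, N)

print(f"{np.count_nonzero(~wat_ok)} of {N} samples fail WAT and are bypassed")
print("offset mean/std, all samples       :", offset_mv.mean(), offset_mv.std())
print("offset mean/std, WAT-compliant only:", offset_mv[wat_ok].mean(), offset_mv[wat_ok].std())
[/CODE]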

It took some work and some bickering, but we would have been screwed if we had left it as delivered by the CAD / modeling groups.

It's easy for some director to say you should make a 4-sigma circuit and spare the fab from having to improve its flow, and then b!tch about how your product's spec window is not competitive with more aggressive competitors who are willing to eat a few percent of yield loss in order to win on (advertised) performance.
 