
Monte Carlo Analyses in Cadence

Status
Not open for further replies.

Junus2012

Dear friends,

I have some questions about the Monte Carlo settings in Cadence, please:

1. Is the "Random" sampling method the appropriate one, or is Latin Hypercube better, or maybe another method?

2. How do I calculate the right number of samples, say for a 99.7% yield target (or a different one)? Generally a higher number of samples should be better, but how much higher? 1000, 2000, I don't know.

3. After running the Monte Carlo simulation and getting the yield result, there is a tab called "Confidence Level". In the many YouTube videos I have seen about Monte Carlo, people don't change it, and neither does the Cadence manual. But if I change it to 95%, for example, I can see a drop in my yield.

I appreciate your help.
Thank you very much
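[Editor's note] A common rule of thumb for question 2, not stated in this thread but standard in yield verification, is the zero-failure binomial bound: if all n Monte Carlo runs pass, you may claim the target yield at a given confidence once target_yield**n <= 1 - confidence. A small sketch, with all numbers illustrative:

```python
import math

def runs_for_yield(target_yield, confidence):
    # Zero-failure binomial bound: if all n runs pass, the claim
    # "yield >= target_yield" holds at the given confidence once
    # target_yield ** n <= 1 - confidence.
    return math.ceil(math.log(1.0 - confidence) / math.log(target_yield))

for conf in (0.80, 0.90, 0.95):
    n = runs_for_yield(0.997, conf)
    print(f"{conf:.0%} confidence of >= 99.7% yield needs about {n} clean runs")
```

This is why "1000 or 2000" is a common answer for a 99.7% target: at 95% confidence the bound asks for roughly a thousand passing samples. It also hints at why lowering the confidence level changes the reported yield.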
 

I usually use Latin Hypercube, but probably either of the other two will do.
Generally, the more MC runs you do, the better, but this trades off with simulation time. If your circuit simulates fast, you can go to 200 or 500. I usually simulate 100 or 200 runs, although I've seen people go for only 50.
I have never used the confidence level. All you really need is sigma, and it is readily available as a result of the simulation. Then you can see if 3 sigma or 6 sigma still meets your spec.
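[Editor's note] The "check the sigma against your spec" recipe above can be sketched outside Cadence. Everything below is a toy stand-in: the Gaussian mismatch, the gain, and the spec window are invented numbers, not a real netlist:

```python
import random
import statistics

def simulate_offset(rng):
    # Toy stand-in for one MC run of a circuit metric; in Cadence this
    # would be one netlist simulation. All numbers are invented.
    vth_mismatch = rng.gauss(0.0, 2e-3)   # assumed 2 mV device mismatch
    return vth_mismatch * 10.0            # assumed gain of 10

rng = random.Random(42)
samples = [simulate_offset(rng) for _ in range(200)]

mu = statistics.fmean(samples)
sigma = statistics.stdev(samples)

spec_limit = 0.1  # assumed +/-100 mV one-sided spec window
print(f"mean = {mu * 1e3:.2f} mV, sigma = {sigma * 1e3:.2f} mV")
print("meets 3-sigma spec:", 3 * sigma <= spec_limit)
print("meets 6-sigma spec:", 6 * sigma <= spec_limit)
```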
 
Dear Suta,
Thank you very much for your reply.

I have just finished reading about Latin Hypercube: it can give the same accuracy as random sampling with fewer samples. I am also going to use it, but in the settings it requires a number of bins, and the Cadence documentation does not state how to set this number. What value do you give it?

So, Suta, if you use the Latin method it means you need to wait until all the samples have been simulated, unlike with Random sampling, where the simulation can be stopped early if the yield is clearly over or under the target.

In my Cadence version I don't have a field to alter sigma; however, I think a yield of 99.7% is the same as setting sigma to 3, is that right?

I found this useful article from Cadence; as I understand it, setting the confidence level is necessary with Auto Stop:

https://community.cadence.com/caden...-of-analog-design-part-2-monte-carlo-sampling

One more thing, Suta: I don't think running 50 or 100 samples is enough, please see this blog as well:

https://community.cadence.com/caden...s/fast-yield-analysis-and-statistical-corners

Thank you very much for your help
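[Editor's note] The 99.7% ↔ 3-sigma equivalence asked about above assumes the output is normally distributed, and can be checked with the Gaussian error function:

```python
import math

def two_sided_yield(n_sigma):
    # Fraction of a normal distribution falling within +/- n_sigma of the mean
    return math.erf(n_sigma / math.sqrt(2.0))

for n in (1, 2, 3, 6):
    print(f"+/-{n} sigma -> {100.0 * two_sided_yield(n):.5f} % yield")
```

±3 sigma gives about 99.73%, so a 99.7% target yield and "sigma = 3" are indeed the same statement for a Gaussian output.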
 

I typically used anything between 500 and 1000. I plotted sigma as a function of iteration count until I saw a stable sigma, and that typically took 500-1000 runs.
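[Editor's note] The "plot sigma versus iteration until it stabilizes" procedure above can be sketched like this; the simulated output is a toy Gaussian (mean 1.65 V with an assumed 5 mV sigma), not real circuit data:

```python
import random
import statistics

rng = random.Random(0)
samples = []
running_sigma = []
for i in range(1000):
    samples.append(rng.gauss(1.65, 5e-3))  # toy MC output sample
    if i >= 1:
        running_sigma.append(statistics.stdev(samples))

# Call sigma "stable" when it moves by only a few percent
# over the last 100 runs
tail = running_sigma[-100:]
drift = (max(tail) - min(tail)) / tail[-1]
print(f"final sigma = {tail[-1] * 1e3:.3f} mV")
print(f"relative drift over last 100 runs = {100.0 * drift:.2f} %")
```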
 
If I remember correctly, it was Latin Hypercube.

Aah yes, the golden promise of variation-aware design and how it will keep designers from over-designing their circuits. Take those numbers with a pinch of salt, as Cadence has a vested interest in making them look larger to push their rather expensive products.
 
Latin HC gives better results when you have limited resources and cannot run a very high number of simulations. But random sampling is useful when you run into trouble, because with random sampling it is easy to repeat the exact condition where an issue occurred. As I remember, with Latin HC that wasn't possible, because it carries settings over from run to run, so to repeat one condition you have to repeat all the previous runs.
But just to find sigma, Latin HC is better.
I run 1000 sims if they are quite fast; to estimate the sigma of very slow simulations, sometimes 30 runs are enough.
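[Editor's note] A hand-rolled one-dimensional sketch of why Latin Hypercube needs fewer runs: each of the n equal-probability bins contributes exactly one sample, so the sigma estimate settles faster than with plain random sampling. This is an illustrative toy, not how Spectre implements it:

```python
import random
import statistics

def random_samples(n, rng):
    # Plain Monte Carlo: n independent draws from N(0, 1)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def lhs_samples(n, rng):
    # Latin Hypercube in 1-D: one draw per equal-probability bin,
    # shuffled, then mapped through the inverse normal CDF
    nd = statistics.NormalDist()
    u = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(u)
    return [nd.inv_cdf(p) for p in u]

rng = random.Random(1)
n = 50
sig_rand = statistics.stdev(random_samples(n, rng))
sig_lhs = statistics.stdev(lhs_samples(n, rng))
print(f"random sigma estimate: {sig_rand:.3f} (true value 1.0)")
print(f"LHS sigma estimate:    {sig_lhs:.3f} (true value 1.0)")
```

With only 50 samples the stratified (LHS) estimate of sigma is reliably close to the true value, while the plain random estimate scatters more from seed to seed.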
 

Dear Frankrose,
Thank you for your reply.

Right now I am repeating the simulation every time from the start; maybe later I will repeat it around the critical points as you suggested.

I see you run 1000 samples if the simulation is fast, which means that with, for example, a transient simulation taking a considerable amount of time, you would surely not go for this number. However, what is the minimum number of samples I can take so that people will trust my result? By default the number is 200 in Cadence. Also, if I use Latin HC there is a field called bins; what is the proper value for it?

Dear friends, you are talking about sigma estimation; what does this value tell me, and how do I see it? As far as I understand, setting my simulation to yield = 99.7% means sigma = 3.

Thank you once again
 

I know a designer who likes to run 10000 simulations in MATLAB to get the sigma. He probably wouldn't trust 30; on the other hand, my boss simulated 100 times when he wanted an accurate result after the design phase. In practice we have never used predefined confidence levels, target yields or LDS, just checked the sigma. My opinion is to simulate as much as you can afford; there is no lower limit, and your time is the real upper limit.

I have no idea about the bins; it is probably not related to the histogram bin count. I found a page which mentions that to get orthogonal sampling the number of bins should be higher than 1, but I don't know whether it is trustworthy or what it means exactly. https://sites.google.com/site/inven...cm/mcm_cadence/mcmc_simple_example/setup_cont
And unfortunately the Cadence reference I found is from 2011, where they only describe the "numbins=0" variable:
Number of bins for the latin-hypercube (lhs) and orthogonal methods. The number is checked against numruns + firstrun - 1, and Max(numbins, numruns + firstrun - 1) is used.
Not very talkative. Maybe you could find more in the Cadence help if you have the latest Spectre Reference. Or maybe not. I would try the Cadence Forum.
 
Dear Frankrose,

Thank you very much for your kind answer.
It is useful to learn from your experience; yes, indeed, the only upper limit on the number of samples is time.
I will double-check the Cadence manual to see if it is explained in more detail somewhere.

I understood everything you explained except the part about checking the sigma, which was also my question before.

Thank you once again
 

Sigma is the standard deviation of a parameter, and there is a "six sigma" method, which is a good measure of product reliability. https://en.wikipedia.org/wiki/Six_Sigma
In practice, designers try to keep the whole deviation within a 6*sigma range to minimize the number of defects, a rule of thumb for manufacturing, and a really good one.
In ADE XL the standard deviation is evaluated automatically for every expression whose result is a number, and after an MC simulation it should appear for these expressions in the Results tab. If you don't see it there, there is a "stddev" function in the calculator.
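[Editor's note] The six-sigma point in numbers: the Gaussian tail probabilities below show why ±6 sigma is such a strong reliability statement. Note that the classic Six Sigma "3.4 defects per million" figure additionally assumes a 1.5-sigma long-term mean shift, i.e. a one-sided 4.5-sigma tail:

```python
import statistics

nd = statistics.NormalDist()
for n in (3.0, 4.5, 6.0):
    outside = 2.0 * nd.cdf(-n)  # two-sided tail probability beyond +/- n sigma
    print(f"+/-{n} sigma: {outside * 1e6:.4f} defects per million")
```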
 
Thank you very much, Frank.

Tomorrow I will simulate it and present the resulting image here.

Meanwhile, please, I have these questions: does MC simulation cover the worst-case corners of the device models (WP, WS, WO, WZ)?

A related question, please: worst-case corner simulation doesn't take nearly as much time as MC does, and it can answer whether the design will work or not; although it is only a binary answer, the design will surely work. However, I still see people favoring testing the design with MC. Is the reason to have statistical information about the yield, or are there other advantages of MC over worst-case corners?

Thank you very much
 

Is the reason to have statistical information about the yield, or are there other advantages of MC over worst-case corners?

Obviously there are.

But first: the process worst-case parameter values are generated from statistical measurements, so the difference between a "fast" and a "slow" corner is 6 sigma.
This could be why you doubt whether you need MC simulations at all, but you should also see that MC simulation actually covers a wider range of variation than 6*sigma.
In some cases a few sample values can lie much further than 3 sigma from the mean or nominal value, which alerts the designer that there may be trouble somewhere. This is one reason why we run MC.

Second, the obvious reason: if you simulate a fully differential circuit over corners, and you have an NMOS differential pair for example, you should know that in a corner simulation those transistors change together. Thus the DC offset will be zero in every corner, "fast" or "slow", as long as the transistors are symmetric. So you should run MC, because it changes same-type transistor parameters both in the same and in opposite directions, which creates asymmetry and tells you a lot about the expected DC offset.
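[Editor's note] The differential-pair argument as a toy model (gain and mismatch numbers invented): a global corner shift moves both transistors together and produces zero offset, while per-device MC mismatch does not.

```python
import random
import statistics

GAIN = 1.0      # assumed offset gain
VTH_NOM = 0.4   # assumed nominal threshold voltage, V

def offset(vth1, vth2):
    # Toy model: input-referred offset proportional to Vth mismatch
    return GAIN * (vth1 - vth2)

# Corner simulation: both devices get the same global shift -> zero offset
corner_shift = 0.03
print("offset at SS corner:",
      offset(VTH_NOM + corner_shift, VTH_NOM + corner_shift))

# Monte Carlo: each device gets its own random mismatch -> nonzero offset
rng = random.Random(7)
mc_offsets = [offset(VTH_NOM + rng.gauss(0.0, 2e-3),
                     VTH_NOM + rng.gauss(0.0, 2e-3))
              for _ in range(200)]
print(f"MC offset sigma: {statistics.stdev(mc_offsets) * 1e3:.2f} mV")
```

The MC offset sigma comes out near sqrt(2) * 2 mV, because two independent 2 mV mismatches subtract.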
 
The number of bins when using the Latin Hypercube method is equal to the number of MC runs (i.e. samples). This is a Cadence requirement, and it complains if it is not like that. So if you do 100 MC runs, you set the bins to 100. As was mentioned above, this has nothing to do with the bins of a histogram.

It is also not true that with the Latin HC method you cannot individually re-run the MC sample that showed problems. It is possible.

As I said initially, I run 100 or 200 MC samples, but it really depends on how fast the circuit simulates. Obviously, if a single simulation takes one week to complete, it makes no sense to run 500 MC sims, because it will never finish even if you run it in distributed mode.

You can run corners, pick the worst-case corner and then run MC on that. Corner simulations do not cover the statistical variations of the process and circuit components. Corners are like the global variations, in a way the deviation from the typical recipe the factory uses to cook the chip. This means that if your corner is not typical but, say, slow-slow, it is slow-slow for all devices on the wafer. But then, device to device on the same chip, there are random variations (the stress here is on random), which are captured by running MC on that corner.

Sigma is the usual quantity one looks at when doing statistics; it is a measure of how much your tested quantity deviates around the mean value. So +/-3 sigma captures 99.7% of the variation, meaning that if your spec requires 3-sigma variation, your circuit meets the spec within these limits. Some people may go for +/-6 sigma, a tighter requirement.
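[Editor's note] The "MC on a corner" picture, sketched as a toy: a corner is one global shift shared by every device, and MC adds an independent random term per device on top of it (all numbers invented):

```python
import random
import statistics

rng = random.Random(3)

def device_vth(corner_shift, rng):
    # Global corner shift shared by all devices + local random mismatch
    local = rng.gauss(0.0, 2e-3)  # assumed 2 mV per-device mismatch
    return 0.4 + corner_shift + local

corner = -0.03  # "slow" corner: same shift for every device on the wafer
vths = [device_vth(corner, rng) for _ in range(500)]
print(f"mean Vth at slow corner: {statistics.fmean(vths):.4f} V")
print(f"device-to-device sigma:  {statistics.stdev(vths) * 1e3:.2f} mV")
```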
 
Dear Frank,
Dear Suta,

Thank you very much for your replies and your continued willingness to help me in my posts.

My mentors, I understand from you both that MC includes every random change introduced to the circuit components, whereas performing worst-case corners assumes all components are in the same condition, like WP, WS, WO, or WZ. Even with different components, like capacitors alongside resistors and MOSFETs, there is no possibility that the MOSFETs are WP while the resistors are WS, etc. In MC, by contrast, every instance can have a different variation. This is very important, as Frank also mentioned, when simulating the offset voltage of a differential amplifier.

Now I am convinced by your kind explanations that MC should be enough.

As for the sigma value: yes, I am interested in 3 sigma, which is also the default when running MC (in the settings box the target yield is 99.7% by default). I hope my thinking is not wrong.

Dear Suta, I am also impressed by your concept of running MC around the critical corner. I have a question about it, but I will postpone it for now to avoid mixing the answers.

Thank you once again
 

*********************************************************************************************************************
*********************************************************************************************************************

Dear Mentors,

I have just simulated a MOSFET voltage divider using Monte Carlo. The ideal output is 1.65 V.

In the first image I used 200 samples with Latin HC, number of bins = 200.

The second image shows the result from Random sampling with 200 samples.

According to the sigma values you read, which of them is more accurate? And please, how can I know that this value of sigma corresponds to a 99.7% yield? I am using the 2012 Cadence simulator, where I can't define the sigma before the simulation.

Thank you in advance

MC_1.png

MC_2.png
 

I would say the small difference between the two methods hardly even matters. Your mean is 1.65 and the sigmas differ at the tens-of-uV level, which is pretty insignificant. But if you want to know where your sigma saturates, then run 300, 500, 600, maybe 1000 simulations and see at which number of runs the sigma stops changing.
Sigma is the result of the MC simulation. You can then decide whether the sigma you get serves your needs or not. For example, you decide that your voltage divider output should not exceed some +/- limits around your mean or expected value. Then you divide the one-sided limit by 3 to get the sigma you need (for 99.7% yield) or by 6 (for a 6-sigma yield) and compare with the result of the MC simulation. At that point you decide whether your circuit meets the required deviation or not.
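[Editor's note] The divide-by-3 (or 6) recipe as plain arithmetic, with a hypothetical ±15 mV spec window and a hypothetical measured sigma:

```python
one_sided_limit = 15e-3   # hypothetical +/-15 mV window around the 1.65 V mean
measured_sigma = 4.2e-3   # hypothetical sigma read from the MC results

required_sigma_3 = one_sided_limit / 3.0  # for ~99.7% yield
required_sigma_6 = one_sided_limit / 6.0  # for a 6-sigma target

print(f"3-sigma target: need sigma <= {required_sigma_3 * 1e3:.2f} mV ->",
      "pass" if measured_sigma <= required_sigma_3 else "fail")
print(f"6-sigma target: need sigma <= {required_sigma_6 * 1e3:.2f} mV ->",
      "pass" if measured_sigma <= required_sigma_6 else "fail")
```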
 
Dear Suta,

You said that my sigmas differ at the tens-of-uV level. Kindly, are you reading the difference between Sigma and Target? Can you please tell me how you calculated it?

I will run 500 samples and post the result here as soon as possible, to see whether the sigma saturates.

After simulating, I would like to move on to discussing the second part of your answer.

Until then, thank you very much
 

No, I meant that the difference in sigma between the LHC method and Random sampling, for example for VCOmax, is something like 20 uV, which is 0.0012% of your mean value. So from an engineering point of view these two sigma values are practically identical.
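[Editor's note] The 0.0012% figure checks out:

```python
delta_sigma = 20e-6  # ~20 uV difference between the LHC and Random sigmas
mean = 1.65          # V, the divider's mean output
print(f"relative difference: {100.0 * delta_sigma / mean:.4f} %")
```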
 