
EM Simulation Webinars available online

Not open for further replies.


Aug 3, 2005

Hello All,

A good webinar on planar EM simulation is available online from AWR Inc.; see the details below.
EM Simulation: A Look Under the Hood - Part 1
Electromagnetic simulation is becoming a critical component of successful design by providing accurate S parameter models for critical parts of the circuit, where existing models are suspect or missing.

It is also possible to obtain misleading answers if the designer does not understand the underlying physical and mathematical concepts behind electromagnetic solvers. In this webinar, the method of moments algorithm used by EMSight is explained (the built-in solver in Microwave Office). Emphasis is placed on the assumptions used in meshing, and Green's function concepts.

Examples will be given of common problems resulting from misunderstanding these issues.

Use the following link and click on:
EM Simulation: A Look Under the Hood - Part 1


Hi Manju -- Thanks for posting the link. I think it will be good for people interested in developing their own MoM EM code (there has been quite a bit of interest in that in this forum and the RF forum lately). The author is a first rate EM researcher and a personal friend. It will probably be a bit too detailed and theoretical for microwave designers, but there is some practical application information scattered through it.

The theory presented is almost entirely a subset of the theory used in Sonnet, much of which I published back in 1986, and which was used directly in developing the AWR tool. (AWR also used our tool to validate theirs.) I must admit I was a tiny bit irritated because there is no acknowledgement at all of my work being the primary source for the theory used in their tool.

However, on the bright side, because of the similarity, it was really easy for us to build a very nice interface from Sonnet to AWR, an effort that AWR actively assisted. Thus, when the AWR user gets problems that need features and capability only Sonnet has, they can use Sonnet very easily. It is literally just a single click to switch. In fact, they can even use the free SonnetLite (AWR does make you buy their tool before they let you use SonnetLite, though).

It is this openness, in which I think AWR leads the entire field, that I think will determine the relative success or failure of framework vendors by the end of this decade. Frameworks that are closed to any even slightly competitive point tools simply cannot succeed.

Dear Rautio,

Many thanks for the detailed comments...
I think it also helps design engineers
to understand the EM simulators better...


Hi. If I am not wrong, the method of simulation implemented in both Sonnet and AWR is the same: they are on-grid simulators, and the structure is in a closed box.
A completely different solution is implemented in Momentum, Ensemble, etc.: they are open simulators, without a grid.
A comparison between the two methods (not between the products) would be very interesting, to gain a deeper understanding of which method is better for solving a specific problem.
Can anyone share their experience on this?


You are right!!

Regarding the difference between the solvers, there is a very good book available (see the details below); do go through it:
Microwave Circuit Modeling Using Electromagnetic Field Simulation
By Daniel G. Swanson, Jr. and Wolfgang J. R. Hoefer, from Artech House

Specifically, Chapter 5: Moment Method Simulators (page 89):

5.1 Closed Box Moment Method—Strengths 89
5.2 Closed Box Moment Method—Weaknesses 89
5.3 Laterally Open Moment Method—Strengths 90
5.4 Laterally Open Moment Method—Weaknesses 90
5.5 Issues Common to Both MoM Formulations 91
5.6 Exceptions to General MoM Comments 92
5.7 50-Ohm Microstrip Line 92
5.8 MoM—Cells and Subsections 95
5.9 MoM—Validation Structures 96
5.10 MoM Meshing and Convergence 98
5.10.1 Uniform Versus Edge-Meshing 99
5.10.2 Microstrip Convergence 100
5.10.3 Summary for Meshing and Impedance Convergence 101
5.11 Controlling Meshing 102
5.11.1 Meshing a Microstrip Tee-Junction 103
5.11.2 Meshing a Wiggly Coupler 105
5.11.3 Meshing a Printed Spiral Inductor 105
5.11.4 Meshing Printed Capacitors 107
5.11.5 Meshing Overlay and MIM Capacitors 111
5.11.6 Exceptions to Mesh Control Discussion 113
5.11.7 Summary for Mesh Control 113
5.12 MoM—Displaying Voltage 114
5.13 MoM—Calibration Structures 116
5.13.1 Microstrip Ideal Short Circuit 116
5.13.2 Microstrip Open Circuit 118
5.13.3 Microstrip Thin-Film Resistor 118
5.13.4 Summary for Microstrip Calibration Structures 121
5.14 Visualization 122
References 122


Hi mwmmboy -- First, of course, I work for Sonnet, but I try my best to give both advantages and disadvantages whenever possible.

I use the term shielded, because "closed" could be misread as though it does not interface to other frameworks. I also use the term unshielded, because "open" could be misread as though it does interface to other frameworks, even in cases where it does not.

Both types use MoM (Method of Moments). This approach divides a circuit (the metal part only) into many small subsections. It then calculates the coupling between subsections (i.e., put current on one subsection, calculate the voltage induced in another subsection). These pair-wise couplings fill a big matrix. The matrix gets inverted and the problem is solved.
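The fill-and-solve flow just described can be sketched in a few lines of Python. This is only an illustration of the flow, not any vendor's implementation: the coupling kernel is a purely made-up placeholder, not a real Green's function.

```python
import numpy as np

# Toy Method-of-Moments flow: split the metal into N subsections,
# fill a dense matrix of pairwise couplings, and solve for currents.
N = 8
x = np.arange(N, dtype=float)      # subsection positions (arbitrary units)

def coupling(i, j):
    # Placeholder pairwise coupling; a real solver would integrate a
    # Green's function over subsections i and j.
    return 1.0 / (abs(x[i] - x[j]) + 1.0)

# Fill the N x N impedance-like matrix from all pairwise couplings.
Z = np.array([[coupling(i, j) for j in range(N)] for i in range(N)])

# Excite the structure (a voltage on every subsection) and solve for
# the subsection currents -- the "invert the matrix" step.
V = np.ones(N)
I = np.linalg.solve(Z, V)

# The residual shows the linear system is solved to machine precision.
residual = np.linalg.norm(Z @ I - V)
```

In a real solver, filling `Z` dominates the run time for small problems, and the dense solve dominates for large ones, which is why subsection count matters so much.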

Shielded analysis calculates the coupling by a 2-D FFT. A single 2-D FFT calculates all the coupling between all possible subsections of a given type (say, X-directed coupling to Y-directed subsections, etc.) on a given level. This is absolutely the fastest way to do it, by far. In fact, it is so fast that there is no point in storing the results of the FFT for later access. It is also the most accurate: the coupling is calculated to full numerical precision. The numerical noise floor (which we have measured in multiple ways) is typically 100 to 180 dB down, most commonly 120 to 140 dB down. (You cannot get accurate data once you are within about 20 dB of the noise floor.)

When you use the FFT for signal processing, you must first uniformly time-sample your signal. It is the same in MoM, only now we uniformly space-sample the surface of the substrate. This is the principal disadvantage of the shielded/FFT approach. Fortunately, you can make the grid size very fine (meshing the substrate into 1000 x 1000 cells is no problem, as long as you keep your subsection count under about 20,000). At this meshing level, the cell size is about the size of a pixel on a computer screen.
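A minimal sketch of why the uniform grid enables the FFT shortcut: on a uniform grid the coupling depends only on the offset between cells, so evaluating the coupling from a source to every cell is a circular convolution, which one pair of 2-D FFTs computes in a single pass. The kernel below is a toy stand-in, not an actual Green's function.

```python
import numpy as np

n = 16
yy, xx = np.mgrid[0:n, 0:n]
K = 1.0 / (np.hypot(xx, yy) + 1.0)   # toy coupling kernel (illustrative only)

I = np.zeros((n, n))
I[4, 5] = 1.0                        # unit current on a single grid cell

# Convolution theorem: one pair of 2-D FFTs yields the coupling from
# this source to every cell on the grid at once.
V = np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(K)))

# For a single unit source, the result is just the kernel shifted to
# the source location (the definition of circular convolution).
expected = np.roll(np.roll(K, 4, axis=0), 5, axis=1)
```

The same transforms serve every source location, which is why one FFT pass covers all subsection pairs of a given type.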

The unshielded tools use numerical integration. Because you can integrate anywhere you want, your subsections are not restricted to a grid. They can be any size or shape you want. However, because they use numerical integration, the coupling between subsections is not calculated to full precision. There is numerical integration error. They typically refine the numerical integration with a target of 3 digits of accuracy (including zeros to the right of the decimal place). I have seen results from unshielded tools showing noise floors ranging from 40 to 80 dB down; 60 dB down is pretty typical. In addition, the calculation of the field due to a patch of current (the "Green's" function) is somewhat lengthy, but the tools can typically save that calculation to disk for later use. It must be recalculated only when the dielectric stack-up is changed.
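A quick numerical illustration of integration error, using a simple trapezoid rule on a toy integrand (nothing to do with any particular vendor's quadrature): refining the rule shrinks the error, but it never reaches machine precision, and that residual is the kind of error an integration-based solver carries into its matrix entries.

```python
import numpy as np

f = lambda t: 1.0 / (t + 1.0)        # toy integrand; exact integral is ln(2)
exact = np.log(2.0)

err = {}
for n in (8, 64):
    t = np.linspace(0.0, 1.0, n + 1)
    # Composite trapezoid rule over n panels.
    trap = np.sum((f(t[:-1]) + f(t[1:])) / 2.0 * np.diff(t))
    err[n] = abs(trap - exact)

# err[64] is far smaller than err[8], but still nonzero.
```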

Another situation, commented on in the thread "Doubts about deembedding" a few days ago in this forum, might also be of interest to some readers (you have already seen it and added an appropriate comment). Basically, shielded analyses have direct access to a perfect ground reference for their ports, and thus are capable of exact (to within numerical precision) results. Unshielded tools do not have this capability.

Basically, I always recommend designers use both shielded and unshielded tools, even though my company sells only a shielded tool. However, it is critically important that the designer be fully aware of both the advantages and disadvantages. Anyone who knows only the advantages of only one tool will have problems.

And Dan Swanson's book (recommended above) is absolutely first rate. It should be required reading for anyone doing EM analysis.

Rautio, your explanation is very clear and can be very useful for everyone.


Yes... I have seen that Rautio always gives detailed explanations, with the pros and cons, of such topics...
It is excellent input/information for EM tool developers and circuit designers...

Thanks...& Keep it up...

One should probably note that the term "noise floor" as used above is not well defined. The accuracy of these tools is highly dependent on the type of problem. There is no known general standard for accuracy of MoM. What is available is some benchmark problems...

loucy said:
One should probably note that the term "noise floor" as used above is not well defined. The accuracy of these tools is highly dependent on the type of problem. There is no known general standard for accuracy of MoM. What is available is some benchmark problems...

There are some very good benchmark problems that help us compare the relative "noise floors" of these codes. Here are a couple:

1. Zero-length through line. I think this was published by Dr. Rautio some time ago. Take a transmission line of any length (though it is probably best to keep it under a half wavelength), of any kind (use a planar line for testing a planar EM code) and any Zo. Set your de-embedding reference planes to touch in the middle. Simulate and use de-embedding. Look at the |S11| data; this will show the noise floor of your calibration technique. Perfect results would show an |S11| of zero (negative infinity dB). Any other value is error, since you should be left with an ideal zero-length through line.

This is just like what you do when you calibrate your VNA for a measurement, and then want to check the noise floor of your calibration. A simple and effective comparison based on what we use in a practical lab situation.

2. Coupled through-line. Just copy the line you made in #1, and make a second one. This will be a 4-port example. Set the reference planes for your de-embedding to touch in the middle. Let's say you excite port 1, port 2 is the through port, and ports 3 and 4 are the coupled ports. Now look at |S11|, |S31| and |S41|. If the software is perfect, you get zero (negative infinity dB). Any other answer is error. Compare the values you get between any high frequency EM code results.

High values of |S31| and |S41| indicate that your software de-embedding algorithm doesn't remove cross-coupling between adjacent lines up to a reference plane. This can be very important for some design challenges.
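To put these benchmark results on the dB scale used throughout the thread, the leftover |S11| (or |S31|, |S41|) magnitude converts via 20·log10. The residual value below is a hypothetical example, not output from any specific tool.

```python
import math

def to_db(mag):
    # Magnitude -> dB; a perfect de-embedded through gives mag = 0 (-inf dB).
    return 20.0 * math.log10(mag)

s11_residual = 3.2e-7                 # hypothetical leftover reflection magnitude
noise_floor_db = to_db(s11_residual)  # about -130 dB, in the range quoted above
```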

I am sure other people have other ideas.

Such benchmarks are important to engineers so that we can learn the limitations and shortcomings (as well as the strengths) of the different tools that we use in design. This makes us smarter and more successful designers. We can never fully trust ANY software tool no matter what the software vendor says. Rarely will the software vendor tell you where they have limitations.


The point is that just because some tools give a perfect answer for some benchmark problems doesn't mean that their "noise floor" is minus infinity.
Even the zero-length through line has variations: different frequency, layer stackup, etc. It is possible to run into a "counter-example" that breaks a normally perfect tool (e.g., try a line that supports higher-order propagating modes, or even a "leaky" mode).


I think you will find that no software code will ever yield a "minus infinity" noise floor. The fact is, ALL EM software codes have error; I have never seen a code do better than -150 dB for the through-line test.

I think you're right: the results you get will depend on stackup, frequency, materials, etc. And if there are higher-order modes on the line, you will see strange results. However, the same is true if you measure the part you simulate (the calibration of most VNAs for device measurements assumes you are using only one propagating mode, at least for planar transmission systems), so I think this is really what we want.

We want our software to predict what we would measure.

We also want benchmarks that will show us whether the particular package we choose to use will work well for our particular technology or stackup. I know I don't want to use a tool for my designs that, although it might work fine for single-layer alumina boards, doesn't work well (as shown in benchmarks) for my 5-layer PCB.

So I agree with you completely. No tool is perfect, and some will perform poorly for some benchmarks in some materials, and better for others. This in turn helps me to pick a tool that gives me the best confidence in matching what I would really build and measure later. (And the measurement of the part that I design is where I put my career on the line.)


Hi Folks -- On vacation right now with limited internet access.

First, you can tell who the sales people are because their working assumption is that their EM tool gives the correct answer. My working assumption is that all EM tools give the wrong answer. This is true for Sonnet and for everyone else. The question we have to answer is just how wrong the answer is. (If an EM tool gives an exact answer, or an answer with much smaller error than one would expect for a given subsection size, I would suspect either fraud or careful data selection.)

One (of several) sources of error is numerical noise. Numerical noise is what causes the noise floor. Numerical noise comes from one main source: finite precision (and the associated loss of precision during addition and subtraction of numbers). In Sonnet, this predominates during matrix fill; all numbers are calculated to full numerical precision (double precision is the default; you can switch in single or quad as desired). During addition and subtraction of numbers that are nearly equal, you can end up with fewer significant bits in the mantissa.
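A two-line demonstration of the cancellation effect just described, showing how subtracting nearly equal doubles discards significant bits:

```python
# Subtract two nearly equal double-precision numbers: the true answer
# is 1e-12, but only a few significant digits survive the cancellation.
a = 1.0 + 1e-12
b = 1.0
diff = a - b
rel_err = abs(diff - 1e-12) / 1e-12   # roughly 1e-4, far above machine epsilon
```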

In practice, this directly results in a noise floor. Loucy is correct that this is not well defined. In addition, Loucy is correct that it depends on many factors. However, these are not reasons to give up on characterizing the noise floor. One thing I do is to give the noise floor as a range (see postings above). There are many ways to characterize noise floor, and, most interestingly, they tend to all give answers in about the same range for a given analysis/frequency range/subsection size. Max suggested a couple ways above.

As for unshielded analysis, there are a couple more error sources, specifically numerical integration and de-embedding. For de-embedding, any error in determining Zo translates directly into error in the calculated Y or Z parameters (a 5% error in Zo means a 5% error in the Z parameters), and that translates into error in the resulting S parameters. See the thread on doubts about deembedding in this forum. The other source of error is the numerical integration error described above.
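The Zo-error statement can be checked directly for a 1-port: the standard S-to-Z conversion is Z = Zo(1+S)/(1-S), which is linear in Zo, so any relative error in the assumed Zo passes straight through to the extracted impedance. The S value below is an arbitrary example.

```python
# 1-port S-to-Z conversion: Z = Zo * (1 + S) / (1 - S).
S = 0.2 + 0.1j                         # arbitrary example reflection coefficient
Zo_true = 50.0
Zo_wrong = 50.0 * 1.05                 # de-embedding assumed a Zo that is 5% off

def s_to_z(zo, s):
    return zo * (1 + s) / (1 - s)

rel_err = abs(s_to_z(Zo_wrong, S) - s_to_z(Zo_true, S)) / abs(s_to_z(Zo_true, S))
# rel_err comes out to 0.05: the 5% Zo error maps to a 5% Z error.
```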

It is important to understand the noise floor, even if it is not trivial to characterize.

Is numerical integration a different/additional type of error from the finite precision of floating-point calculation? No. The total "noise" of an unshielded tool is not simply the "noise" from finite-precision floating-point calculation plus the "noise" from numerical integration plus the "noise" from de-embedding.

Numerical integration is just weighted summation as far as the computer is concerned. In the MoM formulation for a shielded circuit, there are several summations of infinite series (corresponding to an infinite number of modes). One has to truncate them somewhere for numerical computation. Therefore shielded analysis is also subject to this "truncation error", which is similar in concept to the numerical integration error of unshielded analysis. The magnitude of these errors obviously depends on a lot of factors, but the point is that just because there is no appearance of numerical integration in the shielded formulation doesn't mean that it is more accurate.
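The truncation error described here can be seen with any slowly converging series; the p-series below is only a toy stand-in for the modal sums of a shielded formulation, not the actual series any tool evaluates.

```python
import numpy as np

# Truncating an infinite series (here sum 1/k^2 = pi^2/6) leaves an
# error that shrinks with more terms but never vanishes, analogous to
# the modal-series truncation in a shielded MoM formulation.
exact = np.pi ** 2 / 6.0

trunc_err = {}
for N in (100, 10_000):
    k = np.arange(1, N + 1, dtype=float)
    trunc_err[N] = exact - np.sum(1.0 / k ** 2)

# Roughly 1/N of the sum is lost: ~1e-2 at N=100, ~1e-4 at N=10,000.
```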

As for de-embedding, it is more of an art than a science at this point--there is no standard answer. The implementation in both shielded and unshielded tools would have "errors", when compared with a "perfect de-embedding scheme"--if there were one.

Finally one can also use FFT in unshielded analysis.

Of course, the accuracy of Sonnet's implementation has been attested to by many users in this forum. The comments above sound like an attempt to justify it from a theoretical point of view. It is more like some business statement to me.

Rautio, I totally agree with you that every simulator is wrong, and that the best simulator is of course the one whose error is smallest for most kinds of circuits.
Anyone who works on microwave circuits knows it is impossible to measure a 50 Ohm match on a 50 Ohm line as deep as 150 dB, so my opinion is that the noise floor is very important for an EM simulator, but it is not the magnitude of the noise alone that indicates whether a simulation is good or not.
In my opinion another fundamental aspect is the speed of the simulation; I am sure anyone would accept a larger error if it is obtained very fast.
What are the differences between the two approaches from this point of view?


I would expect any person to logically say that his/her software gives the correct answer for certain kinds of problems. Most people understand that the term "correct" doesn't require infinite accuracy. So the important message is the kind of problem. Sales people would simply state that the "noise floor" is a certain number of dB, without specifying the type of problems.

Hi Loucy -- A given result can be called both correct and wrong at the same time and both descriptions are accurate. The thing I am getting at is the attitude of the observer. If the observer calls the answer correct, now there is nothing more to do. This is nice for salespeople, but for an EM researcher it is the end of the line. If we call the exact same result wrong, now we have a most interesting problem to explore...just how wrong is it? This is when I get interested and neat things can happen.

The discussion on error analysis is certainly an interesting one, yet I want to relate it to something of even bigger importance (to me). This is what I call set-up "reproducibility": that is, how accurately one may lay the circuit on the bench and test (measure) it. Many times the design engineer has a lot more freedom in setting up the problem, solving it, and yielding results, but that does not necessarily mean the results are consistent with measurement. In the real world any circuit needs to be probed and its launchers de-embedded. But calibration is not a panacea, right? It has a region (frequency range) of validity and may certainly affect the structure. I am not going to delve into this here, but my point is that results most heavily affected by so-called "numerical noise" are most of the time "smashed" by measurement imperfections. If S11 from the solver is -55 dB and from measurement it is -32 dB, is that due to noise, or due to a bad connector, or due to overestimated loss? It is indeed rare that a software tool will be so "numerically noisy" as to fundamentally screw things up. I agree that the argument of integration versus series summation may be somewhat misleading, as a good Gauss quadrature routine also yields very accurate results. Given freedom in meshing, it may eventually yield even BETTER results on some structures, or in many cases yield comparable results but FASTER.
Well, does that make accuracy a differential or an integral problem? Is it local or global? That is, does using series summation lead to better overall accuracy for a circuit or not? To me this is the central item that deserves attention; everything else is very subjective and may be misleading.
But most importantly, we should not forget that any simulation is in fact a distortion of reality, and reality is down on the bench. This is why I strongly encourage a microwave measurements course for every designer before embarking on clicking the mouse. I have seen astonishing results, and MONTHS spent by designers refining circuits at the "simulation level", only to hear later "damn it, it all sucks" :)


P.S. It is not true that the only meaningful way to handle "closed" structures is by FFT; Momentum as well as EM3DS use different approaches, and I would not call their results inaccurate either. EM3DS embodies an "asymptotic estimator" (the name implies ERROR), and we found it to work superbly on many circuits, regardless of the noun "estimation".

Hi Cheng -- You raise some good points. Much of my early career was spent in microwave measurements and ANA work. I was even a member of the ARFTG Excom (and I still interact with those guys too). (Automated RF Techniques Group, if you do measurements, you should join the group.)

If you look at my past postings, you will notice that I often recommend both a shielded (i.e., our own tool, Sonnet) and an unshielded (i.e., Agilent Momentum).

A good microwave designer has to look at the complete error picture, and that includes measurement error, and fabrication error, as well as analysis error. I suggest looking at it like a budget. You are given a maximum amount of error that you can "spend". For the final product, your total error is the sum of all three. In the present day, I only have control over the analysis error. So, we have made that error as small as possible. This lets you devote the maximum amount of error budget to measurement and fabrication error.

One reason I am very sensitive to the issue of reducing error is I spent the first eight years of my career doing microwave design, first on Alumina, then on GaAs. I have had to meet +/- 0.1 dB specs. I have had to get 2% bandwidth filters to meet +/-0.1% bandpass and +/- 0.5 nS group delay requirements. And on and on and on. I'm not sure, but I think I am the only EM vendor founder that started out doing a few years of front line microwave design. This means I look at things differently from the others who have never had to do a real design and meet a real spec.

You are completely correct, in some situations, measurement/fabrication error swamps out the analysis error. The situations I describe above, analysis error swamps out the measurement/fabrication error. These are the situations where low analysis error can be helpful. Which situation happens "most of the time" depends on what you design. And this is a consideration in what tool you use.

I will point out that if in your case, measurement error swamps out analysis error most of the time, you should consider just using circuit theory and not use EM at all. Circuit theory nodal analysis is much much faster. A skilled designer can come pretty close using a good nodal analysis (I know, I used to do it all the time long before EM tools were available).

I am guessing that the asymptotic estimator you mention in EM3DS might be interpolation. All EM tools have interpolation now, and they work moderately well for limited bandwidth. Sonnet's interpolation (ABS, Adaptive Band Synthesis) is different: it can interpolate over extremely broad bands and it uses far fewer data points. For example, it can interpolate a complete 6-resonator bandpass filter response with analysis at only 4 data points.

Bottom line, as I said before: it is best to use multiple tools and be familiar with the advantages and disadvantages of each. If you do that and your competitors do not, you will win.

The "asymptotic estimator" in EM3DS might be referring to a technique to estimate the "tail" of the infinite summation. If this is the case, then such a value is similar to the "static" component of the Sommerfeld integral in the unshielded case. The static component is independent of frequency (freq^0). If EM3DS keeps higher-order terms, then it can be viewed as some sort of frequency interpolation technique. What is relevant to the above discussion is that the "noise floor", or error due to integration/summation as discussed above, must have been an important issue in designing such an "asymptotic estimator". If they had a perfect standard for accuracy, they would have gotten an asymptotic calculator. "Asymptotic estimator" sounds more logical because the exact value of the tail is difficult, if not impossible, to obtain, even if computing resources (time, etc.) are not restricted. It is therefore difficult to know the magnitude of the error ("noise"). Even though EM3DS doesn't make any claim as to its "noise floor", people would still think it gives the correct answer.

