
Trying to get my head around SMPS compensation - stability problem


KX36

Hi all,

I'm trying to teach myself SMPS design. It's all theory, SPICE and ideal components at the moment, and things are going quite well.

Using International Rectifier's AN-1162, I have mostly working SPICE simulations of various buck and buck-derived converters with both type II and type IIIA compensation, but so far I've used an ideal opamp with GBWP=1GHz.

Here's the problem: in my small-signal analysis of a forward converter, I can lower the GBWP as far as 5-10MHz with type IIIA compensation and still have >60 degrees of phase margin at crossover, both on the error amplifier's local loop and on the full loop (error amplifier + power stage). Yet in the switching simulation, if I lower the GBWP to 100MHz there is 4.7kHz ringing at the point where the output voltage reaches the target and the error amp takes over regulation, and if I lower it below 50MHz there is sustained 4.7kHz oscillation from that point on. I think this means it's the error amplifier's local loop that's unstable, but the phase at 4.7kHz is about 250 degrees (it's in the frequency range between the compensator's zeros 1 and 2, near the power stage's LC double pole, and the compensator's poles 2 and 3, near the output capacitor's ESR zero), and the phase doesn't change much when the GBWP is adjusted, so I don't see why it oscillates at this frequency or why increasing the GBWP stops it.
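(For reference, by "zeros 1 and 2" and "poles 2 and 3" I mean the usual type III shape, roughly Gc(s) = Gc0*(1 + s/wz1)*(1 + s/wz2) / [ s*(1 + s/wp2)*(1 + s/wp3) ], with the zero pair placed near the power stage's LC double pole and the pole pair near the ESR zero, plus the opamp's own GBWP pole on top of all that; the exact AN-1162 notation may differ slightly.)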

One thing I wasn't sure about: in the small-signal linear model I've treated the transformer as an ideal turns ratio, essentially just modeling a buck converter whose input voltage is the transformer's secondary voltage. Is this where I'm going wrong?

Can anyone tell me what's likely to be the problem? I may post a screenshot of SPICE later if it will help, but I'm not at my own PC at the moment.

Cheers,
Matt

EDIT: Just for some more background info:
Switching frequency 100kHz, Crossover frequency 10kHz,
Power stage LC double pole frequency around 300Hz,
Power stage ESR zero frequency around 30kHz (IIRC)
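(For anyone following along, these come from the usual relations f_LC = 1/(2*pi*sqrt(L*C)) and f_ESR = 1/(2*pi*ESR*C); purely as an illustration, values in the region of L = 280uH, C = 1000uF and ESR = 5.3mohm would land near 300Hz and 30kHz. Those aren't necessarily my actual component values.)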
 

It might be that your linear model is stable because it's allowed to ignore the true limits of the forward converter (duty cycle is limited to between 0% and 50%). Also the switching simulation may not be operating in CCM/DCM as expected.
 

Hi, thanks for the reply. I'm not entirely sure how to account for the duty cycle limit in the linear model. My switching model limits the duty cycle by clamping the error voltage to at most half of the ramp peak voltage, rather than letting it swing over the whole ramp range and then using logic blocks to blank every other cycle, as lots of real controllers seem to do.

I have messed around a lot with the circuit when I've had the chance, and I now think the oscillation may be caused by the current limit comparator section. The inductor current rises to the limit, causing Vout to rise above its target value and Ve to peak at its upper limit; then the inductor discharges right down to 0A, Vout falls, Ve falls to its lower limit, and the cycle repeats. I don't know what to do about it, or why raising the voltage error amplifier's GBWP very high sorts it out, and I'd like to know.

**broken link removed**
**broken link removed**



For now, though, I have put in a filter capacitor with higher capacitance and ESR to move the ESR zero below the crossover frequency, and used type II compensation. I found a seemingly unorthodox way to implement type II compensation by putting the compensation zero in the input part of the type III style compensator opamp instead of the feedback part (sorry for the lack of terminology), and then having an extra pole in the input part and an equal zero in the feedback part cancel each other out (which, from what I've read, seems like it won't work in the real world). I don't know why, but this seems to let the opamp get away with a lower GBWP than the arrangement described in the IR AN-1162 I linked in the first post, even though the loop gain is identical up to the GBWP pole. I got this from **broken link removed**, although the description there of how it's supposed to work is brief. He puts the cancelling pole and zero at about the same frequency as the ESR zero for some reason (his ESR zero is at 3.6kHz, the pole at 3.6kHz and the zero at 3kHz, and he says the first two cancel and the final zero does the compensation).
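(To put that in formula terms, and this is only a rough sketch of what I mean: a plain type II compensator is roughly Gc(s) = Gc0*(1 + s/wz) / [ s*(1 + s/wp) ]; the variant I built realises the compensation zero in the input network, then adds an extra pole in the input network and an equal-frequency zero in the feedback network, so on paper the extra pair cancels and the ideal loop gain below the GBWP pole is unchanged.)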

Below are the genomerics example and the same loop gain with the more typical type II circuit. What I don't really see is why the GBWP pole moves, meaning the latter needs a higher bandwidth opamp if the rest of it is the same.
**broken link removed**
**broken link removed**
 

You need to account for the effective phase shift due to the sampling frequency, which equals 180deg*F/100kHz. The linear analysis will not show this effect. Look at the last plot, for example: crossover is at ~30kHz, so there is another 54 degrees of phase shift you need to sketch in. What looks like about 70 degrees of phase margin is, in reality, only 16 degrees. Most circuit analysis programs have a delay line which you should put in (at your 10us sampling period) to show the true effect of a sampled system.
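As a rough LTspice-style illustration (node names are made up, and this is only one way to do it): you can put a Laplace-defined source in series with the feedback signal in your AC loop-gain simulation so the bode plot picks up the delay phase automatically, e.g.

E1 fb_delayed 0 fb 0 Laplace=exp(-5e-6*s)

A 5us delay gives 360deg*F*5us = 180deg*F/100kHz of phase lag, i.e. the term above; a full 10us period would give double that.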
 
Thanks for the answer. I'm not sure I understand correctly, but it seems to me you're saying the problem I'm having is extraneous phase shift due to a transient SPICE analysis being limited to a sampling rate. Sure enough, if I increase the sampling rate by decreasing the maximum timestep (below 5us in this simulation), things completely sort themselves out. Does this mean it wouldn't be an issue in a real analog circuit, and is just an artifact of the digital nature of SPICE?
 

**broken link removed**
In this image I think I can grasp your overall intent, except for a couple of things. What is that arbitrary source Blim doing? What's controlling it? As far as I can tell your EA output is not connected, so I don't think it can be the source of the oscillation.

Your AC plots look okay for the most part, and I doubt the oscillation shown in the first image is really an issue with EA bandwidth. It looks like a more fundamental error leading to a relaxation oscillation. If you post your .asc file for the first simulation (the transient one), I can probably figure out the issue.
 

It's not the SPICE speed. The sampling rate is your 100kHz switching frequency, and so the Nyquist frequency is that same 100kHz. I came into power electronics design long ago from a control-theory background and started putting this effect into all my bode plots. On SPICE phase plots I always add this "Freq/Fsw*180" term. It explains a lot of instability problems after the designer concludes something like "oh, it must be tolerance problems". I found that this effect is not well known, even among seasoned power electronics engineers.
 

In this image I think I can grasp your overall intent, except for a couple of things. What is that arbitrary source Blim doing? What's controlling it? As far as I can tell your EA output is not connected, so I don't think it can be the source of the oscillation.

Your AC plots look okay for the most part, and I doubt the oscillation shown in the first image is really an issue with EA bandwidth. It looks like a more fundamental error leading to a relaxation oscillation. If you post your .asc file for the first simulation (the transient one), I can probably figure out the issue.

Blim limits the ideal opamp output V(3) to between 0V and Dmax*Vosc (0.5*5V = 2.5V), so you can see it as part of the opamp. It was originally there to limit the ideal opamp output to some power rails, but then I saw it was an easy way to limit the duty cycle too. As I say, though, this is not the usual way of limiting the duty cycle; lots of real controllers would have Ve swing between 0V and Vosc (i.e. double what my Ve is for a given duty cycle) and then halve the duty cycle later with logic blanking half the PWM pulses.
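(Roughly what Blim amounts to as an LTspice-style behavioral source; the output node name here is just illustrative, and the exact expression in my schematic may differ:

Blim Ve 0 V=min(max(V(3),0),2.5) ; clamp the EA output to 0..Dmax*Vosc = 0..2.5V

so it reproduces the ideal opamp output but clipped to the 0-2.5V range.)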

As I have said, if I set the maximum timestep to 5us (SPICE effectively sampling at 200kHz, twice the Nyquist frequency apparently), this oscillation disappears, and if I set it to 100ns another, relatively insignificant, apparent oscillation disappears, so I think this means it was actually stable all along, but I'm not sure. Thankfully, DarrellTH seems to know all about it, though. :)

It's not the SPICE speed. The sampling rate is your 100kHz switching frequency, and so the Nyquist frequency is that same 100kHz. I came into power electronics design long ago from a control-theory background and started putting this effect into all my bode plots. On SPICE phase plots I always add this "Freq/Fsw*180" term. It explains a lot of instability problems after the designer concludes something like "oh, it must be tolerance problems". I found that this effect is not well known, even among seasoned power electronics engineers.

Darrell, when you say you always add this "Freq/Fsw*180" term on SPICE phase plots, is there a way to have SPICE do it for you, or do you have to do it manually?

Thanks guys,
Matt
 

Blim limits the ideal opamp output V(3) to between 0V and Dmax*Vosc (0.5*5V = 2.5V), so you can see it as part of the opamp. It was originally there to limit the ideal opamp output to some power rails, but then I saw it was an easy way to limit the duty cycle too. As I say, though, this is not the usual way of limiting the duty cycle; lots of real controllers would have Ve swing between 0V and Vosc (i.e. double what my Ve is for a given duty cycle) and then halve the duty cycle later with logic blanking half the PWM pulses.
You can just do this by using the universal opamp model with supply rails...
As I have said, if I set the maximum timestep to 5us (SPICE effectively sampling at 200kHz, twice the Nyquist frequency apparently), this oscillation disappears, and if I set it to 100ns another, relatively insignificant, apparent oscillation disappears, so I think this means it was actually stable all along, but I'm not sure. Thankfully, DarrellTH seems to know all about it, though. :)
5us is huge and will definitely screw up your results. When I simulate SMPS designs I generally use a 100ns max step size at most, sometimes under 10ns for high-frequency designs. It can really make a difference.
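For example (an LTspice-style directive; the 20ms stop time is just a placeholder):

.tran 0 20m 0 100n

The fourth number is the maximum timestep, which forces the solver to resolve well inside each 10us switching period instead of stepping over it.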
 

Yeah, before about 3 months ago I'd never really tried to simulate anything where high-frequency content really mattered (e.g. where the stability of a feedback loop depended on it). Just one of those little things one has to get stuck on and learn from, I suppose.

Thanks again guys, you've been really helpful!

Matt
 
