
[Amplifier compensation] Nulling resistance realisation with mosfet


CAMALEAO

Hi everyone,

I designed a simple two-stage amplifier and ran a small experiment with it: I compared the traditional compensation using a simple resistor against the same compensation using a MOSFET as the resistor.

I found one resistance value that gave me a reasonable phase margin and so on. Then, when I implemented the compensation with the MOSFET, I set it to more or less the same resistance value.

However, I got a different phase margin, which at first sight doesn't make sense. Then I drew a small equivalent circuit using only the parasitic capacitances of the MOSFET in the second stage and of the MOSFET used as a resistor, and I saw that the problem could be related to this.
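For reference, part of the sensitivity you're seeing can be estimated from the standard nulling-resistor zero expression for a two-stage Miller-compensated amplifier. A minimal sketch (gm2, Cc, and the ±30% spread are illustrative assumptions, not values from this thread):

```python
# Sketch: how the compensation zero moves when the MOSFET's triode
# resistance deviates from the intended nulling value.
# z = 1 / (Cc * (1/gm2 - Rz)); all numbers are illustrative assumptions.

import math

gm2 = 1e-3      # assumed second-stage transconductance, 1 mS
Cc  = 2e-12     # assumed Miller compensation capacitor, 2 pF

def zero_hz(Rz):
    """Signed zero frequency of the compensated two-stage amp.
    Rz < 1/gm2  -> positive result: right-half-plane zero (hurts phase margin)
    Rz = 1/gm2  -> zero pushed to infinity
    Rz > 1/gm2  -> negative result: left-half-plane zero"""
    denom = Cc * (1.0 / gm2 - Rz)
    if denom == 0:
        return math.inf
    return 1.0 / (2 * math.pi * denom)

Rz_nominal = 1.0 / gm2                 # 1 kOhm: cancels the zero exactly
for err in (-0.3, 0.0, +0.3):          # e.g. triode resistance varying with Vds
    Rz = Rz_nominal * (1 + err)
    print(f"Rz = {Rz:7.1f} ohm -> zero at {zero_hz(Rz)/1e6:8.2f} MHz")
```

With these assumed values a ±30% error in Rz moves the zero from infinity down to a few hundred MHz, so even modest deviation of the MOSFET's effective resistance (plus its extra parasitics) can visibly change the phase margin.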

Has anyone experienced this? Do you guys normally use this kind of compensation with a MOSFET as a resistor? Can you share your thoughts on this?

Regards.
 

Where do you connect this MOSFET (output of the 1st or 2nd stage)? What is the ratio of this MOSFET's gate capacitance to the load capacitance and the output-stage capacitance?

Moreover, on-chip components have terribly poor absolute-value accuracy (around ±20%). This MOSFET should be matched to the one in the output stage (and to the current mirror loading the first stage).
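To illustrate why that matching helps: if the nulling device sits in triode and the output device in saturation with the same overdrive, the product Rz*gm2 reduces to a geometry ratio and survives the absolute process spread. A rough square-law sketch (all device numbers are illustrative assumptions):

```python
# Sketch: matched nulling MOSFET tracking the output device.
# Triode:     R_on ~ 1 / (kp * (W/L) * Vov)
# Saturation: gm   ~      kp * (W/L) * Vov
# With shared kp (process) and Vov (matched bias), Rz*gm2 depends only
# on the W/L ratio, not on the ~±20% absolute spread of kp.
# All numbers are illustrative assumptions.

def r_triode(kp, w_over_l, vov):
    return 1.0 / (kp * w_over_l * vov)

def gm_sat(kp, w_over_l, vov):
    return kp * w_over_l * vov

vov = 0.2                            # assumed shared overdrive, V
for kp in (80e-6, 100e-6, 120e-6):   # ±20% process spread, A/V^2
    Rz  = r_triode(kp, 10, vov)      # nulling device, W/L = 10
    gm2 = gm_sat(kp, 50, vov)        # output device, W/L = 50
    print(f"kp={kp*1e6:5.0f} uA/V2: Rz={Rz:7.0f} ohm, "
          f"gm2={gm2*1e3:.2f} mS, Rz*gm2={Rz*gm2:.2f}")
```

Here Rz*gm2 stays fixed at 50/10 = 5 across all three corners, while Rz alone swings ±20%; that ratio-tracking is the whole point of biasing the nulling device off a replica of the output stage.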
 

I have seen this done in multiple CMOS op amps. The
biasing is critical, and the nonlinear behavior may hurt
large signal distortion etc. in ways that small signal
analysis will not show.

The pluses are high resistance for low area, and the possibility that the compensation corner can track (or "compensate") variations in gain and bandwidth from the other amplifier devices, since it's made with the same devices.

I don't prefer it myself; I've always found it kind of touchy. But I tend to work on stuff with wider temperature ranges, "other" unpleasant environments, and sloppy fabs. In a fab that runs real tight, for small-signal work where every square micron matters, though, it's an often-done thing.

The self-capacitance of the device (the thin-oxide area, which is returned to someplace (=?)) can be a new issue you introduce by doing this. Pushing output signal (or prior-stage signal, still high swing) back onto your bias network can give peculiar behaviors that take some thought to untangle. If you go this way, you might want a more isolated bias to the "resistor" device's gate.
 

Thanks for the replies.

Dick, first: based on your response, I guess there is no straight answer to the problems that using the MOSFET as a resistor can cause in the circuit, is that it? Do you know of any study that may have been done on this in the past? Second: is one way to avoid the unwanted effects on the biasing side to use isolated MOSFET devices for the transistor acting as a resistor and for the transistors in the biasing network, for example triple-well devices? If not, can you explain in other words what you mean by "might want a more isolated bias to the 'resistor' device's gate"?

Thanks.
 

Re "isolated", I mean that (for example) the gate of the FET-as-resistor should not be tied to one of the prime bias rails (where output activity would push back into it). Instead, make a separate branch with its own current feed and Vgs stack that produces the same bias point, but where any fed-back current will affect only the FET gate bias node and nothing else.
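As a back-of-envelope illustration of sizing such a separate gate-bias branch (simple square-law triode model; every device value below is an illustrative assumption, not from this thread):

```python
# Sketch: picking the gate voltage a dedicated replica branch must produce
# so the FET-as-resistor hits a target Rz around the node's common-mode level.
# Triode: R_on = 1 / (kp * (W/L) * (Vgs - Vt))  ->  Vgs = Vt + 1/(kp*(W/L)*Rz)
# All numbers are illustrative assumptions.

kp, w_over_l, vt = 100e-6, 10.0, 0.5   # assumed device parameters
v_node_cm        = 0.9                 # assumed common-mode at the resistor terminal, V
Rz_target        = 5000.0              # target nulling resistance, ohm

vgs_needed = vt + 1.0 / (kp * w_over_l * Rz_target)
vg_bias    = v_node_cm + vgs_needed    # gate voltage the replica branch must set
print(f"required Vgs = {vgs_needed:.3f} V, gate bias = {vg_bias:.3f} V")
```

Since only this branch drives the gate, signal kicked back through the resistor device's gate capacitance disturbs this one node rather than a shared bias rail.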
 
