Hi all,

I'm having trouble understanding how 1/f noise is modeled in LTspice. This came up because I'm doing some analytical noise modeling for a photodetector preamplifier circuit, and I wanted to back the analysis up with LTspice simulations. What I was after was a simple transformation from the SPICE MOSFET flicker noise parameter (KF) to the flicker noise constants in the equations of my analytical model. I was getting inconsistencies when comparing the two, so as a next step I tried to simplify things and just understand the SPICE results for a single MOSFET. This is where I really started to confuse myself.

Attached is the circuit I'm using for the simulation. The idea was simply to generate a noise spectrum and then, from the resulting data, work backwards to derive the KF parameter and check that it matched what I specified. Instead, I ended up with a number about five orders of magnitude too low.

I specified a level 1 MOSFET in my model, which, according to the references I've looked at, uses the following expression for the drain-source flicker noise current density:

Sid(f) = KF * Id^AF / (Cox * Leff^2 * f)    [A^2/Hz]

where AF defaults to 1, Cox is the gate oxide capacitance per unit area, and Leff is the effective channel length.
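(As a sanity check, here's how I evaluate that expression numerically in MATLAB; the values below are placeholders rather than my exact model parameters, and AF is left at its SPICE default of 1.)

% Quick numeric check of the flicker noise expression above.
% All values are placeholders, not my exact model parameters.
KF   = 1e-26;      % flicker noise coefficient (value I put in the .model)
AF   = 1;          % flicker noise exponent (SPICE default)
Id   = 10e-3;      % drain bias current [A]
Cox  = 1.7e-3;     % oxide capacitance per unit area [F/m^2] (placeholder)
Leff = 0.8e-6;     % effective channel length [m] (placeholder)

f   = logspace(0, 5, 200).';                 % 1 Hz .. 100 kHz
Sid = KF * Id^AF ./ (Cox * Leff^2 * f);      % drain noise current PSD [A^2/Hz]
loglog(f, sqrt(Sid)), grid on
xlabel('Frequency [Hz]'), ylabel('Drain noise current [A/sqrt(Hz)]')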
To extract KF from the simulated data, I exported the noise voltage density at the output (Vo) and converted it to a drain noise current by dividing by the drain resistance R1 (all calculations in MATLAB). I then did a linear fit of log10(noise current squared) versus log10(frequency) over the 1/f region (see attached output plot). Ignoring thermal noise, which is insignificant relative to the 1/f noise over this range, the slope of that fit should be -1 and the intercept b is:

b = log10( KF * Id / (Cox * Leff^2) ),   so   KF = 10^b * Cox * Leff^2 / Id
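
(The MATLAB side of this, condensed; the file name, R1 value, and fit band below are placeholders, not my exact ones:)

% Condensed version of the extraction script.
raw = importdata('noise_export.txt');    % LTspice ASCII export, assuming one header line
f   = raw.data(:,1);                     % frequency [Hz]
vn  = raw.data(:,2);                     % V(onoise) [V/sqrt(Hz)]

R1   = 1e3;                              % drain resistance [ohm] (placeholder)
in2  = (vn / R1).^2;                     % drain noise current PSD [A^2/Hz]

band = f >= 10 & f <= 1e4;               % 1/f-dominated region (set from the plot)
p    = polyfit(log10(f(band)), log10(in2(band)), 1);
fprintf('slope = %.2f (expect about -1), intercept b = %.2f\n', p(1), p(2))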
In my NMOS model I specified the parameters needed to uniquely determine Cox and Leff (TOX, L, and LD), and Id is about 10 mA. When I plug in the numbers, I get KF ~= 5E-31 instead of the 1E-26 I specified in the model. I'm completely stumped as to why the result doesn't match the input.
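
(And the back-calculation itself, for reference; again, everything except the formula is a placeholder, with b standing in for the fitted intercept from above:)

% Back-calculate KF from the fitted intercept (AF = 1 assumed).
b      = -17;                 % fitted log10 intercept (placeholder)
Id     = 10e-3;               % drain bias current [A]
eps_ox = 3.9 * 8.854e-12;     % SiO2 permittivity [F/m]
TOX    = 20e-9;               % oxide thickness [m] (placeholder)
L      = 1e-6;                % drawn channel length [m] (placeholder)
LD     = 0.1e-6;              % lateral diffusion [m] (placeholder)
Cox    = eps_ox / TOX;        % oxide capacitance per unit area [F/m^2]
Leff   = L - 2*LD;            % effective channel length [m]
KF     = 10^b * Cox * Leff^2 / Id    % should reproduce the .model KF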

Any advice would be greatly appreciated,

Thanks!