Difference between 180nm and 130nm with same W/L ratio


jaydnul

I'm curious what the difference would be. If you sized both of them at, say, 5um/1um, would there be any performance differences? If so, why?

Thanks!
 

Oxide thickness also tends to track the nominal gate dimension, and it remains a factor even at gross W and L. The reliable working voltage is reduced; whether you gain or lose drive strength depends on how much (VGS-VT)^2 overdrive you can get before you break or wear out the gate. VT is often moved lower to compensate for the lower maximum voltage, which then costs you subthreshold (including "off") leakage.

If you were really curious you'd get down to specific cases, because there are lots of "knobs" that can be set and traded a lot of ways.
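
As a rough illustration of that drive-versus-leakage tradeoff, here is a minimal sketch using the long-channel square-law model and a simple exponential subthreshold model. All of the parameter values (mu*Cox, VDD, VT, subthreshold slope factor, I0) are assumed round numbers for illustration, not data for any real 180nm or 130nm process.

Code:
# Rough square-law comparison of on-current and off-state leakage for two
# hypothetical nodes with the same drawn W/L = 5/1.
# Every number below is an illustrative assumption, not foundry data.
import math

def drive_current(w_over_l, mu_cox, vdd, vt):
    # Long-channel saturation current: Id = 0.5 * mu*Cox * (W/L) * (VDD - VT)^2
    return 0.5 * mu_cox * w_over_l * (vdd - vt) ** 2

def off_leakage(i0, vt, n=1.5, kt_q=0.026):
    # Crude subthreshold leakage at VGS = 0: Ioff ~ I0 * exp(-VT / (n * kT/q))
    return i0 * math.exp(-vt / (n * kt_q))

W_OVER_L = 5.0  # same drawn ratio in both cases

# node name: (mu*Cox [A/V^2], VDD [V], VT [V]) -- assumed placeholder values
nodes = {
    "180nm-like": (200e-6, 1.8, 0.45),
    "130nm-like": (300e-6, 1.2, 0.35),
}

for name, (mu_cox, vdd, vt) in nodes.items():
    ion = drive_current(W_OVER_L, mu_cox, vdd, vt)
    ioff = off_leakage(1e-7, vt)
    print(f"{name}: Ion ~ {ion * 1e3:.2f} mA, Ioff ~ {ioff:.1e} A")

With these placeholder numbers the lower-VT, lower-VDD device actually loses some drive, because the overdrive shrinks faster than mu*Cox grows, and it leaks roughly an order of magnitude more at VGS = 0; that is the kind of knob-trading described above.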
 
I guess I'm wondering why the oxide thickness tracks the gate dimension. If you have a thinner oxide you'll have a bigger Cox and a lower threshold, but why can't a 180nm process have exactly the same oxide thickness as a 130nm process?

Also, take a 180nm process: the smallest pattern you can generate on the mask is usually something like 35nm, so why can't you make the polysilicon gate 35nm instead of 180nm? Is there something I am missing?
 

The lateral D-S field that can be stood off at Lmin will determine the working voltage (subject to many features that modify hot-carrier effects). Any gate oxide thicker than what's needed to reliably stand off that max working voltage (subject to similar caveats) is just leaving performance on the table. You can't really optimize one without the other (you could go there, but it will not be optimum).
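
To put rough numbers on that coupling, here is a minimal sketch: if you cap the vertical oxide field at some reliability limit, the thinnest usable oxide scales with the working voltage, and Cox = eps_ox/tox follows directly. The field limit and the supply voltages below are assumed, textbook-style figures, not process data.

Code:
# Back-of-envelope link between working voltage, minimum reliable tox, and Cox.
# The oxide-field limit and supply voltages are illustrative assumptions.
EPS_OX = 3.45e-11   # permittivity of SiO2, F/m (3.9 * 8.85e-12)
E_OX_MAX = 5e8      # assumed reliable oxide field limit, V/m (= 5 MV/cm)

def min_tox(vdd, e_max=E_OX_MAX):
    # Thinnest oxide that still stands off VDD at the assumed field limit.
    return vdd / e_max

def cox_per_area(tox):
    # Gate capacitance per unit area, F/m^2.
    return EPS_OX / tox

for vdd in (1.8, 1.2):  # e.g. 180nm-class vs 130nm-class core supplies
    tox = min_tox(vdd)
    cox = cox_per_area(tox)
    print(f"VDD = {vdd} V -> tox >= {tox * 1e9:.1f} nm, Cox <= {cox * 1e3:.2f} fF/um^2")

The point of the sketch is just the direction of the relationship: a lower working voltage permits a thinner oxide, which buys you Cox (and drive), while keeping a thicker-than-necessary oxide at the lower voltage is the performance left on the table.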

Where density is king, geometry goes where it's led. Shrinking the FET gate is a proxy for many other co-shrinks.

As to your final question: you could, but then it would be (advertised as) a 35nm technology and would not live at a 1.8V core voltage. Maybe this is a don't-care (it often is). You trade mask & wafer cost, electrical performance and functional density for the product under development, to pick a capable, least-cost (you hope) manufacturing solution. If 180nm does the job, why go to 35nm at 4X the wafer cost (rough guess) if you are not going to see the billion-devices-per-year economies of scale on the back end?
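
To make the cost side of that concrete, here is a toy per-die cost model. Every number in it (wafer cost, mask-set NRE, dies per wafer, the 4X/5X scaling) is an assumption for illustration only; the "4X" above is itself labelled a rough guess.

Code:
# Toy per-die cost: (mask-set NRE + wafers * wafer cost) / total dies.
# All prices and die counts are made-up illustrative numbers.
def cost_per_die(wafer_cost, mask_nre, total_dies, dies_per_wafer):
    wafers = total_dies / dies_per_wafer
    return (mask_nre + wafers * wafer_cost) / total_dies

# node name: (wafer cost $, mask NRE $, dies per wafer) -- assumed values
NODES = {
    "180nm-like": (1_000, 100_000, 2_000),
    "finer node": (4_000, 500_000, 10_000),  # ~4x wafer cost, assume ~5x dies/wafer
}

for volume in (10_000, 1_000_000, 100_000_000):
    summary = ", ".join(
        f"{name}: ${cost_per_die(w, nre, volume, dpw):.2f}/die"
        for name, (w, nre, dpw) in NODES.items()
    )
    print(f"{volume:>11,} dies -> {summary}")

With these made-up numbers the finer node only pulls ahead at very high volume, once the mask NRE is amortized and the density gain outweighs the pricier wafers, which is the economies-of-scale point above.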
 
