Results 1 to 4 of 4
  1. #1
    Newbie level 4 · Joined Oct 2017 · Posts: 7 · Helped: 0 / 0 · Points: 330, Level: 3

    Difference between 180nm and 130nm with same W/L ratio

    I'm curious what the difference would be. If you sized both devices the same, say 5um/1um, would there be any performance differences? If so, why?

    Thanks!

  2. #2
    Advanced Member level 5 · Joined Mar 2008 · Location: USA · Posts: 6,278 · Helped: 1823 / 1823 · Points: 38,807, Level: 48

    Re: Difference between 180nm and 130nm with same W/L ratio

    Oxide thickness also tends to track the nominal gate dimension,
    and it remains a factor even at gross W and L. The reliable working
    voltage is reduced; whether you gain or lose drive strength
    depends on how much (V-VT)^2 overdrive you can develop before you
    break down or wear out the gate. VT is often moved lower to
    compensate for the smaller maximum voltage, which then costs you
    subthreshold (incl "off") leakage.

    If you were really curious, you'd get down to specific cases, because
    there are lots of "knobs" that can be set / traded a lot of ways.
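
    The gain-or-lose trade above can be sketched with the long-channel square law. Every process number below (VDD, VT, tox, mobility) is an assumed illustrative value for a generic node, not from any real PDK; only the 5um/1um sizing and the 1.8V supply come from the thread.

    ```python
    # Back-of-the-envelope square-law comparison of the same 5um/1um
    # device in a generic 180nm vs 130nm node. All process numbers are
    # assumed "typical" illustrative values, not real PDK data.
    EPS_OX   = 3.9 * 8.854e-12   # F/m, permittivity of SiO2
    MU_N     = 0.035             # m^2/(V*s), assumed electron mobility
    W_OVER_L = 5.0               # same 5um/1um sizing in both nodes

    def drive_current(vdd, vt, tox):
        """Saturation drive current from the long-channel square law."""
        cox = EPS_OX / tox                       # gate cap per area, F/m^2
        return 0.5 * MU_N * cox * W_OVER_L * (vdd - vt) ** 2

    i_180 = drive_current(vdd=1.8, vt=0.45, tox=4.0e-9)  # assumed 180nm
    i_130 = drive_current(vdd=1.2, vt=0.35, tox=2.2e-9)  # assumed 130nm
    print(f"180nm: {i_180*1e3:.2f} mA   130nm: {i_130*1e3:.2f} mA")
    ```

    With these assumed numbers the thinner 130nm oxide (bigger Cox) does not fully recover the overdrive lost to the reduced supply, so the identically sized device actually drives less current; shift VT or tox a little and the comparison flips, which is exactly the "gain or lose depends on the knobs" point above.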


    1 member found this post helpful.

  3. #3
    Newbie level 4 · Joined Oct 2017 · Posts: 7 · Helped: 0 / 0 · Points: 330, Level: 3

    Re: Difference between 180nm and 130nm with same W/L ratio

    I guess I'm wondering why the oxide thickness tracks the gate dimension. If you have a thinner oxide, you'll have a bigger Cox and a lower threshold, but why can't a 180nm process have the exact same oxide thickness as a 130nm process?

    Also, take a 180nm process: the smallest pattern you can generate on the mask is usually something like 35nm, so why can't you make the polysilicon gate 35nm instead of 180nm? Is there something I'm missing?



  4. #4
    Advanced Member level 5 · Joined Mar 2008 · Location: USA · Posts: 6,278 · Helped: 1823 / 1823 · Points: 38,807, Level: 48

    Re: Difference between 180nm and 130nm with same W/L ratio

    The lateral D-S field that can be stood off at Lmin determines
    the working voltage (subject to the many features that modify
    hot-carrier effects). Any gate oxide thicker than what's needed
    to reliably stand that maximum working voltage (again, with many
    caveats) just leaves performance on the table. You can't really
    optimize one without the other (you could go there, but it will
    not be optimum).

    Where density is king, geometry goes where it's led.
    Shrinking the FET gate is a proxy for many other co-shrinks.

    As to your final question: you could, but then it would be
    (advertised as) a 35nm technology and would not live at a
    1.8V core voltage. Maybe that's a don't-care (it often is).
    You trade mask and wafer cost, electrical performance, and
    functional density for the product under development, to pick
    a capable least-cost (you hope) manufacturing solution. If
    180nm does the job, why go to 35nm at roughly 4X the wafer
    cost if you are not going to see the billion-devices-per-year
    economies of scale on the back end?
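
    The coupling between working voltage and oxide thickness described above can be sketched numerically. The reliable-field limit used here is an assumed round number (~5 MV/cm) for illustration, not a process spec: any oxide much thicker than this minimum only sacrifices Cox and drive.

    ```python
    # Hedged sketch: thinnest gate oxide that can reliably stand off a
    # given working voltage, assuming a derated long-term oxide field
    # limit. E_RELIABLE is an illustrative assumption, not a spec.
    E_RELIABLE = 5e8   # V/m (~5 MV/cm), assumed reliability-derated field

    def min_tox_nm(vdd):
        """Minimum oxide thickness (nm) keeping field <= E_RELIABLE."""
        return vdd / E_RELIABLE * 1e9

    for vdd in (1.8, 1.2):
        print(f"VDD = {vdd} V  ->  tox >= {min_tox_nm(vdd):.1f} nm")
    ```

    Under this assumption a 1.8V core needs roughly a 3.6nm oxide while a 1.2V core can thin down to about 2.4nm, which is why the oxide "tracks" the node: the lower working voltage that comes with a shorter Lmin is what permits (and, for performance, demands) the thinner oxide.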


