
Buck Regulator efficiency vs. Input voltage

Status
Not open for further replies.

faisal78

Hello friends,
I've been reading about buck regulators, and I'm wondering why, for a fixed output voltage and output load, a regulator is more efficient at lower Vin than at higher Vin.
e.g. https://www.ti.com/lit/ds/symlink/tps62590-q1.pdf datasheets.
See Figure 2: there is a ~5% efficiency difference between Vin=5 V and Vin=2.7 V at Vout=1.8 V.
I had always thought that at higher input voltages the duty cycle is smaller, hence better efficiency, since the converter is 'sleeping' more.

The reason I'm asking is that I have a 7.4 V lithium-ion battery from which I have to power a 1.8 V MCU.
There are also a couple of other 2.8 V regulators in my system.
Most power regulators have an upper input limit of 5.5 V, so I need a pre-buck regulator in front of the lower-voltage bucks/LDOs.

So I have two bucks in series from 7.4 V, as an intermediate step-down.
I am trying to find the sweet spot for this intermediate buck regulator.
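One way to hunt for that sweet spot is to sweep the intermediate rail voltage and multiply the two stage efficiencies. A minimal sketch, assuming a made-up per-stage efficiency model (the coefficients below are placeholders, not from any datasheet; substitute the real Figure-2-style curves for the actual parts):

```python
# Sweep the intermediate rail for two cascaded bucks (7.4 V battery -> pre-buck
# -> 1.8 V buck).  The per-stage efficiency model is a toy placeholder.
VBAT, VOUT, IOUT = 7.4, 1.8, 0.1
P_LOAD = VOUT * IOUT

def eta(vin, vout, iout):
    """Toy buck efficiency: proportional loss plus a Vin-dependent term."""
    pout = vout * iout
    ploss = 0.05 * pout + 0.002 * vin      # illustrative coefficients only
    return pout / (pout + ploss)

def overall(vmid):
    e2 = eta(vmid, VOUT, IOUT)             # second stage: vmid -> 1.8 V
    i_mid = P_LOAD / e2 / vmid             # current the pre-buck must supply
    return eta(VBAT, vmid, i_mid) * e2     # cascaded efficiency

best = max((overall(v / 10), v / 10) for v in range(20, 56))  # 2.0 .. 5.5 V
print(f"best intermediate rail ~ {best[1]:.1f} V, overall eta {best[0]:.1%}")
```

With real curves the optimum also depends on what the 2.8 V regulators draw from the intermediate rail, so it's worth including them in `P_LOAD`.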
 

To rigorously calculate the efficiency, the switching loss, conduction loss, and quiescent loss should each be formulated separately, and together they include over ten contributors. Intuitively, the average current drawn from the battery is D times the load current (D = Vout/Vin), so the input-side conduction dissipation actually falls at higher Vin; however, the switching losses scale with the square of Vin, so the total dissipation is larger for higher inputs.
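To make the two competing terms concrete, here is a rough tally for a synchronous buck at Vout = 1.8 V. The Rds(on), lumped switch-node capacitance, and switching frequency below are illustrative guesses, not values for any particular part:

```python
# Rough per-term tally for a synchronous buck at Vout = 1.8 V.  RDSON, COSS
# and FSW are illustrative assumptions, not datasheet values.
VOUT, IOUT, FSW = 1.8, 0.5, 1.0e6
RDSON, COSS = 0.1, 100e-12

def losses(vin):
    d = VOUT / vin                  # duty cycle
    i_in = d * IOUT                 # average battery current (falls with Vin)
    p_cond = IOUT**2 * RDSON        # switch conduction loss (load-dependent)
    p_sw = COSS * vin**2 * FSW      # CV^2f switching loss (grows with Vin^2)
    return i_in, p_cond, p_sw

for vin in (2.7, 5.0):
    i_in, p_cond, p_sw = losses(vin)
    print(f"Vin={vin} V: Iin={i_in:.3f} A  Pcond={p_cond*1e3:.1f} mW  "
          f"Psw={p_sw*1e3:.2f} mW")
```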

In any case, your conclusion is right, based on my experience.
 

I had always thought that at higher input voltages the duty cycle is smaller, hence better efficiency, since the converter is 'sleeping' more.
Figure 2 describes continuous mode. A synchronous buck converter is not sleeping: the output is switching with an almost constant duty cycle set by the voltage ratio. The inductor flux and ripple current are independent of output current but increase with input voltage. Besides the internal chip quiescent current and gate-driver losses, transistor switching losses and core losses cause a constant, input-voltage-dependent amount of power dissipation.
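The ripple-current point follows directly from the standard fixed-frequency relation dI = Vout·(1 − Vout/Vin)/(L·f), which has no load-current term. A quick sketch, with L and f as assumed values (not from the TPS62590 datasheet):

```python
# Peak-to-peak inductor ripple in a fixed-frequency buck:
# dI = Vout * (1 - Vout/Vin) / (L * f) -- independent of load current.
# L and F are assumed values.
L, F, VOUT = 2.2e-6, 1.0e6, 1.8

def ripple(vin):
    return VOUT * (1 - VOUT / vin) / (L * F)

for vin in (2.7, 3.6, 5.0):
    print(f"Vin={vin} V -> ripple {ripple(vin):.3f} A pk-pk")
```

Higher Vin means a larger (1 − D) term, so more ripple and therefore more core and RMS conduction loss, regardless of load.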
 

Hi
Thanks for the replies. It seems that, much like LDOs, most buck regulators are more efficient at lower Vin due to the internal NFET/PFET losses and inductor losses.

Another question: for a single-cell Li-ion design operating from 3 to 4.2 V, with a constant-power load that requires a fixed 3.6 V input, would it make a big difference whether I use a boost-only regulator or a buck-boost, both configured for a fixed 3.6 V output?

For the boost regulator, as long as B+ > 3.6 V its output stage will be in 100% duty-cycle pass-through mode; when B+ ≤ 3.6 V it will start boosting.
The reason for this is that I don't want to lose the ~10% of buck-boost regulator efficiency during the buck stage.

Ideas? Concerns?
 

The steady-state input-output characteristic of a boost shows that the PFET stays turned on continuously once the input reaches 3.6 V, and that condition persists for any input above 3.6 V. For a reasonable load and Rdson_P, the output will then be higher than 3.6 V, which will wreck the design.
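In other words, in 100%-duty pass-through the output simply tracks the input minus the ohmic drops, so it is unregulated above the target. A minimal sketch, assuming illustrative switch and inductor resistances:

```python
# In 100%-duty "pass-through" a boost's output tracks the input minus ohmic
# drops: Vout = Vin - Iload * (Rdson_P + DCR).  Resistances are assumptions.
RDSON_P, DCR = 0.05, 0.03

def passthru_vout(vin, iload):
    return vin - iload * (RDSON_P + DCR)

for vin in (3.6, 4.2):
    print(f"Vin={vin} V -> Vout={passthru_vout(vin, 0.5):.2f} V")
```

At a full cell (4.2 V) the "3.6 V" rail sits near 4.16 V, well above what the constant-power load was promised.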

If you want the converter to fall back into a linear regulator when Vin is higher, that costs considerable effort and still yields lower efficiency than a PWM buck, which pushes you back to the buck-boost topology.
 

You have substantial CV^2f losses in the power train, which higher VIN increases quadratically. This is offset to some extent by lower output-switch resistance and reduced input-current I*R losses thanks to the better gate-drive levels.
 

