
Do you think I can make a smaller inductor than the commercial manufacturers?


David_

Hello.

I have a problem with the inconvenient sizes of inductors that can pass 40A. I am designing a buck-boost converter and size is one of the most important concerns. My calculations indicate that I need an inductor of at least 6.8µH, and for other reasons I really don't want to go above the 25kHz that my converter is planned to switch at, not if it is avoidable.

I looked at Mouser for inductors and found that they are larger than I had hoped, especially since it looks like I am going to have to use two smaller inductors in series to reach the minimum 6.8µH, since the inductors that are large enough (henry-wise) have current ratings of... 40A, which is no good for passing 40A, no?

So to get a higher rated current I have to choose, for example, 3.3µH or 4.7µH, and I thought I should ask here whether you think it could be possible for me to construct a more space-saving inductor myself, or are the commercial components already as small as they can be?

Furthermore, in every example design I have looked at they calculate the inductor value, and if we pretend that the minimum came out as 10µH then they choose something like 22µH or even 47µH. I know they are deliberately choosing a much larger inductor, but that behaviour makes me ask: is there any likely problem in using the calculated minimum inductor value for a buck or boost converter?

Regards

- - - Updated - - -

If you look at this, it has the required µH but not the required current, while this has the required current rating but not the required µH. But two of those in series would almost make the value, assuming the tolerances aren't negative...

There are others that are higher in µH but lower in current rating, such as this, which would be rather expensive to use two of in series.
 

Furthermore, in every example design I have looked at they calculate the inductor value, and if we pretend that the minimum came out as 10µH then they choose something like 22µH or even 47µH. I know they are deliberately choosing a much larger inductor, but that behaviour makes me ask: is there any likely problem in using the calculated minimum inductor value for a buck or boost converter?
The word "minimum" answers you. If you use the minimum, then you real inductor may have some less inductance than expected and hence your circuit is not working as expected. They are using some safety margins.

If you look at this, it has the required µH but not the required current, while this has the required current rating but not the required µH. But two of those in series would almost make the value
Even if they make the value, and with positive tolerance, they are in series and hence both must be rated for the full current.


assuming the tolerances aren't negative...
You cannot assume that. You have to be prepared for the worst case. If the worst case does not suit your need, then do not use it.
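As a quick illustration of that worst-case point, here is a minimal sketch of what two nominally 3.3µH parts in series can end up as; the ±20% tolerance is an assumed figure for illustration, so substitute the real datasheet value:

```python
# Worst-case inductance of two parts in series, assuming a hypothetical
# +/-20% tolerance (check the actual part's datasheet).
L_nominal = 3.3e-6          # H, per part
tolerance = 0.20            # 20%, assumed for illustration

L_series_min = 2 * L_nominal * (1 - tolerance)   # both parts at the low end
L_series_max = 2 * L_nominal * (1 + tolerance)   # both parts at the high end

print(f"series worst case: {L_series_min*1e6:.2f} uH")   # 5.28 uH, below a 6.8 uH minimum
print(f"series best case:  {L_series_max*1e6:.2f} uH")   # 7.92 uH
```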
 

Just be aware that many of these commercial "inductors" are designed to be used only as high frequency ripple reduction chokes, where they have at most hundreds of millivolts of noise voltage across them.

Attempting to use something like that in a buck-boost converter will very likely either saturate or burn up the core, or destroy the switching devices. It's not the current, it is the voltage swing across the winding you should worry about.

To understand why, try to think what happens if you connect a mains transformer with a 110 volt primary across 600 volts ac.
It will hugely saturate and probably go bang.
Likewise, a small dc choke will saturate if a high ac voltage is impressed across it, especially if it has absolutely minimal turns and a small size.

There is more to this than selecting on the basis of just uH and amps.
You also need to think about voltage and frequency.
And sadly, it may require something much larger than you are really hoping for.
 

Warpspeed, it's current and only current that saturates an inductor. Perhaps you mean to emphasize that this means the peak of the ripple current rather than the average load current.

The inductor needs to be specified for both the average current and must not saturate at the peak ripple current that it's going to see. Core losses or an absolute voltage limit based on insulation might factor in too. If it meets all these specs you can use it.

I've recently been specifying parts in a very similar space and settled on Coilcraft. The AGP series comes close to what you want but might just miss it:
https://www.coilcraft.com/dynamictables.cfm?product_group=High+Current+TH+Flat+Wire+Inductors

The larger version of these may come close to working in series. These are particularly dense in terms of volume:
https://www.coilcraft.com/dynamicta...=Molded High Current, Low DCR Power Inductors

I really recommend Coilcraft because of their excellent website, and as far as I can tell they're competitive in terms of size/performance, though prices may be a bit higher.


To your original question: there is no magic in inductor design. It's copper wrapped around a core. Standard manufacturers face a lot of competition for packing these things into the smallest spaces, and they have tools and machines for manufacturing them that you're not likely to compete with. The main way that you might is if your application is particularly special, but I don't think it is in this case.

Having spent quite a lot of time also trying to save space, I found that the market is generally competitive, meaning that best-in-class inductors have a size that correlates well with the energy they store. For example, if you do find a single inductor that meets 6.8µH at 40A, you'll find that it's similar in size to two 3.3µH parts rated for 40A, or two 14µH parts rated for 20A (used in parallel).

If size is really a major concern you have no choice but to increase the switching frequency.

Finally, 25kHz with 6.8µH does sound small. What's the input voltage, and have you actually modeled the ripple current correctly and weighed the consequences of higher ripple?
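For reference, a minimal sketch of that ripple-current check in buck mode, using the 8.4V battery voltage mentioned later in the thread and a purely hypothetical 4V output into the heating element (swap in the real numbers):

```python
# Buck-mode inductor ripple and peak current, ignoring losses.
# V_in comes from later in the thread; V_out is a hypothetical example value.
V_in  = 8.4       # V, full battery
V_out = 4.0       # V, hypothetical output
I_out = 40.0      # A, load current
L     = 6.8e-6    # H

for f_sw in (25e3, 50e3):                      # Hz
    D = V_out / V_in                           # ideal buck duty cycle
    dI = V_out * (1 - D) / (L * f_sw)          # peak-to-peak ripple current
    I_peak = I_out + dI / 2                    # the inductor must not saturate here
    print(f"{f_sw/1e3:.0f} kHz: ripple = {dI:.1f} A p-p, peak = {I_peak:.1f} A")
```

With these assumed numbers the ripple is roughly 12A peak-to-peak at 25kHz and about half that at 50kHz, which is exactly the trade-off being discussed.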
 
Well, it's right in the data sheet and in the inductance vs. current curves.

To be clear voltage leads to current, but it's a specific value of current that saturates a given inductor.

Same with a transformer which saturates when the magnetizing current reaches a specific value.
 

I am not going to waste my time telling you there is far more to designing magnetics for switching power supplies than just the inductance and the dc rated current.
 

Same with a transformer which saturates when the magnetizing current reaches a specific value.

The only thing that can saturate a transformer is voltage. As the primary voltage increases the flux increases. At some point the flux can no longer increase as the voltage increases and the primary stops acting like an inductor and starts acting like a wire and the current rises quickly. With ferrite this is a fairly sharp knee.
 

The only thing that can saturate a transformer is voltage.
More specifically, it's the flux, or the V·s integral, which directly relates to the core induction in mT.

But for a given core and number of turns, the flux also has a fixed relation to the magnetizing current. Insofar as that goes, it's quite right to look at the instantaneous inductor current to decide about saturation.
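For reference, a sketch of the relations being discussed, for a linear (non-saturated) core with N turns, effective area A_e and inductance L, ignoring winding resistance:

$$\Phi = \frac{1}{N}\int v\,dt = \frac{L\,i}{N}, \qquad B = \frac{\Phi}{A_e} = \frac{L\,i}{N A_e}$$

so the volt-second integral and the instantaneous current describe the same flux, and either can be used to judge saturation as long as the core is still linear.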
there is far more to designing magnetics for switching power supplies than just the inductance and the dc rated current
Particularly frequency and magnitude of AC magnetization.

Frequency and AC magnitude, along with acceptable power dissipation, must be considered when deciding on an optimal inductor design. To answer the original question, these must be specified. As a first guess, you won't achieve a considerable size reduction compared to industry-standard storage inductors if your application parameters are in the mainstream.
 

Yes, the ac magnetization is one factor: core cross-section, turns, and volt-microseconds.

Also the current density in the wire, taking into account skin and proximity effects. The skin depth limit at 25kHz is around 0.4mm, and to carry 40 amps it will probably require a wide copper foil. It's very different from 40 amps dc.
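That 0.4mm figure matches the skin depth of copper at 25kHz; a minimal sketch of the calculation, assuming standard room-temperature copper resistivity:

```python
import math

# Skin depth: delta = sqrt(rho / (pi * f * mu0 * mu_r))
rho  = 1.68e-8            # ohm*m, resistivity of copper at room temperature
mu0  = 4 * math.pi * 1e-7 # H/m
mu_r = 1.0                # copper is non-magnetic

for f in (25e3, 50e3, 100e3):                        # Hz
    delta = math.sqrt(rho / (math.pi * f * mu0 * mu_r))
    print(f"{f/1e3:.0f} kHz: skin depth = {delta*1e3:.2f} mm")
# ~0.41 mm at 25 kHz, so conductor thickness much beyond that is poorly
# utilised for the AC component of the current.
```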

And then we have eddy current and hysteresis losses in the core, and the resulting temperature rise, which incidentally lowers the threshold of saturation.

Then with a buck-boost we need to do all of this as well as come up with a final inductance, which inevitably requires an air gap, especially when the power level is as high as 40 amps suggests it will be.

Plenty to think about. And a general purpose dc filter choke out of a catalogue is going to fall way short of what is required, even if size and temperature rise were not a consideration.
 

I overlooked that 25 kHz had been specified. So the short answer is, you won't end up with much smaller storage inductors than the catalog types available from Coilcraft and other major vendors. High-flux powder cores or amorphous metal cut tape cores with litz wire windings may achieve smaller inductors - but they also multiply the cost. Redesigning the converter for a higher switching frequency is the way to go if size matters.
 

The only thing that can saturate a transformer is voltage. As the primary voltage increases the flux increases. At some point the flux can no longer increase as the voltage increases and the primary stops acting like an inductor and starts acting like a wire and the current rises quickly. With ferrite this is a fairly sharp knee.

To be clear:
Volt-seconds -> current -> flux

All three are closely related, but in particular, for a given design, current and flux are locked together such that you can't have current without flux, no matter how you get there.

On the other hand, you can apply infinite volt-seconds in the form of a small DC bias to an inductor, but if the winding resistance prevents the current from exceeding its rating it will never saturate. Hence it's more direct to speak of current (magnetizing current for a transformer) when considering saturation.
 

My converter is to power a handheld device (the personal vaporizer that I have discussed other aspects of in other threads recently), so one of the main reasons for not increasing the frequency has been to keep the temperature low, since a lower frequency means greater efficiency. But after reading the answers to my opening post I feel I really have no choice but to increase the switching frequency.

It felt weird from the beginning when I came to aim for as low a switching frequency as possible. I did that because of assumptions, or rather the lack of an explanation for why commercial products that are very similar, and identical in function, to the device this 40A non-inverting buck-boost converter is meant for used circuits that switched at something like 33Hz at first, with later devices increasing that to around 800Hz. The only conclusion I was able to form was that it had to do with efficiency and/or managing the heat (this is itself built on the assumption that these commercial devices had a good reason for their switching frequencies).

I have forgotten to make measurements on the devices that I own to find out what switching frequency they use; it will be a little while before I can do that since I won't be able to set up my new scope for a few weeks.

So my wish not to increase the frequency is really only motivated by wanting to ensure that my MCU has plenty of time to make decisions, and by wanting to keep the duty cycle resolution high.

An introductory document about digital DC-DC converter control states that the duty cycle resolution is the most important parameter determining the performance of the digital controller, and since a higher switching frequency means a lower duty cycle resolution, the output voltage steps would be larger. But I think I will try this design out with 50kHz instead of 25kHz.

I am in the process of making lots of iterative calculations to find out what a 25kHz vs 50kHz switching frequency will mean as far as output voltage steps go.

I will return when I have worked out what that means as far as inductor values go, but I believe someone asked me if I had taken the ripple current into account. I don't remember which values I used, but it was an equation from a TI design guide; wait, yes, the ripple current was included.
I will have to look up the equations again now in any case, so I'll know soon enough.

I have one thought though: seeing as the output of the buck-boost converter will only be used to supply power to a heating element, I would think the quality of the output isn't very important, but I might be wrong. I have also thought it might matter because I will need to measure the output voltage and current, but I haven't gotten to test this design yet, so I still don't know about that. I foresee that I have many more mistakes to make before this is anywhere close to done... :)

I'll have to re-read the answers tomorrow because I have so many different things in my thoughts that I am not sure if I have responded appropriately.

- - - Updated - - -

Come to think of it, changing from 25 to 50kHz ought to double the voltage step size.
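Checking that arithmetic, assuming the timer counts at the 32MHz core clock: 32MHz / 25kHz = 1280 counts per switching period, while 32MHz / 50kHz = 640 counts. With the output step roughly VIN/counts, halving the count indeed doubles the voltage step size.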
 

Yes, based on your description I can't see why you need a whole lot of output timing resolution.

And note that you can have more effective output steps than pwm steps if you have additional bits in your feedback path. The result will have some "limit cycling" - noise where the output switches between two adjacent pwm counts (theoretically), but that may very well be tolerable and/or effectively filtered depending on the set-up.
 
I don't think that with good modern components, 25kHz ought to be your design limit. Only an IGBT would need to be run so slow. 25kHz with any subharmonic activity is going to be annoying to the user.
 
Apparently the switcher runs the high current with a voltage in the 100 mV range, a strong indication to reconsider the whole concept.
 

The power is supplied by two parallel 18650 Li-Ion batteries, which have been tested to confirm that the 25A rating is correct.

I will have to get two inductors and use them in series, but the question of frequency is not yet settled.
I am forced to use two 6µH @ 50A (8 USD) inductors; 16 USD is more than I had hoped to spend on the inductor, but there's no point in making this more difficult than it already is.

So what frequency do I choose... Since I am limited to 12µH for the inductor, perhaps I should look at the effects such an inductance will have on the output voltage/current for different frequencies, but as has been suggested, the output quality requirement for this application is probably quite relaxed.
So maybe I should choose the frequency based upon what sort of control it results in. All Atmel XMEGA µCs running with a supply voltage of 3.3V can operate with the internal 32MHz clock, and that is the highest clock allowed according to the datasheet, but I have talked to people who claim they have been running their XMEGA without any problems whatsoever at as high as 72MHz. So it isn't a given that 32MHz can't be exceeded, but I feel I really want to use an external crystal oscillator to realize maybe something like 50MHz, in order to increase the switching frequency while still maintaining the duty-cycle resolution.

Though I actually have no insight into what sort of duty cycle resolution results in what sort of control, since that is something that someone like me can only know after having chosen a clock and switching frequency and seeing what happens. But I would really like to ask for advice about this decision.

I am reading a document named "A Practical Introduction to Digital Power Supply Control" by Laszlo Balogh, in which it says that the duty cycle resolution is the most important parameter for this situation.

The number of clock cycles per switching period is given by:
cycles per switching period = (1/FSW) / (1/FCPU) = FCPU / FSW

But here it gets messy in my head, in the document it says this:
"the time difference between two neighbouring duty ratio values is constant and it equals 1/fclk. The same duty cycle step will result different output voltage changes depending on the nominal operating duty cycle of the converter. In other words, increasing the conduction time of the power supply's main switch by one clock period (1/fclk) at narrow operating duty ratio will raise the converter's output voltage more than the same change applied at wider nominal duty cycles."

Which is all very clear if you have a nominal input voltage and a nominal output voltage (or nominal duty cycle), but both my input voltage and my output voltage vary, which makes it that much harder to think about.

In any case, if I want a switching frequency of 100kHz and I stay with the 32MHz clock, then I have 320 clock cycles per switching period, but if I choose to get a crystal and implement a 50MHz clock then I instead get 500 cycles per period. I can't at the moment grasp the implications of those numbers though.
 

Well, taking the 500 number that means you can have 500 possible output duty cycle values which means 500 possible output voltages for a given input voltage.

I'd make a spreadsheet that calculates all this in a discrete fashion, i.e. a column with 500 values for an on-time of 1-500, and then calculate the output voltage each duty cycle would result in. You could have multiple output columns for your different possible input voltages. This approach should make things fairly clear, including the non-linear voltage steps in the boost topology.
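A minimal script version of that spreadsheet idea, assuming an ideal lossless converter; the 500-count resolution and the 8.4V/6.4V battery voltages are taken from elsewhere in the thread, and the buck-boost column uses the ideal single-duty relation VOUT = VIN·D/(1-D):

```python
# Discrete duty-cycle sweep: ideal output voltage for each timer count.
N_COUNTS = 500                      # timer counts per switching period (e.g. 50 MHz / 100 kHz)
V_INPUTS = (8.4, 6.4)               # V, full and drained battery

for V_in in V_INPUTS:
    print(f"\nV_in = {V_in} V")
    for n in range(1, N_COUNTS):
        D = n / N_COUNTS
        v_buck = V_in * D                       # buck: output is linear in D
        v_bb   = V_in * D / (1 - D)             # single-duty buck-boost: non-linear in D
        if n % 100 == 0:                        # print a few sample rows
            print(f"  n={n:3d}  D={D:.2f}  buck={v_buck:5.2f} V  buck-boost={v_bb:6.2f} V")
```

Differencing adjacent rows then gives the effective output-voltage step at each operating point, which is what the resolution question really comes down to.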


In my experience 8 bits or so (256) of output resolution is pretty good. I did an inverter that only had roughly that and it was able, with the help of a very precise outer feedback loop, to hit better than 0.1% accuracy (1/1000). Though an AC waveform is a bit different than DC here.
 
I am really not sure what I should do, should I open a new thread or should I continue the conversation here?

In any case, I have finally come around to looking more closely at the output values for different clock speeds vs. switching frequencies.

For some reason I thought that the voltage step size would change over the timer values, but apparently the step size stays the same, and it is only relative to the individual voltage levels that the step size can be said to change.
I should probably have been able to work that out without making the calculations, but I would never have managed it given how fuzzy everything in my mind is.

But I have to make a choice: if I let the clock speed be the one provided by my µC, which is 32MHz, then the voltage step size will start out as 0.0263V/timer tick and it will end up as 0.02V/timer tick (I can't think of a better word for it right now).

Or I use an external crystal to drive the µC at 42MHz, and then it starts at 0.02V/timer tick and ends up at 0.0152V/timer tick.

I am not entirely sure about this, but I think the XMEGA has an internal function to raise the clock frequency to whatever value you want, so 42MHz could be possible. It has a calibration feature called DFLL that, if my memory serves me right, can be used to generate whatever clock frequency you want, although it is meant to calibrate out drift from the clock. I checked the datasheet quickly and, although I don't know exactly how, I am now fairly sure that it supports generating higher clock frequencies somehow.

But in any case, I need to find a way of relating to the output voltage as the voltage step size changes from 0.02V per step when the batteries are full at 8.4V to 0.0152V per step when the batteries are drained at 6.4V.
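Those numbers follow directly from step = VIN · FSW / FCLK (assuming the 100kHz switching frequency mentioned earlier): 8.4V · 100kHz / 32MHz ≈ 0.0263V and 6.4V · 100kHz / 32MHz = 0.020V per count, while with a 42MHz clock they become 0.020V and ≈ 0.0152V.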

I am having difficulties visualising how this would behave and how best to manage it; do you have any suggestions?

- - - Updated - - -

But I can't trust those step values to show up in the real world, can I?
I mean, the efficiency isn't considered in the calculation, which was:

VOUT = VIN * n * (FSW / FCLK)

where n is the timer value.

When calculating the duty cycle for a buck converter, some texts say it is:

D = VOUT / VIN

While others say that the following is more realistic:

D = VOUT / (VIN * η)

Which can make quite a big difference if the efficiency is as low as 85%.

And that makes me think that the efficiency might have a role to play in that first equation I used to calculate the output voltage step size?
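A minimal sketch of what those two formulas imply for the step size, under the simple model where VOUT = η · VIN · D (the η = 0.85 figure is the example from the post above, and the other values are the ones used earlier in the thread):

```python
# Output-voltage step per timer count for a buck stage, with and without
# the simple efficiency correction D = V_out / (V_in * eta).
V_in  = 8.4      # V, full battery
f_sw  = 100e3    # Hz, switching frequency
f_clk = 32e6     # Hz, timer clock
eta   = 0.85     # efficiency figure used as an example above

step_ideal = V_in * f_sw / f_clk          # V per count, lossless model
step_eta   = eta * V_in * f_sw / f_clk    # V per count if V_out = eta * V_in * D

print(f"ideal:          {step_ideal*1e3:.1f} mV per count")   # ~26.3 mV
print(f"with eta=0.85:  {step_eta*1e3:.1f} mV per count")     # ~22.3 mV
```

So under that model the efficiency scales every step by η, but it doesn't change the number of available steps.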
 
