
LED load simulator for stability testing


treez

Guest
Hello,
We wish to test our 3-channel LED driver (all three LED drivers are on the same PCB).
Each channel drives 165W of LEDs (46V, 3.52A)
Please could you explain whether the V/I characteristic of the 63110A LED load simulator is close enough to that of the LEDs so that we can use it for stability testing of the LED driver?

63110A LED load simulator:
https://www.chromausa.com/pdf/63110a-instruction-demo-ver-4.pdf
 

Hi,

what does the LED datasheet say?

Klaus
 
Here is the LED datasheet, attached.
 

Attachments

  • LG 395nm diodes _1090X1090_SPECIFICATIONS_V2 0 _LEUV-V512A6_395mm.pdf

Hi,

the LEDs are 3.5V and 350mA rated.
The dynamic impedance at 350mA is about (3.6V -3.4V) / (0.4A -0.2A)= 0.2V / 0.2A = 1 Ohm.
(My estimation: it should be able to simulate the LEDs' characteristic.)
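A quick sketch of that calculation in Python, using the two example points quoted above (3.4 V at 0.2 A and 3.6 V at 0.4 A); swap in whatever points you read off the LEUV-V512A6 forward-voltage curve:

```python
# Dynamic (small-signal) resistance of one LED from two nearby points on the
# forward I-V curve around the operating current.
def dynamic_resistance(v1, i1, v2, i2):
    """Rd = dV/dI between two nearby points on the forward I-V curve."""
    return (v2 - v1) / (i2 - i1)

rd = dynamic_resistance(3.4, 0.2, 3.6, 0.4)
print(f"Rd per LED ~ {rd:.2f} ohm")  # ~1 ohm, as estimated above
```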

In the load simulator instruction manual there are no technical specifications about the ranges of
voltage, current and resistance.
Look for the datasheet, or contact Chroma USA.

Klaus
 
Thanks, so you mean we can dial up the dynamic resistance we want on the equipment? Is that what the "Rd" figure was in the manual?
Also, the manual says the advantage of using an LED simulator is that it is more convenient to start up into, but it says nothing about confirming stability of the LED driver. Why don't they mention the stability checking you get over just using a resistive load of the same power level?
 

Why do you need a simulator, when you have good LEDs to use as a std load?

Do you need production test jigs for nominal and worst-case LEDs with an active load?

  • The main reason it won't simulate your load well is the Shockley (diode) temperature effect together with your heatsink, and since you haven't yet measured the junction temp using Vf,
  • any simulation will be wrong unless physical/electrical thermal resistance and tempco constants are added (especially critical when driving over the nominal rating).
  • You can estimate the tolerance on ESR from the 3.2~3.8V range.
  • This range is only at a fixed temp of 25°C I believe, unlike Cree, which uses the more useful value of 85°C for white LEDs.
  • It means the datasheet tests it with a ~1ms pulse so self-heating does not occur.

The ESR or Rs of this part is closer to 630 mΩ nominal @350mA, but can increase to 1.7Ω worst case in the top Vf bin,
from my curve fit as shown in the graph, with Vf = 3.8V @350mA @25°C.
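As a rough sketch of how such a curve fit can be done (the (I, V) sample points below are illustrative placeholders only, not values from the LG datasheet; read the real points off the LEUV-V512A6 forward-current vs forward-voltage graph):

```python
# Estimate the effective series resistance ("ESR"/Rd) of one LED by a
# straight-line fit to the upper, roughly linear part of the datasheet V-I curve.
import numpy as np

i_pts = np.array([0.20, 0.30, 0.35, 0.40, 0.50])  # A (illustrative placeholders)
v_pts = np.array([3.38, 3.45, 3.48, 3.51, 3.57])  # V (illustrative placeholders)

esr, v_knee = np.polyfit(i_pts, v_pts, 1)  # slope = dV/dI ~ ESR, intercept ~ knee voltage
print(f"ESR ~ {esr*1000:.0f} mohm, extrapolated knee voltage ~ {v_knee:.2f} V")
```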


With <48V this means you have 13 typ or 14 max in a string.

I graphed it for you

(Attached graph: ESR.jpg)
The above is for LG UV LEDs & the same can be done for RGB.

Both ESR & Vf are multiplied by the number of devices in series and divided by the number of strings in parallel.


Also applicable

Thus, assuming 13 LEDs in a string at 0.55 Ohm each @440mA, the string ESR is 7.15 Ohm.
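A minimal sketch of that series/parallel scaling, using the per-LED numbers assumed just above:

```python
# Series/parallel scaling: per-LED Vf and ESR multiply by the LEDs in series
# and divide by the number of parallel strings.
def array_model(vf_led, esr_led, n_series, n_parallel=1):
    """Return (array Vf, array ESR) for n_series LEDs per string and n_parallel strings."""
    return vf_led * n_series, esr_led * n_series / n_parallel

vf, esr = array_model(3.8, 0.55, 13)                   # single 13-LED string at 440 mA
print(f"13s1p: Vf ~ {vf:.1f} V, ESR ~ {esr:.2f} ohm")  # ESR ~ 7.15 ohm
```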

UV Vf nom = 3.5V ±0.3 @350mA, ~0.6 Ohm ESR, @25°C

White Vf nom = 3.1V for good LEDs @25°C, and 2.9V nom @85°C

Regarding your question: the simulator works well at a fixed temp, and your design is far from a fixed temp, but a rough estimate could be made to make it work.

Alas it is not 100% accurate, since it does not account for the 200 to 400mV drop per LED expected for a good and a bad design respectively, rising 60 or 120°C from 25°C.
 
Our input voltage to the LED driver is 48V. It is a buck LED driver and has a maximum duty cycle, so as you know we cannot drive LED strings that are at 48V or near to 48V.
With some rigs we drop 1V in the long cable from the 48V PSU to the LED driver, so the input voltage to the LED driver is only really 47V.
The LEDs in each channel are 8 parallel strings of 12 in series, and they sit on a small piece of MCPCB as you'd expect. There are three channels, so there are three pieces of this MCPCB, each with this 8x12 array on it. These three MCPCB pieces are stuck to a large, water-cooled heatsink.
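For reference, here is a rough sketch of what one 12-series x 8-parallel channel looks like to the driver, and therefore what the simulator's forward-voltage and Rd settings (parameter names as discussed above) would have to approximate per channel. The per-LED values are assumptions taken from the discussion, not measured figures:

```python
# Rough per-channel numbers for a 12s8p array, using per-LED values discussed
# above (3.83 V and ~0.6 ohm per LED at 440 mA per string are assumptions).
N_SERIES, N_PARALLEL = 12, 8
I_STRING = 0.44                    # A per string
VF_LED, RD_LED = 3.83, 0.6         # V and ohm per LED (assumed)

v_channel = VF_LED * N_SERIES                  # ~46 V
i_channel = I_STRING * N_PARALLEL              # ~3.52 A
rd_channel = RD_LED * N_SERIES / N_PARALLEL    # ~0.9 ohm

print(f"Channel: {v_channel:.1f} V, {i_channel:.2f} A, "
      f"{v_channel*i_channel:.0f} W, Rd ~ {rd_channel:.2f} ohm")
```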

Why do you need a simulator, when you have good LEDs to use as a std load?

Yes, but we need to test with LEDs that are 3.2V in Vf, and we also need to test with LEDs that are 3.9V in Vf (3.9V, not 3.8V, because we drive them at 440mA). It's going to be virtually impossible to be sure of getting LED loads with either of those Vf's. The 3.9V Vf LED load is needed because it means the LED driver is at max duty cycle, which is the worst case for stability. In other words, if the array containing 3.9V Vf LEDs is stable, then an array containing LEDs of lesser Vf will also be stable (I'm speaking about the gain and phase margin of the LED driver here).

We also need to test for subharmonic oscillation of the LED driver, and we need to do this with the minimum-Vf LED array and the max-Vf LED array. The driver chip we are using has internal slope compensation but the datasheet does not say what it is, and the semiconductor company won't tell us what it is.
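Since the internal ramp is unknown, one thing that can be done on paper is to work out how much slope compensation the worst-case operating point would need, assuming the driver is a fixed-frequency peak-current-mode buck, and judge how much margin the internal ramp would have to provide. A minimal sketch, with placeholder values for the inductor (not given in the thread):

```python
# Classical peak-current-mode subharmonic check for a buck stage.
# L is a placeholder assumption; use the real driver inductor value.
import math

VIN = 47.0           # V at the driver input after cable drop (from the post above)
VOUT = 46.0          # V, channel voltage from the first post; the max-Vf case pushes this higher
L = 33e-6            # H, inductor value (placeholder)

d = VOUT / VIN                     # ideal CCM buck duty cycle
sn = (VIN - VOUT) / L              # inductor current up-slope, A/s
sf = VOUT / L                      # down-slope, A/s

se_classical = 0.5 * (sf - sn)     # minimum compensating ramp to stop subharmonics (D > 0.5)
mc_q1 = (0.5 + 1 / math.pi) / (1 - d)   # Ridley criterion: Q = 1 at half the switching frequency
se_q1 = (mc_q1 - 1) * sn

print(f"D = {d:.3f}")
print(f"Se > {se_classical/1e6:.2f} A/us for bare stability, "
      f"~{se_q1/1e6:.2f} A/us for Q = 1")
```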

SunnySkyGuy, I believe what you term "ESR" is what I term "Rd" (dynamic resistance).

Your question in regards to the simulator works well at a fixed temp, your design is far from a fixed temp., but a rough estimate could be made to make it work.

OK, thanks. We appreciate the simulator is less ideal than the LEDs themselves, but far, far better than a resistive load. We will use LEDs to test the driver, but they will probably end up being 3.5-3.6V LEDs.

This range is only at a fixed temp 25C I believe, unlike Cree which uses a more useful value of 85'C for White LEDs.


We think it's disappointing that the datasheet doesn't say how the LEDs were mounted for their Vf measurement; it just says, as you say, that it was Ta = 25°C. I mean, if they had them mounted on a small heatsink then the Vf would be smaller than if they had mounted them on a large water-cooled heatsink for the Vf test, because the LED junction temperature would be higher on the small heatsink. Basically, as you describe, they don't say what the junction temperature was, which is disappointing.

It means datasheet tests it with a ~1ms pulse so self-heating does not occur
Oops, sorry, so you are saying that they did the Vf test by just pulsing them at a very low duty cycle with 350mA, so that the junction temperature was 25°C when they measured Vf?
This, as you allude, sounds like a very non-useful way to do it.

The main reason it wont simulate it due to the Shockley effect of your heat sink and since you haven't measure the junction temp yet using Vf,

We measure junction temperature by sticking a thermocouple to the MCPCB as close to the LEDs as possible, taking that as the case temperature, then using the quoted junction-to-case thermal resistance figure in the datasheet to get the junction temperature. We have heard of the method of using Vf to measure junction temp, though it sounds rather time consuming to do.
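A minimal sketch of that thermocouple method (the thermal resistance and the optical-power fraction below are placeholders; the junction-to-case figure comes from the LED datasheet):

```python
# Thermocouple method: Tj = Tcase + Rth(j-c) * Pheat per LED.
RTH_JC = 8.0             # C/W, junction-to-case thermal resistance (placeholder)
VF, IF = 3.6, 0.44       # V, A at the operating point
OPTICAL_FRACTION = 0.0   # raise to ~0.2-0.4 if radiated light is subtracted from the heat

t_case = 55.0                               # C, thermocouple reading on the MCPCB
p_heat = VF * IF * (1 - OPTICAL_FRACTION)   # W dissipated as heat per LED
t_junction = t_case + RTH_JC * p_heat
print(f"Tj ~ {t_junction:.1f} C")
```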

- - - Updated - - -

It means datasheet tests it with a ~1ms pulse so self-heating does not occur
SunnySkyGuy, do you mean that the junction temperature was 25°C for their forward voltage measurements?
 

Re: LED stability testing and DVT

I gave you all the info missing from the datasheet to make a perfect worst-case test jig: graphical computations for Rd or ESR(If). Variations of Rd were computed from their Vf range @0.35A; the spread reduces @0.44A.

You can calibrate the LEDs in an oven at a low If (10mA) vs temperature, but about -200mV per 60°C rise is expected per LED, x12 for the string. Then pulse the drive from 440 down to 10mA to read the temperature, bypassing the current with a 430mA pulsed shunt to keep the driver's CC loop stable during the pulse.
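A minimal sketch of that pulsed-Vf readout, assuming the oven calibration gives roughly the -200mV per 60°C per LED quoted above (about -3.3mV/°C per LED); the calibration Vf value is illustrative:

```python
# Pulsed-Vf junction temperature readout: calibrate string Vf at a 10 mA sense
# current vs oven temperature, then infer Tj from the Vf seen when the drive is
# briefly shunted down to 10 mA.
N_SERIES = 12
TEMPCO_PER_LED = -0.200 / 60.0   # V/C at 10 mA, from the oven calibration
VF_STRING_CAL = 12 * 3.05        # V, string Vf at 10 mA at the calibration temp (illustrative)
T_CAL = 25.0                     # C, calibration temperature

def junction_temp(vf_string_meas):
    """Infer the average junction temperature from the 10 mA string Vf."""
    dvf = vf_string_meas - VF_STRING_CAL
    return T_CAL + dvf / (TEMPCO_PER_LED * N_SERIES)

print(f"Tj ~ {junction_temp(34.2):.0f} C")   # e.g. string reads 34.2 V at 10 mA -> ~85 C
```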

You can make a gold-standard test jig, then duplicate "silver" standards to any Rd @350mA by adding approx 1 Ohm to make up the difference, to get 3.8V at 25°C or 3.6V at 85°C, at 0.35A or 0.42A worst case expected. For added margin, consider a No-Go jig with more.

You might be able to tune 48 to 48.5V for more headroom, but still not get 600W below 100Vac

If you characterize the thermal response of all components, you can design an automated stability/VI-regulation tester that runs in <1 second, versus a long wait for heat rise with a No-Go tester at 20-25°C.

Best practice for a test engineer is to follow worst-case environment specs, test with fault injection, and validate every parameter in the Design Spec, which you write. This is called DVT: one page per test, e.g. power-on surge test, AC sag test, and look for stress points to monitor.

I did these DVTs for many HDDs in the 80's, as was also done by OEMs in California and Japan; that is how you learn the weakest links of a design and the margin to failure, using HALT/HASS test methods, e.g. injecting vibration, low voltage, high ambient, high humidity and high AC input, or power cycling 10k times for a life test. Then we chose the best designs for reliability and performance in our systems.

When I designed a rack for Lucent, I used Lambda PSUs but added a thermistor on the PSU hot spot to regulate the cooling fans, for <$3 BOM cost. You might need this for other reasons, such as to guarantee stability and prevent PSU or PWM driver oscillation. Subharmonic margins can be tested with Bode plots in DVT.
 