
Energy meter accuracy question

Status
Not open for further replies.

King_Pago

Anyone,

Firstly, I'm uncertain whether this post actually fits into this category of the forum, so I apologise if it doesn't. Now to my questions: I've recently built a current-sensing application for an energy meter. It includes a Hall-effect current sensor, an anti-aliasing filter, a level shifter, a microcontroller (dsPIC30F4011) and an LCD. My target measurement error for this application is a 2% deviation (above or below) from the actual current I read on the oscilloscope.

Here's my problem: at my present rating of 7 A RMS, in the range of 4–7 A RMS, the meter behaves as I expect and sometimes even gives a very accurate reading. Below this range the error increases, reaching as much as 10%. If I try to compensate for the low range using proportionality factors in my code (i.e. adjusting multiplication factors), it improves, but as soon as the actual current returns to the 4–7 A RMS range, the earlier error pattern reappears, now in the higher current range.

Here are my questions:

[1] Is this kind of behaviour typical for energy meters?
[2] Specifically, are energy meters designed for a particular range of current in which they're reliable, such that connecting a load whose current falls outside that range diminishes their accuracy?

I've tried looking up literature on this and have come up with nothing. Your responses will be greatly appreciated.
 

Generally, an accuracy rating is given as a percentage of full scale. Thus a 1% error at full scale can translate to a 10% error at 10% of full scale. Perhaps that is what you are seeing.

If you can use correction factors in your code, then adjust the factors according to the current you are reading: little or no correction at high current levels, and more correction the lower the current level is.
 

Is this kind of behaviour typical for energy meters?

No. You can review the datasheets of industry-standard meter ICs, e.g. from Analog Devices. You'll notice that the current dynamic range requirements are quite demanding, reflecting the standards for calibrated energy meters used in electricity billing.

Hall-effect sensors are rather linear, so I guess the problem in your design may simply be DC offset. Energy meters typically use AC current sensors and a high-pass filter in the signal path to eliminate offsets. You should try the same: a first-order digital high-pass after the ADC, with the cut-off frequency chosen so that the phase error doesn't affect the real-power measurement.
 
Hi,

there are many possibilities.

* What metering IC do you use?
* You speak of "range". Does the meter switch between different input current ranges? If so, you may calibrate every single range.
* Some metering ICs need calibration for mains frequency (needed for RMS calculations).
* Some metering ICs need calibration for U-I phase shift.
* Some metering ICs need calibration for DC offset.
* If there is an input range overflow, the RMS value still changes with varying input voltage, but no longer linearly.

My intended measurement error for this application is a 2% deviation (above or below) from the actual current I read on the oscilloscope.
With some scopes the displayed RMS value is horrible, with much more error than a metering IC. Also, some include the DC offset in RMS calculations and some don't.

Please give more information about your circuit: load, frequency and so on.


Klaus
 

chuckey,

Thank you for your response. How would a linearising array work to improve accuracy in a current-sensing application?

- - - Updated - - -

crutschow,

Thank you for your response. I do not understand what you mean by full scale: is this my current rating (stated above as 7 A RMS)?

- - - Updated - - -

FvM,

Thank you for your response. It's very interesting that you speak of DC offsets and mention a filter to remove them, because I've been doing something quite the opposite of what you describe. It's the first time I've heard of this.

With regard to DC offsets: since the dsPIC30F4011 can't accept negative voltages, I introduced a DC offset of 2.4 V so the ADC can sample the signal properly.

However, my procedure for removing it has not been through a digital filter: I first fed the 2.4 V DC level into the ADC and got a count value of 504, so in my code, after sampling the AC signal, I just subtract 504 from each converted sample and use the result for my RMS calculations. Is this wrong? If so, why?
 

However, my procedure for removing it has not been through a digital filter: I first fed the 2.4 V DC level into the ADC and got a count value of 504, so in my code, after sampling the AC signal, I just subtract 504 from each converted sample and use the result for my RMS calculations. Is this wrong? If so, why?

The method isn't wrong in general, but it relies on the offset being constant. A filter would be a way to correct the offset automatically.

I agree with KlausST that you shouldn't blindly trust the reference current measurement from the oscilloscope. At present we don't know much about the properties of the Hall sensor, the analog circuit or the calculation algorithms, so there's no chance to narrow down the problem.
 

When sufficient samples per cycle are taken, the average value can be taken as the zero crossing, unless there is a single-polarity load.

The IEC standards should be similar to the ANSI standards.

[Attachment: e meter.jpg]
 

A good practice when measuring RMS is to set the sampling interval to a rate slightly different from the mains period. The goal is to prevent any spike synchronous with the phase from being repeatedly missed, allowing the divergence to be reduced in the average. For example, for 50 Hz (a 20 ms period) a sampling interval of 19 ms could be used, so that after 380 ms the calculation has swept one full cycle.

Another aspect to consider is the crest factor of the signal being measured and the system's ability to handle it; i.e., you must know the nature of the input signal to judge whether it is a good reference for the comparison above. Ideally, use a purely sinusoidal voltage free of any interference or distortion.
 

It's just a lookup table: a reading of 3.1 kW looks at row 31 and finds 37, which represents 3.7 kW, which is displayed. Write a simple routine to give linear interpolation between the lookup points; e.g. for 3.12 kW, look up rows 31 and 32, find 37 and 39, and the displayed value = 37 + 2/10 × (39 − 37) = 37.4.
Frank
 

In case of troubleshooting, AC measurement will also be useful for other equipment.
The cost of 50 m of copper sized for 4 A and a 1% drop (0.12 V) is excessive. For a 100 m total path length you would need a resistance of 0.12 V / 4 A = 30 mΩ, i.e. 0.3 mΩ/m, which calls for AWG 00.

AWG 00: 0.26 mΩ/m
AWG 16: 13.17 mΩ/m

AWG 18: 20.95 mΩ/m × 50 m × 4 A = 4.2 V drop
 

