
Making a low current ammeter, I need some advice


Plecto

Full Member level 5
Full Member level 5
Joined
Jan 4, 2012
Messages
315
Helped
1
Reputation
2
Reaction score
1
Trophy points
1,298
Visit site
Activity points
4,979
Hi. I'm trying to make an ammeter that can measure at least µA and hopefully hundreds of nA, but I feel that I'm on thin ice. I've chosen an 18-bit ADC called the MCP3421 (**broken link removed**), but because of its sample capacitor I was advised to use an input buffer. I bought a low-input-offset-voltage op-amp, the AD8628 (https://www.analog.com/static/imported-files/data_sheets/AD8628_8629_8630.pdf), in the hope that it will suffice. The datasheet brags about its low drift, bias current and offset voltage, so I hope it was a good choice.

The op-amp has a typical offset voltage of 1µV, and this makes me wonder. Unless I have a very large shunt resistor, 1µV of offset could seriously damage the measurements, couldn't it? Also, the offset voltage is the voltage seen at the input of the op-amp when the input voltage is in reality 0V, so the higher the gain, the higher the output voltage caused by the offset voltage, right? Does this mean that I would ideally want a gain of 1?

I also assume that the board layout is important, but I'm not sure which paths should be as short and thick as possible. I have now designed the board so that the shunt resistor, R3 and the op-amp ground are as close together as possible; there is also a large copper pour going from this point to the negative input of the ADC (about 4.8mm). The issue is the path going from the output of the op-amp to the + input of the ADC; this path is 21mm long (currently 16mil wide). I'm not sure if this will have any consequences, as I can't really see it developing a voltage drop of significance. Here's a schematic of the op-amp and ADC:

[schematic image: 21c5hmx.jpg]


I thought about using the op-amp in a differential configuration, but is this really necessary if all the grounds are connected tightly together? I would really like some advice on this project :)
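A quick numeric sketch of the offset question in the post above (assuming the offset simply adds in series with the shunt voltage, so the equivalent current error is V_os / R_shunt regardless of amplifier gain; the 1 µV figure is the AD8628's typical spec):

```python
# Sketch of the offset-error budget discussed above.
# Assumption: the op-amp's input-referred offset (typ. 1 uV for the AD8628)
# appears in series with the shunt voltage, so the equivalent current error
# is V_os / R_shunt, independent of the amplifier gain (the gain scales the
# signal and the offset equally).

V_OS = 1e-6  # input offset voltage, volts (typical datasheet value)

for r_shunt in (10, 100, 1000):  # candidate shunt resistors, ohms
    i_err = V_OS / r_shunt       # current that would produce V_os across the shunt
    print(f"R_shunt = {r_shunt:5d} ohm -> offset looks like {i_err * 1e9:8.1f} nA")
```

So with a 10Ω shunt a 1µV offset already masquerades as 100nA, which is why the shunt value matters far more than the gain here.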
 

The offset appears as a fixed DC voltage at the input. It does not change with signal level, and it is multiplied by the DC gain of the amp. It's difficult to get input offset below 1µV; thermoelectric voltages in the solder joints can be greater than that. For example, a tin-lead solder joint will generate about 5µV/°C for a temperature difference across the joint.

The connection of concern is the ground path between the shunt resistor ground (R1 on your schematic), the R3 ground, and the ADC input ground. Make sure those don't carry any power supply current.

The best way to avoid any ground issues is to use a Kelvin (4-wire) connection to R1 with a differential-input op-amp circuit.
 

Thanks for the reply. I'm actually now wondering if the op-amp is really necessary. I just replaced the 10Ω shunt resistor with a 100Ω one and connected a 30MΩ resistor from +5V through the shunt resistor; the value I got was about the same as the theoretical value, which I'm quite happy about. The ADC has a 3.2pF input sampling capacitor and according to the datasheet, it takes the same time as the sample time to charge it (267ms). While I don't understand how the sample frequency and conversion frequency can be the same, 267ms is a lot of time to charge a capacitor that small. The datasheet explains that the input resistance should ideally be zero, but the way I see it, there can be plenty of input resistance without hindering the charging of the cap too much. Could someone shed some light on this?
 

1 LSB out of 18 bits is 1 part in 262,144. According to pg. 12 of the data sheet, the A/D input impedance is 2.25MΩ/PGA, where PGA is the A/D input amplifier gain. The minimum input impedance, at the maximum PGA gain of 8, is thus about 281k. The source impedance must therefore be less than 281k/262,144 ≈ 1.07Ω so that it doesn't cause more than 1 LSB of error. That's why they say the source impedance should be near zero. Of course, if you need less than 18 bits of accuracy you can safely use a higher source resistance. For example, 100 ohms would give an equivalent error of about 1 LSB out of 11.5 bits (0.036%), still a pretty small error.
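The arithmetic above can be checked with a short script (assuming the simple divider model, where a source resistance R_s against the input impedance R_in gives a fractional error of roughly R_s/R_in):

```python
# Rough source-impedance budget for the MCP3421, following the reasoning above.
# Assumption: A/D input impedance is 2.25 Mohm / PGA, and a source resistance
# R_s in series forms a divider, so the fractional error is ~ R_s / R_in
# for R_s << R_in.

import math

R_IN = 2.25e6 / 8          # worst case: PGA gain of 8 -> ~281 kohm
lsb_18 = 1 / 2**18         # 1 LSB at 18 bits = one part in 262,144

r_max = R_IN * lsb_18      # largest R_s keeping the divider error under 1 LSB
print(f"Max source R for <1 LSB error at 18 bits: {r_max:.2f} ohm")

r_s = 100.0                # the 100-ohm shunt from the thread
err = r_s / R_IN           # fractional gain error from the divider
bits = math.log2(1 / err)  # resolution at which this error equals 1 LSB
print(f"100 ohm source -> {err * 100:.3f}% error, ~1 LSB at {bits:.1f} bits")
```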
 

I'm afraid I don't quite understand. Yesterday I had an input impedance of 30MΩ; the accuracy should then have been 281k/30M = 0.0094 steps, which would be way beyond non-functional, no? Also, why does the number of bits matter when it comes to the input impedance? Isn't the point to charge the sample capacitor, then disconnect the cap from the input and then start the conversion? The circuit that does the conversion only cares about the charge voltage of the cap, not the input impedance. I also don't know how they come up with the number 2.25MΩ/PGA. They say that the input impedance is due to the sample capacitor alone, so if the input of the ADC looks like the image below, wouldn't it be wrong to call this an impedance? Unless we are talking about AC, where a reactance of 281kΩ would require a frequency of 177kHz.

[ADC input circuit image: 2430v1d.jpg]
 

In your circuit above, you have left out the resistance of the current source charging C1 and the load resistance discharging it. The ratio of the two gives you the accuracy of the voltage transfer, and as said, for an 18-bit conversion you need less than one part in 262,144 of inaccuracy or you defeat the purpose of an 18-bit converter. Start off the other way around: how accurately do you need to measure your current? 1%? 0.01%? Then decide on the number of bits you need to get this accuracy, then work out the current shunt.
Frank
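The "start from the accuracy" approach works out like this (a sketch assuming 1 LSB is the entire error budget, so you need 2^N ≥ 1/accuracy):

```python
# Working the accuracy requirement backwards to a bit count, as suggested.
# Assumption: 1 LSB is the error floor, so the smallest usable converter has
# 2^N >= 1/accuracy.

import math

for accuracy in (0.01, 0.001, 0.0001):         # 1%, 0.1%, 0.01%
    bits = math.ceil(math.log2(1 / accuracy))  # smallest N with 2^N >= 1/accuracy
    print(f"{accuracy * 100:6.2f}% -> at least {bits} bits")
```

For a 1% target, 7 bits is the bare minimum; a 10-bit converter leaves comfortable margin, which matches the advice later in the thread.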
 

Why is there a current source charging C1? The datasheet states "This input impedance is due to 3.2 pF internal input sampling capacitor"; I see no mention of a current source. Also, why would there be anything discharging the capacitor other than leakage current and the impedance of the conversion circuit? The capacitor will naturally charge/discharge when SW1 is closed. I would really like to see the schematic of the input of this adc :p I still don't understand how the accuracy is dependent on the impedance in the input path :( About the preferred accuracy, I have no demands; I just want it as accurate as possible by reasonable means. I think an accuracy of 1% for the whole ammeter would be acceptable.
 

I still don't understand how the accuracy is dependent on the impedance in the input path.
There is one other factor that relates to the impedance of the input path, and that is acquisition time. Normally they try to make A/D converters acquire the voltage as fast as possible; otherwise the application will be burdened by unreasonable delay times. But if your application does not require very fast response, you can trade off that input path impedance against acquisition time.

An A/D conversion is composed of two distinct phases. The first phase is the acquisition time. During that time the SW1 in your circuit is closed and the input capacitance can charge up to be equal to the signal voltage being sampled. The second phase is the actual conversion to digital in which SW1 is opened and the converter does whatever it needs to do (successive approximation or dual slope). Since the first phase of the process is normally time-limited, the impedance of the input path determines how completely the capacitor charges to the ultimate value. In some A/D converters this acquisition time is fixed. But in others it is programmable, often starting when the application software sets the input channel (assuming there is a multiplexor on the input). If yours is programmable, or if you never switch channels, there is a good chance you can decrease the importance of the impedance of the input path by providing a long enough acquisition phase.
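The settling argument above can be put in numbers, under a simple single-pole RC assumption (a real switched-capacitor input resamples repeatedly, so this is only a rough model):

```python
# During acquisition the sampling cap charges through the source resistance;
# the residual error after time t is exp(-t / (R*C)).
# Assumption: a single-pole RC model with the MCP3421's 3.2 pF cap.

import math

C = 3.2e-12                           # sampling capacitor, farads
for r_source in (100.0, 30e6):        # ohms: the 100-ohm shunt, and the 30 Mohm test
    tau = r_source * C
    # acquisition time needed to settle within 1 LSB at 18 bits:
    t_settle = tau * math.log(2**18)  # solve exp(-t/tau) = 2^-18 for t
    print(f"R = {r_source:>10.0f} ohm: tau = {tau:.3e} s, "
          f"18-bit settle in {t_settle:.3e} s")
```

Even at 30MΩ a single charge to 18-bit accuracy takes only about a millisecond in this model, which is why the repeated-switching (equivalent impedance) view is needed to explain the datasheet's much tighter limit.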
 

This is exactly what I thought. Two questions, though. The datasheet states:

The MCP3421 uses a switched-capacitor input stage
using a 3.2 pF sampling capacitor. This capacitor is
switched (charged and discharged) at a rate of the
sampling frequency that is generated by the on-board
clock.

The ADC can be programmed for 3.75, 15, 60 or 240 samples per second; it can also be set to a "single conversion mode" where the ADC goes into a low-power mode until it's called upon to do a conversion. What you just said is exactly what I thought: I should be able to find a simple RC time constant and then calculate the time it takes to charge the capacitor to a value that is sufficient for the desired resolution and accuracy. As I stated before, it doesn't take long to charge a 3.2pF capacitor. The datasheet says that the capacitor is charged and discharged at the rate of the sample frequency, but if one conversion takes 267ms, how much of this time is spent charging the cap and how much is spent actually converting the voltage? Also, using the one-shot conversion mode will not help, as SW1 will be open while the ADC is sleeping. The datasheet states the following:

Since the sampling capacitor is only switching to the
input pins during a conversion process, the above input
impedance is only valid during conversion periods. In a
low power standby mode, the above impedance is not
presented at the input pins. Therefore, only a leakage
current due to ESD diode is presented at the input pins.
The conversion accuracy can be affected by the input
signal source impedance when any external circuit is
connected to the input pins. The source impedance
adds to the internal impedance and directly affects the
time required to charge the internal sampling capacitor.
Therefore, a large input source impedance connected
to the input pins can increase the system performance
errors such as offset, gain, and integral nonlinearity
(INL) errors. Ideally, the input source impedance
should be zero. This can be achievable by using an
operational amplifier with a closed-loop output
impedance of tens of ohms.

Still though, how can the acquisition time due to an RC circuit (input path impedance and charge capacitor) be translated into an impedance? And how can the PGA alter this impedance? From everything I'm reading, the internal amplifier is behind the sample capacitor (behind SW2), so I can't see how it can be involved in charging the sample capacitor.
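For what it's worth, the textbook switched-capacitor model shows one way an RC switching action can be restated as an impedance; this is a generic sketch, not a claim about the MCP3421's actual internal circuit:

```python
# Switched-capacitor equivalent resistance (a hedged sketch, not the
# MCP3421's documented internals): each clock cycle the cap transfers
# charge q = C*V, so the average current is f*C*V and R_eq = 1/(f*C).
# If raising the PGA setting samples onto proportionally more capacitance,
# R_eq divides by the PGA, matching the "R/PGA" form in the datasheet.

C = 3.2e-12                # sampling capacitor, farads
R_SPEC = 2.25e6            # datasheet differential input impedance at PGA = 1
f_clk = 1 / (R_SPEC * C)   # internal switching rate implied by that spec

print(f"Implied internal sampling rate: {f_clk / 1e3:.0f} kHz")
for pga in (1, 2, 4, 8):
    print(f"PGA = {pga}: R_eq = {R_SPEC / pga / 1e3:.1f} kohm")
```

In this model the "impedance" is an average over many charge transfers per conversion, which is why it looks resistive even though the element is a capacitor.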
 

Plecto, the current source is what charges the capacitor: no current source, no charge on the capacitor, no voltage on the capacitor, nothing to amplify. :) The current must come from somewhere with a finite impedance, else the charging current would be infinite!
If you are after better than 1%, then a 10-bit A/D converter would be enough.
Frank
 

I have read a little bit about sample-and-hold circuits and I see no mention of a current source. Why should this be needed? Whatever is connected to the input will charge the capacitor; I can't see what good a current source would do. If there is anything charging the capacitor, surely it would be a buffer? But the datasheet states that there is no buffer.

A 10-bit ADC would probably suffice, but for that I would need many measuring ranges (I want it to measure up to 4A).
 

Current source in this case just means a source of current, not a constant-current source. The current is provided by the source voltage.

To cover a wide range you may need to switch the shunt resistor between different values so you get adequate signal at the low current levels and don't dissipate too much power at the higher current levels.
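A sketch of that trade-off, with illustrative (assumed, not recommended) shunt values chosen for roughly 100 mV full scale on each range:

```python
# Shunt-switching trade-off for a multi-range ammeter up to 4 A.
# The shunt values below are assumptions for illustration only: each is
# picked so full-scale current gives ~100 mV, then the full-scale
# dissipation is checked.

ranges = [          # (full-scale current in A, shunt in ohms) - assumed values
    (4.0,    0.025),
    (0.1,    1.0),
    (1e-3,   100.0),
    (1e-6,   100e3),
]

for i_fs, r in ranges:
    v_fs = i_fs * r      # full-scale shunt voltage
    p_fs = i_fs**2 * r   # power dissipated in the shunt at full scale
    print(f"{i_fs:>8.2e} A range: R = {r:>9.3f} ohm, "
          f"V_fs = {v_fs * 1e3:6.1f} mV, P = {p_fs * 1e3:10.4f} mW")
```

Note how the 4A range dissipates hundreds of milliwatts even at only 25mΩ, while the µA range needs a 100kΩ shunt to produce the same voltage.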
 

I did some math on this, trying to see how much the sample capacitor can charge in 267ms. With an input impedance of 3GΩ, the time constant will be 3.2×10⁻¹² × 3×10⁹ = 9.6ms. Putting this into the RC formula: 100×(1 − e^(−0.26667/(9.6×10⁻³))) ≈ 99.9999999999V. Unless my math is wrong, I truly can't see any scenario where the sample capacitor will not charge to full.
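Rerunning that arithmetic (note the time constant is 3.2 pF × 3 GΩ = 9.6 ms), under the same single-RC assumption:

```python
# Numeric check of the charging calculation above, assuming a single RC:
# how far does the 3.2 pF sampling cap settle in one 267 ms conversion period
# through a 3 Gohm source?

import math

C = 3.2e-12    # sampling capacitor, farads
R = 3e9        # the 3 Gohm source impedance from the post
tau = R * C    # time constant: 9.6 ms
t = 0.26667    # one conversion period at 3.75 SPS, seconds

settled = 100 * (1 - math.exp(-t / tau))  # percent of final value reached
print(f"tau = {tau * 1e3:.1f} ms, settled to {settled:.10f} % in {t * 1e3:.0f} ms")
```

So the conclusion holds: in this simple model the cap settles essentially completely. The catch, raised earlier in the thread, is that the cap is switched repeatedly at the internal sampling rate rather than charged once per conversion.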
 
