
Welcome to EDAboard.com


How to choose ADC resolution when measuring a DC signal?

Status
Not open for further replies.

Leo66
Newbie · Joined: Feb 27, 2023 · Messages: 2 · Helped: 0 · Reputation: 0 · Reaction score: 0 · Trophy points: 1 · Activity points: 32
I'm currently measuring a DC current signal and planning to use a TIA (transimpedance amplifier) to convert the current into a DC voltage that an ADC can read out. The ADC's sampling frequency can be very fast compared with the signal bandwidth: 2 MHz or higher in my design, with a 0.5 ms readout time, which corresponds to a 1 kHz signal BW, so the OSR can be 1k or more. My target SNR is >70 dB (noise only; distortion is not a concern here), but I cannot design a 12-bit ADC for other reasons.
This leaves me confused about how to choose the ADC resolution, for two reasons:
1. Since the OSR is 1k, a 7-bit (42 dB) ADC at 2 MHz should be enough, and 70 dB can be achieved by averaging (filtering).
2. Filtering works only when the signal swings enough that the quantization error can be treated as random white noise. But with a DC input the quantization error is fixed, so averaging leaves the result unchanged. Will the filter still work?
The two arguments conflict, and I lean toward 2.: averaging is futile.

BUT, but, but: the TIA also contributes noise, and while the total integrated noise from 0.5-1kHz is low enough, from 0.5-1MHz it is quite large. So the voltage sent to the ADC is a DC signal plus AC noise. Will the TIA's noise make the ADC's quantization error more random, so that the situation is closer to 1., i.e. averaging can be applied here?
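The tension between the two reasons above can be illustrated numerically. This is a minimal sketch (not from the thread; the DC level, LSB size, and 0.5 LSB rms noise figure are assumed values) comparing averaging of a quantized noise-free DC against the same DC with TIA-like noise acting as natural dither:

```python
import numpy as np

rng = np.random.default_rng(0)
lsb = 1.0                       # quantizer step size
dc = 3.3 * lsb                  # hypothetical DC level, 0.3 LSB above a code
n = 100_000                     # number of averaged samples (the OSR)

def quantize(v):
    """Ideal mid-tread quantizer with step `lsb`."""
    return np.round(v / lsb) * lsb

# Reason 2: noise-free DC -> quantization error is fixed, averaging is futile
err_clean = abs(quantize(np.full(n, dc)).mean() - dc)

# Reason 1 with TIA noise: ~0.5 LSB rms noise decorrelates the error,
# so averaging recovers the sub-LSB value
err_noisy = abs(quantize(dc + rng.normal(0.0, 0.5 * lsb, n)).mean() - dc)

print(err_clean, err_noisy)     # ~0.3 LSB vs a few mLSB
```

So both intuitions are right: averaging a noiseless quantized DC stays stuck at the fixed error, while sufficient input-referred noise makes it behave like case 1.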
 

It makes no sense to abstract away "distortion", i.e. ADC linearity, particularly for a DC application. With an OSR of 1000 you'd typically consider a SD ADC, or at least a simple integrator (first-order SD modulator). A regular 7-bit ADC will hardly provide more than 7 or 8 bits of precision and linearity.
 

Really, thanks. I omitted distortion here because noise comes first for now. I've rethought the noise problem, and I'm not sure whether my understanding is right:
1. For a TIA carrying both the signal and DC-1 kHz noise (assuming a 1 kHz filter is set), there is no difference between the ADC sampling at 2 kS/s or 2 MS/s when the TIA's noise is dominant, because that noise is concentrated below 1 kHz.
2. If we take quantization noise into consideration along with the TIA noise, the output noise performance of a 12-bit 2 kS/s ADC is equivalent to that of a 7-bit 2 MS/s ADC within the 1 kHz BW.
3. Points 1 and 2 hold only when the input varies enough.
4. To measure a DC signal with a Nyquist ADC, the LSB needs to be smaller than the total integrated noise; otherwise the noise never crosses a code boundary, and the TIA output is indistinguishable from a noise-free DC signal. In that case, as in your reply, "a regular 7 bit ADC will hardly provide more than 7 or 8 bit precision and linearity".
5. A delta-sigma modulator with DAC feedback can accumulate and decorrelate the error. I've actually designed a CT-DSM, but it requires really large passives, so I wonder whether a simple TIA + SAR would work.
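The equivalence claimed in point 2 can be checked against the standard oversampled-SQNR formula, SNR = 6.02·N + 1.76 + 10·log10(OSR) dB, which assumes the quantization noise is white (i.e. the "input varies enough" caveat of point 3 holds). A quick sketch:

```python
import math

def sqnr_db(bits, osr=1):
    """Ideal SQNR of an N-bit quantizer averaged down by `osr`,
    assuming white quantization noise."""
    return 6.02 * bits + 1.76 + 10 * math.log10(osr)

print(sqnr_db(12, osr=1))      # 12-bit Nyquist ADC at 2 kS/s: ~74 dB
print(sqnr_db(7, osr=1000))    # 7-bit ADC at 2 MS/s, averaged to 1 kHz: ~74 dB
```

Both come out near 74 dB in the 1 kHz band, consistent with point 2 (and comfortably above the >70 dB target), provided the white-noise assumption is not broken by a static DC input.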
 

You need some residue integration (basically delta-sigma) or very well-controlled dithering (basically modulation with a known waveform, and even then I'm not sure it's a good solution) to make it work. The FFT of a DC signal is a unit impulse at 0 Hz; it doesn't have any dBs or whatever that you can exploit with averaging 😁. Representing quantization noise in dB inherently assumes there's a wave; there isn't, it's just constant.

Just thinking out loud here, feel free to add on top of this or poke holes in it, but keep in mind this is a late-night "I'm writing from my bed" kind of thing, not a well-thought-out one. If I really wanted to get some benefit from averaging, and if I had a perfect ADC (0 INL/DNL), I'd probably modulate with a triangle wave of exactly ±0.5 LSB. That way, original signal + modulating signal would always create some bit transitions, and since I know the modulating signal is a symmetric triangle wave (50% duty cycle), I could get some more information out of it. For example, if I get a 40/60 distribution of ADC_code/ADC_code-1 after oversampling a billion times, then I know my DC signal is 10% below ADC_code; or if I get an 80/20 distribution of ADC_code/ADC_code-1, I know it is 30% above ADC_code. I mean, it's no delta-sigma, there's no real noise shaping, but at least I'd still be sweeping some regions to get different codes.

Implementing this would be more painful than implementing a SAR ADC with 2-3 bits more resolution, unless it can somehow be integrated into the TIA. 😁
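The triangle-dither idea above can be simulated directly. This is a sketch under the post's own assumptions (ideal ADC with 0 INL/DNL, ±0.5 LSB symmetric triangle; the DC level and period are made-up values), showing that the code distribution after heavy oversampling recovers the sub-LSB DC level:

```python
import numpy as np

lsb = 1.0
dc = 5.0 - 0.10 * lsb            # hypothetical DC, "10% below ADC_code" (code 5)
n = 1_000_000                    # oversampled a lot
period = 1000                    # triangle period in samples

# symmetric +/-0.5 LSB triangle dither
t = np.arange(n) / period
tri = 0.5 * lsb * (2 * np.abs(2 * (t - np.floor(t + 0.5))) - 1)

codes = np.round((dc + tri) / lsb)   # ideal ADC: perfect mid-tread quantizer
estimate = codes.mean() * lsb        # the ADC_code / ADC_code-1 split encodes dc
print(estimate)                      # ~4.90
```

The average lands at ~4.90 LSB because the triangle spends 90% of its time pushing the input above the 4.5 LSB code boundary, matching the distribution argument in the post.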
 

Hi,

with a triangle wave that was exactly +-0.5LSB.
Good idea. This technique is called "dithering" as you already mentioned.

But there is no need for it to be exactly +/-0.5 LSB.

The requirements are:
* amplitude >= +/-0.5 LSB (it could be 5 LSB; it just needs to ensure that the ADC LSB toggles)
* DC-free (so as not to modify the overall DC accuracy)
* equal distribution of voltage over time (triangle or sawtooth, but not a sine shape) (to ensure good INL/DNL in the sub-LSBs)
* best if the frequency (period) fits an integer number of times into the averaging window (so as not to generate alias frequencies)

Klaus
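The "DC-free" requirement in the list above is easy to demonstrate in simulation. A minimal sketch, assuming an ideal ADC and a made-up DC level: the same ±0.5 LSB triangle dither gives an unbiased sub-LSB estimate, while adding a small DC offset to the dither shifts the measured average by exactly that offset:

```python
import numpy as np

lsb, n, period = 1.0, 100_000, 1000
t = np.arange(n) / period
tri = 0.5 * lsb * (2 * np.abs(2 * (t - np.floor(t + 0.5))) - 1)  # +/-0.5 LSB triangle

dc = 4.70 * lsb                  # hypothetical DC input
est_clean = np.round((dc + tri) / lsb).mean() * lsb                # ~4.70, unbiased
est_biased = np.round((dc + tri + 0.2 * lsb) / lsb).mean() * lsb   # dither offset leaks in
print(est_clean, est_biased)     # ~4.70 vs ~4.90
```

Any DC in the dither waveform appears one-for-one in the result, which is why the requirement matters even though the dither amplitude itself is uncritical.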
 
