
24bit ADC effective number of bits

Status
Not open for further replies.

asimov_18

Hi,
I am making a circuit for reading a sensor that needs a resolution of 100-500 nV. I thought about using one of the 24-bit ADCs available.
Plenty of them are out there. Using a 1 V reference, a 24-bit ADC can resolve the given range easily, at least in theory. Now comes the question: most ADCs you see specify peak-to-peak noise of, at best, 5-10 microvolts. If I understand correctly, the best one can measure is limited by the noise voltage. In such a scenario, what is the point of having a 24-bit ADC? The noise voltage will make you lose the resolution! Is it just a marketing stunt? Or is there something I am clearly missing?
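For reference, a quick sketch of the arithmetic behind the question. The 1 V reference is taken from the post above; the 5 uV peak-to-peak noise is an assumed typical datasheet figure, and "noise-free bits" here just means the resolution left after the noise swamps the bottom codes:

```python
import math

# Ideal 24-bit LSB with a 1 V reference vs. a typical noise figure.
VREF = 1.0                    # 1 V reference, as assumed in the post
BITS = 24
lsb = VREF / 2**BITS          # ideal code width
noise_pp = 5e-6               # typical datasheet p-p noise (assumption)
codes_buried = noise_pp / lsb
noise_free_bits = BITS - math.log2(codes_buried)
print(f"Ideal LSB: {lsb * 1e9:.1f} nV")                 # ~59.6 nV
print(f"Codes buried in the noise: {codes_buried:.0f}")
print(f"Noise-free resolution: ~{noise_free_bits:.1f} bits")
```

So the ideal LSB is indeed below the 100 nV target, but with 5 uV p-p noise only about 17-18 of the 24 bits are actually noise-free, which is exactly the discrepancy the question is about.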
 

I am making a circuit for reading a sensor that needs a resolution of 100-500 nV
Golly!
Thermal noise alone is going to kill you, even if you ignore externally induced noise pickup.

But if you really have to do this, use a lower-resolution ADC with a much higher reference voltage, and a very low noise switched-gain preamplifier with auto-ranging in front of it.

That should enable you to at least get some decent amplitude into the ADC.

One further thought. Anything you can do to limit the analog bandwidth is going to reduce the noise power.
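To put a number on that bandwidth remark: thermal (Johnson) noise alone scales with the square root of bandwidth, v_n = sqrt(4kTRB). A sketch assuming a hypothetical 1 kOhm source resistance at room temperature:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # room temperature, K
R = 1e3            # assumed 1 kOhm source resistance

for B in (10.0, 1e3, 1e5):                 # analog bandwidth, Hz
    v_n = math.sqrt(4 * k * T * R * B)     # thermal noise voltage, rms
    print(f"B = {B:8.0f} Hz -> {v_n * 1e9:6.1f} nV rms")
```

With these assumptions, even a 1 kOhm source already produces roughly 130 nV rms in a 1 kHz bandwidth, so the analog bandwidth has to be held down hard before a 100 nV resolution target is even physically meaningful.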
 

What you are missing is signal conditioning. Also, even 24-bit ADCs don't have 24-bit noise specs, so their outputs fluctuate. They're 24 bits because they're usually delta-sigma converters, where it's easy to put out 24 bits, and the extra bits are sometimes beneficial in the digital processing that comes afterwards.

It's not always a good idea to feed a data converter a signal close to its LSB, because that signal is going to get lost anyway. What is usually done is to amplify the small signal with a dedicated, very low noise amplifier, so that the data converter noise won't be an issue.

Generally, resolution is needed when you need more steps in the data conversion; it's not for sensing very small signals. This sounds odd, of course, but what I mean is: if you are interested in having 2^24 steps between 100-500 nV, use a 24-bit ADC, but don't pick one just because its LSB is in the nV range. Your input signal must be amplified anyway.

Edit: Wait, I'm sorry, I misunderstood your question. You actually want 100-500 nV resolution; I thought you wanted to measure a signal with 100-500 nV amplitude. Sorry about that. Still, signal conditioning is the answer: zoom into the range you are interested in, amplify, and send it to the data converter. There is no true 24-bit data converter, and even the most trivial components will add more noise than 100-500 nV, so you need to condition the signal anyway.
 
Yes definitely signal conditioning, and some extra low noise gain.

Then you can average many samples in software (with a revolving stack, i.e. a circular buffer) and synthesise the extra resolution.

By that I mean: if something measures between, say, 49 and 50 and you take 1,000 samples, some are 48, some 49, some 50, and some 51 (for example).
You add the 1,000 samples together, then divide by 1,000 and get a statistical average such as 49.253.

The resolution goes way deeper than you actually need to directly measure.
This works as long as any noise is random, or at least non coherent, which it usually is.
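A minimal simulation of that averaging idea, with hypothetical numbers; it assumes roughly 1 LSB of random rms noise riding on the input, which is what makes the trick work:

```python
import random

random.seed(1)
true_value = 49.253          # a value sitting between ADC codes
n = 1000
# Each "reading" is the true value plus ~1 LSB of rms noise,
# rounded to the nearest integer code:
samples = [round(true_value + random.gauss(0.0, 1.0)) for _ in range(n)]
avg = sum(samples) / n
print(f"Average of {n} integer readings: {avg:.3f}")   # lands close to 49.253
```

The individual readings are all integers, yet their mean recovers the fractional value, exactly as described above.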

This is a reliable method, and it averages out the noise, but it's slow to respond to a step change.
It generally works better with lower bit resolution and faster sampling.
Once you go beyond a certain bit resolution, all the least significant bits are just random noise, and worthless.

Find out the practical limit for measurement resolution, then go for maximum sampling speed and do the rest in software.
 

Hi,

I just finished a mains metering device (U_RMS, I_RMS, P_true) with a 24-bit delta-sigma ADC and no external amplifier.
For the current measurement I used a 2 mOhm shunt with a current range of +/-50 A (+/-100 mV ADC input range).
The values are calculated every 20 ms from the values of the last 20 ms. (No heavy filtering: 0% to 100% within 30 ms.)
The RMS current as well as the DC offset is stable down to +/-2 mA, which means 4 uV at the ADC input.
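The shunt arithmetic above, spelled out for anyone checking the numbers (all values taken from the post):

```python
R_shunt = 2e-3            # 2 mOhm shunt
I_fullscale = 50.0        # +/-50 A current range
v_fullscale = R_shunt * I_fullscale
print(f"Full-scale ADC input: +/-{v_fullscale * 1e3:.0f} mV")   # +/-100 mV

I_stability = 2e-3        # observed +/-2 mA stability
v_stability = R_shunt * I_stability
print(f"Input-referred stability: {v_stability * 1e6:.0f} uV")  # 4 uV
```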

*******
But in most cases an amplifier is a benefit.

Be sure that the input-referred errors of the amplifier circuit are smaller than the input-referred errors of the ADC
(noise, distortion, gain drift, offset drift...).

High-resolution delta-sigma ADCs in particular have good performance. Read the datasheets.

******
It generally works better with lower bit resolution and faster sampling.
With low-resolution ADCs the input-referred noise is often less than one LSB; then the averaging method doesn't work properly for DC voltages.
In this case one needs to add dither to the analog input signal: a triangle, sawtooth, or any other uniformly distributed voltage. No sine.
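A toy demonstration of why the dither is needed, using an ideal quantizer and a hypothetical DC level; the "triangle" dither here is simplified to a repeating ramp spanning exactly one LSB:

```python
import math

def quantize(v):
    """Ideal mid-tread quantizer, 1 LSB = 1.0."""
    return math.floor(v + 0.5)

dc = 10.3        # DC input expressed in LSBs (hypothetical)
n = 4000

# Without dither, every noiseless conversion returns the same code,
# so averaging cannot recover the 0.3 LSB fraction:
no_dither = sum(quantize(dc) for _ in range(n)) / n

# A ramp dither spanning exactly +/-0.5 LSB spreads the conversions
# across the two neighbouring codes in the right proportion:
dithered = sum(quantize(dc + (i % 100) / 100 - 0.5) for i in range(n)) / n

print(no_dither)   # 10.0 -- stuck on the nearest code
print(dithered)    # ~10.3 -- sub-LSB value recovered
```

The uniform (triangle/sawtooth-like) spread is what makes the code proportions come out right; a sine dither spends too much time near its extremes and would bias the average, which is why the post says "no sine".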
 
Klaus is quite right: you do actually need some noise with which to average, or else the basic method breaks down.
How much noise is debatable, which is why I think jumping in and testing some real-world circuitry is the way to go initially, rather than pencil and paper or simulation.

Once you have your last three or four bits jumping around merrily with noise, that is about as far as you can probably go with bit resolution. From there you should concentrate on increasing sampling speed, and on finding a good software compromise between maximum final resolution and response time to a step change.

The whole thing becomes an exercise in iterative development moving towards the best final result.
 
It goes without saying that to achieve that low a noise level, the board layout is as critical as the actual components.
Where one places and orients the components, how the signals are routed, how power and ground are distributed... these make a difference of night and day.
 
Once you have your last three or four bits jumping around merrily with noise, that is about as far as you can probably go with bit resolution. From there you should concentrate on increasing sampling speed, and on finding a good software compromise between maximum final resolution and response time to a step change.

In theory the system works even if there is no noise: you keep adding the signal (say 1024 times) and the accumulated signal builds up to a measurable level. If there is noise, the relative noise level goes down (if it is truly random, as root n, where n is the number of additions). This principle is applicable only to repetitive signals.

Often the noise is far greater than the signal and each ADC value is, say, 6-7 bits (mostly contributed by noise). During the additive process, the noise goes down and the signal builds up. If, say, the noise contributes 8-10 bits and the signal 1-2 bits, we need to add 512 (or so) times so that the signal can be seen clearly.
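A sketch of that additive (coherent) averaging, with the noise deliberately far above the repetitive signal; all numbers here are hypothetical:

```python
import math
import random

random.seed(0)
n_avg = 512                 # number of repetitions added together
n_pts = 64                  # samples per repetition of the signal
noise_rms = 4.0             # noise far larger than the signal

# Small repetitive signal, well below the noise floor:
signal = [0.5 * math.sin(2 * math.pi * i / n_pts) for i in range(n_pts)]

acc = [0.0] * n_pts
for _ in range(n_avg):
    for i in range(n_pts):
        acc[i] += signal[i] + random.gauss(0.0, noise_rms)

avg = [a / n_avg for a in acc]
residual = math.sqrt(sum((avg[i] - signal[i]) ** 2 for i in range(n_pts)) / n_pts)
print(f"Residual noise: {residual:.3f} rms "
      f"(expected ~{noise_rms / math.sqrt(n_avg):.3f})")
```

The residual comes out near noise_rms / sqrt(512), i.e. the root-n improvement described above, and the 0.5-amplitude signal that was invisible in any single sweep stands clear of it.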
 

If there is absolutely no noise, and no bit uncertainty, every single reading will be identical. Averaging a million samples would just come back to the original unvarying number, so you really would gain nothing.

There must either be some dither added, or white noise added to give some random variation so that values BETWEEN the original bit thresholds can be resolved.
The more samples averaged, the finer the possible resolution after averaging.
That is where an increased sampling rate pays off. You can achieve higher resolution without paying the penalty of losing speed of response.
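The trade described here follows the usual oversampling rule of thumb: for random noise, every factor-of-4 increase in the number of averaged samples buys one extra bit of effective resolution, i.e. extra bits = 0.5 * log2(n):

```python
import math

# Rule of thumb: averaging n samples of a noisy input gains 0.5*log2(n)
# bits of effective resolution (valid only while the noise is random):
for n in (4, 16, 256, 1024):
    extra_bits = 0.5 * math.log2(n)
    print(f"average {n:5d} samples -> +{extra_bits:.1f} bits, "
          f"output rate divided by {n}")
```

This makes the speed/resolution trade explicit: four extra bits cost a factor of 256 in output rate, which is why a fast converter with a few noisy bits can beat a slow one once the averaging is done in software.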
 

