Here is a 3-bit A/D. There are 8 different voltage steps, including zero. Its resolution is 8 levels, not 7.
I believe Vref / 1024 is correct.
If you examine my initial reply, it makes no counter assertion as to the resolution of a 10-bit ADC; in fact, I also point out there are 1024 unique values, ranging from 0 to 1023.
And I should also add that the choice of dividing by 1023 rather than 1024 is due less to the device's resolution than to the choice of coding scheme, a point which follows.
Nice graph; unfortunately, in this context it incorrectly suggests that the ADC output of 0b111 represents Full Scale, or Vref, which is clearly NOT the case.
The output of an ADC can be coded using one of several widely used ADC coding schemes, some more applicable to bipolar than to unipolar ADCs.
They all have their advantages and disadvantages and the choice largely depends on the application requirements, quantization characteristics, output range, minimization of various sources of error, etc.
In this case, the chosen coding scheme is the simplest one, commonly referred to as natural or straight binary coding.
For a unipolar ADC utilizing straight binary coding, the maximum return value is not Full Scale (FS) or Vref; it is instead Vref minus one Least Significant bit (LSb).
In the case of your 3-bit ADC with Vref = 5v, the maximum analog value the output of the ADC can represent is 4.375v, not the 5v Full Scale of Vref.
The reason, of course, is that the largest value returned by the ADC is 0b111, or 7, which leaves a quantization uncertainty, or error, of 1 LSb, equating to 0.625v in this case.
The following formula/algorithm, where ADo represents the output of the ADC, has a range of 0v to Vref - LSb, or if Vref = 5v, a range of 0v to 4.375v:

Ain = (Vref / 2^n) * ADo

Or in the case of a 3-bit ADC:

Ain = (5v / 8) * ADo = 0.625v * ADo
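To make the straight binary conversion concrete, here is a small Python sketch (the function name is my own, chosen for illustration):

```python
def adc_to_volts(ad_out: int, vref: float, n_bits: int) -> float:
    """Convert a straight-binary ADC code to volts.

    The range is 0v to Vref - 1 LSb, since the top code (2^n - 1)
    never reaches Vref itself.
    """
    lsb = vref / (2 ** n_bits)      # 1 LSb = Vref / 2^n
    return lsb * ad_out

# 3-bit ADC, Vref = 5v: the top code 0b111 (7) yields 4.375v, not 5v
print(adc_to_volts(7, 5.0, 3))      # 4.375
```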
The following are more accurate graphics depicting the output limitations of straight binary coding:
Another example of a 4-bit ADC and the output limitations of straight binary coding in table format:
If the full scale output range of 0v to Vref is required, which is often the case in many applications such as an ohmmeter, then some additional software or hardware technique must be utilized to achieve this output range. These techniques can be observed in real devices as Tare/Clear or Calibration features. An ADC coding scheme which produces results over the full range would be beneficial in this application, as it would allow a zero-ohm reading to be obtained when the probes or test points are shorted, which is exactly what one expects from an ohmmeter.
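To illustrate the Tare/Clear idea in software (a hypothetical sketch, not the code of any particular instrument): the reading taken with the probes shorted is stored and subtracted from subsequent measurements, so a short reads as zero.

```python
class TaredMeter:
    """Hypothetical sketch of a Tare/Clear feature for an ohmmeter-style reading."""

    def __init__(self) -> None:
        self.zero_offset = 0

    def tare(self, raw_reading: int) -> None:
        # Capture the ADC reading taken with the probes/test points shorted
        self.zero_offset = raw_reading

    def measure(self, raw_reading: int) -> int:
        # Subtract the stored offset; clamp so a short reads exactly zero
        return max(raw_reading - self.zero_offset, 0)

meter = TaredMeter()
meter.tare(3)             # reading with probes shorted
print(meter.measure(3))   # 0 - a short now reads zero
print(meter.measure(10))  # 7
```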
One of the simplest techniques to achieve full scale output is the following formula/algorithm:
Ain = (Vref / (2^n -1)) * ADo
Or in the case of a 3-bit ADC:

Ain = (5v / 7) * ADo
Or in the case of a 10-bit ADC:
Ain = (Vref / 1023) * ADo
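A quick Python sketch of the same scaling (names are my own, for illustration only):

```python
def adc_to_volts_full_scale(ad_out: int, vref: float, n_bits: int) -> float:
    """Scale a straight-binary ADC code over the full 0v-to-Vref range
    by dividing Vref by 2^n - 1 instead of 2^n."""
    return (vref / (2 ** n_bits - 1)) * ad_out

# 3-bit ADC, Vref = 5v: the top code 7 now maps to the full 5v
# (to within floating-point rounding)
print(adc_to_volts_full_scale(7, 5.0, 3))
```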
Admittedly, the use of the phrase "correct value" in my initial reply was a poor choice, as the implementation of the technique is not appropriate in all cases.
The technique does not affect the ADC device's resolution; however, it does have the effect of scaling the output over the entire range of 0v to Vref. The technique does slightly increase quantization uncertainty/error and introduce a small amount of scaling error; however, these errors are far less than the 1% tolerance of a precision resistor.
Additional information concerning ADC/DAC coding schemes is available in the following document:
FUNDAMENTALS OF SAMPLED DATA SYSTEMS - ANALOG-DIGITAL CONVERSION
Of course, there are other techniques which can be implemented in either hardware or software to provide coded output over the Full Scale range; however, most are considerably more elaborate.
On a side note, for ADC- or DSP-intensive applications, the more useful coding scheme of fractional binary coding is typically utilized, which is always normalized to Full Scale (FS). However, this coding scheme requires considerably more arithmetic operations or a more elaborate ADC hardware module.
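A minimal sketch of the fractional interpretation (my own illustration): the code is read as a binary fraction of Full Scale, so no voltage scaling is needed until the very end of the signal chain.

```python
def to_fraction_of_fs(ad_out: int, n_bits: int) -> float:
    """Interpret a unipolar ADC code as a binary fraction of Full Scale:
    code / 2^n, always in the range [0, 1)."""
    return ad_out / (2 ** n_bits)

# 3-bit example: 0b111 (7) is 0.875 of Full Scale
print(to_fraction_of_fs(7, 3))      # 0.875
```

On fixed-point DSP hardware this fraction would typically be held in a Q-format register rather than a float; the Python float here is only for illustration.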
I hope I've cleared up any confusion as to the motive of my suggestion.
BigDog