
Sensor to ADC interface

a8811323

Hi guys, I have a question about the interface between a sensor and an ADC.
I have a power detector that produces voltage levels ranging from 0.26 V to 1.1 V.
My selected ADC input voltage range is 0-3.3 V.

My question is: is it really necessary to build an interface using an op-amp to map 0.26 V to 0 V and 1.1 V to 3.3 V accordingly?
Since the ADC input range covers the detector's output voltage, I am thinking that, theoretically, there is no need to build an interface, and doing so would be a waste of time.
 

An interface circuit serves two purposes. #1. It can shift the level and range to take full advantage of the ADC range. #2. It can provide a low-impedance drive to the ADC.

As for #1, the range of 0.26 V to 1.1 V spans 0.84 V, which is only about a quarter of your ADC input range. This means you lose roughly 2 bits of resolution. So if it was a 12-bit ADC, you effectively have a 10-bit ADC. Depending on the application, that may be perfectly fine.
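To put rough numbers on that (a quick sketch; the 12-bit figure is just the example used above, and the results are approximate):

```python
from math import log2

v_min, v_max = 0.26, 1.1       # power detector output range (V)
v_ref = 3.3                    # ADC full-scale input range (V)
adc_bits = 12                  # example resolution from the post above

signal_span = v_max - v_min                # 0.84 V
fraction_used = signal_span / v_ref        # ~0.25 of the ADC range
bits_lost = log2(v_ref / signal_span)      # ~1.97 bits
effective_bits = adc_bits - bits_lost      # ~10 bits

print(f"fraction of ADC range used: {fraction_used:.2f}")
print(f"bits of resolution lost:    {bits_lost:.2f}")
print(f"effective resolution:       {effective_bits:.1f} bits")
```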

As for #2, this is mostly for high-impedance sensors. ADCs often require a driving impedance of 10 kΩ or less. If your power detector's output impedance is below 10 kΩ, or whatever your ADC spec requires, then an impedance-lowering interface is not necessary.
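As a rough illustration of why the driving-impedance spec exists: a SAR-type input has to charge its sample capacitor through the source impedance within the acquisition window. The 10 pF sample capacitance and 10 kΩ source below are assumed example values, not figures from this thread or any specific datasheet:

```python
from math import log

r_source = 10e3      # assumed sensor output impedance (ohms)
c_sample = 10e-12    # assumed ADC sample-and-hold capacitance (farads)
adc_bits = 12        # example resolution

# Time for the sample capacitor to settle to within 1/2 LSB of the input,
# charging through r_source: t = R * C * ln(2^(N+1))
t_settle = r_source * c_sample * log(2 ** (adc_bits + 1))
print(f"required acquisition time: {t_settle * 1e6:.2f} us")  # ~0.9 us
```

If the ADC's acquisition time is shorter than this, a buffer (or a lower source impedance) is needed; if it is longer, the sensor can drive the ADC directly.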

In summary, it is quite possible that for your application, no special interface circuitry is needed.

I just remembered one other function. If the power detector can ever produce an output when the ADC circuit is powered down, that signal may do damage to the ADC. An interface circuit can prevent that if it is a problem. Again, probably not a problem in your case.
 

Hi,
I have understood it clearly. However, one more question. In my case, the output voltage range takes up only about a quarter of my ADC input range, so, as you said, 10 bits of effective resolution is probably enough. My question: the input range is still 0 to 3.3 V, but using a resolution of 10 bits (that is, 3.22 mV/bit) instead of 12 bits (0.81 mV/bit) simply makes it less accurate. So what is the point of using fewer bits of resolution?
In other words, why is it better to take full advantage of the ADC range?
Any help is appreciated.
 

The resolution remains 3.22 mV/step; however, the usable range gets reduced.
Unless the application demands more, one can go ahead with this.
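Putting numbers on that: 3.22 mV/step corresponds to a 10-bit conversion over 0-3.3 V. A sketch of how many codes the 0.26-1.1 V signal actually exercises, and what an op-amp stage mapping it onto the full 0-3.3 V range would buy (the gain and step values are approximate and only for illustration):

```python
v_min, v_max = 0.26, 1.1
v_ref = 3.3
adc_bits = 10
lsb = v_ref / 2 ** adc_bits           # ~3.22 mV per code

# Without any interface: only part of the code range is exercised.
code_lo = int(v_min / lsb)            # ~80
code_hi = int(v_max / lsb)            # ~341
codes_used = code_hi - code_lo        # ~261 codes, roughly 8 bits effective

# With an op-amp mapping 0.26..1.1 V onto 0..3.3 V:
gain = v_ref / (v_max - v_min)        # ~3.93
step_unscaled = lsb                   # ~3.22 mV of detector signal per code
step_scaled = lsb / gain              # ~0.82 mV of detector signal per code

print(codes_used, round(gain, 2),
      round(step_unscaled * 1e3, 2), round(step_scaled * 1e3, 2))
```

So scaling does not change the ADC's own step size; it just means each code corresponds to a smaller slice of the detector's output.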
 

Hi,

You didn't tell us your ADC resolution.

Instead of amplifying the input voltage you could decrease the ADC reference voltage.
Maybe use a 1.25 V shunt reference. I expect an improvement in overall performance (rough numbers in the sketch below):
* Better resolution
* Better REF stability and precision. 3.3 V sounds like you currently use VCC as the reference; I don't recommend this.
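Rough numbers behind the reference suggestion, assuming a 10-bit converter purely as an example (the exact bit count doesn't matter, only the ratio between the two reference choices):

```python
v_min, v_max = 0.26, 1.1
adc_bits = 10
n_codes = 2 ** adc_bits

for v_ref in (3.3, 1.25):
    lsb = v_ref / n_codes
    codes_used = (v_max - v_min) / lsb
    print(f"Vref = {v_ref:.2f} V: LSB = {lsb * 1e3:.2f} mV, "
          f"codes covering 0.26..1.1 V = {codes_used:.0f}")
```

With a 1.25 V reference, the 1.1 V maximum still fits below full scale and each step is about 2.6 times finer.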

****
Tunelabguy gave good hints.
I agree that using only 0.26 V..1.1 V of the total 0 V...3.3 V is often a simple and useful approach.

I often see people aiming for high resolution: 10 bits, 12 bits, 16 bits... but at the same time they use VCC as the reference, which kills the precision down to about 6 bits.
Because VCC may be noisy and may drift with load current, time and temperature --> from a 10-bit ADC result you can treat the 6 MSBs as reliable and the 4 LSBs as unreliable.
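To illustrate the "reliable MSBs" point, here is a sketch of how reference error propagates into the result. The ±2% supply variation is an assumed figure for the sake of the example, not a measurement:

```python
from math import log2

adc_bits = 10
n_codes = 2 ** adc_bits
ref_error = 0.02                           # assumed +/-2% VCC noise/drift

# The ADC result scales inversely with the reference, so a reading near
# mid-scale shifts by roughly the same fraction as the reference error.
code_error = ref_error * (n_codes / 2)     # ~10 codes at mid-scale
unreliable_lsbs = log2(code_error)         # ~3.4 LSBs carry no information

print(f"~{code_error:.0f} codes of error -> about {unreliable_lsbs:.1f} LSBs unreliable")
```

With a noisier supply, or readings near full scale, even more LSBs become meaningless, which is why a dedicated reference often matters more than extra bits.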

Klaus
 

..So what is the point of using fewer bits of resolution?
In other words, why is it better to take full advantage of the ADC range?..
Resolution is a theoretical limitation on accuracy. But as Klaus pointed out, your accuracy is often more limited by other things, such as the accuracy of your reference voltage, or the presence of noise. In that case the theoretical limitation to accuracy caused by having less resolution is inconsequential. Having more resolution where other more significant inaccuracies exist gives no advantage at all.

In many cases having a signal that normally uses only a part of the ADC range can be an advantage in easily detecting exceptional over-ranging conditions. It is sometimes vital to know when a signal is outside of its normal range in fault detection. I sometimes have used a bipolar ADC to read a normally unipolar sensor, thus throwing away one bit of resolution, just to know if the sensor is somehow putting out a negative voltage. (That was with a commercial ADC board without the option of slightly shifting the signal offset.)
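A minimal sketch of that kind of range/fault check, using the 0.26-1.1 V window from this thread; the 10-bit, 3.3 V full-scale conversion and the function name are assumptions for illustration only:

```python
ADC_BITS = 10
V_REF = 3.3
LSB = V_REF / (1 << ADC_BITS)

# Normal detector window (0.26 V .. 1.1 V) expressed as raw codes.
CODE_MIN = int(0.26 / LSB)   # ~80
CODE_MAX = int(1.10 / LSB)   # ~341

def classify_reading(code: int) -> str:
    """Flag readings outside the detector's normal range as faults."""
    if code < CODE_MIN:
        return "fault: below normal range"
    if code > CODE_MAX:
        return "fault: above normal range"
    return "ok"

print(classify_reading(200))   # ok
print(classify_reading(20))    # fault: below normal range
```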
 
