Evening gents,
I'm trying to figure out the appropriate filtering requirements for the signal at the input of the ADC on the PIC32MX795F512L.
The input signal to the ADC will be obtained from the output of a Hall effect sensor. I've attached the datasheet of the Hall effect sensor we will be using for this particular application. Please note the 2.5V bias for a 0A signal.
The expected range of the current measured by the sensor is 0 to 300A, with 100A being the nominal target value. (Assume unidirectional current flow.)
I've been reading through the PIC32 ADC reference manual, and from my research so far, the ADC's total sample time is simply the sum of its acquisition time and its conversion time.
The acquisition time is
\[T_{acq} = \text{SAMC} \cdot T_{AD},\]
while the conversion time is
\[T_{conv} = 12 \cdot T_{AD},\]
where SAMC is an integer value ranging from 1 to 32 and \(T_{AD}\) is the period of the ADC clock cycle (which, in this case, comes from the internal RC oscillator).
If we assume a worst-case \(T_{AD}\) of 6μs, the only thing left that we have control over is the integer value of SAMC. So how does one go about deciding on an appropriate sample time?
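To put some rough numbers on it, the total sample time works out to \((\text{SAMC} + 12) \cdot T_{AD}\), so with my assumed worst-case \(T_{AD}\) of 6μs the two end-points of SAMC give roughly:
\[\text{SAMC} = 1:\quad T_{total} = 13 \cdot 6\,\mu\text{s} = 78\,\mu\text{s} \;\Rightarrow\; f_{s,max} \approx 12.8\,\text{kHz}\]
\[\text{SAMC} = 32:\quad T_{total} = 44 \cdot 6\,\mu\text{s} = 264\,\mu\text{s} \;\Rightarrow\; f_{s,max} \approx 3.8\,\text{kHz}\]
(These are only ballpark figures based on my worst-case assumption for \(T_{AD}\); the real numbers depend on the actual RC oscillator period.)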
I figured that once I knew the maximum sampling frequency of the ADC, I could then use Nyquist's theorem to determine an appropriate cut-off frequency for a LPF to prevent the input signal from aliasing in the ADC output.
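For example, taking the slower end-point from above (\(f_s \approx 3.8\,\text{kHz}\)), the anti-aliasing cut-off would need to satisfy
\[f_c \le \frac{f_s}{2} \approx \frac{3.8\,\text{kHz}}{2} \approx 1.9\,\text{kHz},\]
and in practice I'd want to place it well below that, since a 2nd-order filter only rolls off at 40dB/decade and won't attenuate much right at the Nyquist frequency.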
Finally, since the analog inputs on the PIC only have a safe operating range of 0 to 3.3V, the voltage signal from the Hall effect sensor must also be scaled and limited accordingly to prevent damage to the PIC.
My thought for meeting the requirements above was to use something like the following. (See the attached analog design note.)
With this sort of arrangement I could use the two-op-amp instrumentation amplifier to remove the 2.5V bias from the Hall effect sensor (by setting one of the second op-amp's inputs to 2.5V), and set the gain so that the output reaches 3.3V at the maximum sensor current of 300A. Furthermore, the single supply for op-amp A3 could be set to 3.3V, effectively clipping its output at a safe level regardless of how large a signal swing is seen at its input. Additionally, the 2nd-order low-pass filter could then be tuned according to my maximum sampling frequency to prevent aliasing.
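As a rough sketch of the gain calculation (the full-scale sensor output used here is only a placeholder; the real value is in the attached datasheet): if the sensor sits at 2.5V for 0A and swings up to, say, 4.5V at 300A, the required gain after removing the bias would be
\[G = \frac{3.3\,\text{V}}{V_{out}(300\,\text{A}) - 2.5\,\text{V}} = \frac{3.3\,\text{V}}{2.0\,\text{V}} = 1.65.\]
The same expression applies with whatever the datasheet actually gives for the 300A output.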
What do you guys think? I've tried simulating the circuit shown in the attached design note but I've had limited success.
Perhaps once we figure out a practical maximum sampling frequency for the ADC and settle on the appropriate filtering and signal conditioning, we can then move on to simulating and reworking the circuit found in the analog design note.
Thanks for all the help!