

Bit error rate measurements using ADC samples


May 20, 2024
I'm trying to build a digital decoder. I sample a digitally modulated signal with the LTC2387, an 18-bit 15 Msps SAR ADC. The carrier is at 2.1 MHz and is digitally mixed down to DC; after that, decimation is performed. The ADC has a dynamic range of 95 dBFS. I then run a BERT (bit error rate test), and for some reason errors only start occurring when the signal is at -106 dBFS. This doesn't make sense to me, since I only have 95 dBFS of dynamic range. I suspect the decimation might be improving the signal-to-noise ratio, but I'm not sure. I hope someone can provide an explanation.
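A minimal sketch of the mix-down-and-decimate idea, assuming nothing about the OP's actual signal chain: mixing to DC and averaging blocks of R samples narrows the noise bandwidth, cutting the in-band noise power by roughly 10·log10(R). The decimation factor and noise level here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the OP's code): mix a 2.1 MHz carrier band to DC
# and decimate, to show the processing gain from noise-bandwidth reduction.
fs = 15e6            # LTC2387 sample rate
fc = 2.1e6           # carrier frequency
R = 64               # assumed decimation factor
n = np.arange(1 << 16)

rng = np.random.default_rng(0)
noise = rng.normal(scale=1.0, size=n.size)     # stand-in for broadband ADC noise

# Complex mix to DC, then average blocks of R samples (a crude decimating
# low-pass filter). Averaging R independent noise samples cuts the noise
# power by R, i.e. a processing gain of 10*log10(R) ~= 18 dB for R = 64.
lo = np.exp(-2j * np.pi * fc / fs * n)
bb = noise * lo
dec = bb.reshape(-1, R).mean(axis=1)

gain_db = 10 * np.log10(np.var(bb) / np.var(dec))
```

The measured `gain_db` lands near 18 dB, which is why a signal below the wideband noise floor can still be decoded after decimation.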
A signal at -106 dBFS is very weak, so higher error rates are expected. As for the ADC dynamic range: it is an 18-bit converter, so the 95 dBFS figure is strange to me. If you mean the ADC's quantization-noise SNR, then it is 18 × 6.02 + 1.76 ≈ 110 dB, which is something else. So I believe you do not have any issue to raise.
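For reference, the ~110 dB figure quoted above comes from the standard quantization-SNR formula for an ideal N-bit ADC driven by a full-scale sine wave; a quick check:

```python
# Ideal quantization-limited SNR of an N-bit ADC with a full-scale sine:
# SNR = 6.02*N + 1.76 dB.
def ideal_snr_db(n_bits: int) -> float:
    return 6.02 * n_bits + 1.76

snr_18 = ideal_snr_db(18)   # the ~110 dB figure for an 18-bit converter
```

Real converters fall short of this ideal; the LTC2387's 95-96 dB datasheet SNR includes thermal noise and distortion.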
From what I understand, quantization noise sets the theoretical SNR. However, in practical ADCs, distortion products and quantization errors degrade the SNR below that limit.
Yes, your thoughts on the ADC are correct, but the issue you raised is about a signal at -106 dBFS.
This signal will occupy just 3 bits of your 18-bit ADC, i.e. 15 bits are left unused.
I don't understand how this relates to the issue; maybe you can clarify. But is this true?
The SNR value isn't necessarily fixed: as the input signal level decreases, the ADC's performance approaches its theoretical quantization-noise limit. So the lower the input signal level, the closer the ADC gets to its full potential.
It is not about the SNR of the ADC. You are saying the signal is at -106 dBFS. dBFS is a measure of signal power relative to full scale, and it applies to any data bus. So can you rephrase your issue and describe your setup: how did you arrive at -106 dBFS, and what do you mean by it?
I first apply a level 1 dB below full scale, then sweep the signal magnitude downward, and at each step I take a bit-error measurement to find the signal magnitude at which the bit error rate reaches 1%. I got a 1% bit error rate at a signal magnitude of -106 dBFS, which doesn't make sense to me, since the datasheet states an SNR of 96 dB.
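The sweep described above can be sketched with a simulated BPSK link in fixed Gaussian noise instead of the real hardware; the noise scale, step size, and 1% criterion below are illustrative assumptions, not the OP's actual values.

```python
import numpy as np

# Sketch of a BER-vs-level sweep: step the signal down 1 dB at a time
# against fixed noise and record the first level where BER reaches 1%.
rng = np.random.default_rng(1)
n_bits = 100_000
bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1                       # BPSK mapping: 0 -> -1, 1 -> +1
noise = rng.normal(scale=0.1, size=n_bits)   # fixed channel/ADC noise (assumed)

threshold_db = None
for level_db in range(0, -31, -1):
    amp = 10 ** (level_db / 20)
    rx = amp * symbols + noise               # attenuated signal plus noise
    ber = np.mean((rx > 0).astype(int) != bits)
    if ber >= 0.01:                          # 1% bit-error-rate criterion
        threshold_db = level_db
        break
```

With these numbers the 1% threshold lands around -13 dB relative to the starting level; the point is only to show the measurement procedure, not to reproduce the -106 dBFS result.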

The full scale of the ADC is an 8 V peak-to-peak signal.
If I understood, your question is how an error threshold of -106 dBFS can be observed when the ADC's SNR is 96 dB.
These two parameters are distinct.

1) SNR = +96 dB is the signal-to-noise ratio at full-scale amplitude (typically specified for a sine wave).

2) Signal power of -106 dBFS is a measure of power relative to full scale. For a real signal, 0 dBFS is the highest amplitude, and the value goes negative for lower amplitudes, down to -106 dBFS in your case.
Max. input = 8.192 Vp-p = 0 dBFS
1 LSB = 20·log10(2^-18) ≈ -108.4 dBFS
You observed errors at -106 dBFS, but did not define how this measurement was performed.
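Checking the arithmetic above against the levels from this thread (no new data): one LSB of an 18-bit converter sits near -108.4 dBFS, and a -106 dBFS tone has a peak amplitude below one LSB, which is consistent with the ADC's own noise dithering sub-LSB signals so that averaging can recover them.

```python
import math

# dBFS arithmetic for the 8.192 Vp-p, 18-bit case discussed above.
lsb_dbfs = 20 * math.log10(2 ** -18)        # one LSB, ~ -108.4 dBFS
lsb_volts = 8.192 / 2 ** 18                 # ~ 31.25 uV per LSB
sig_peak = (8.192 / 2) * 10 ** (-106 / 20)  # peak of a -106 dBFS tone, ~ 20.5 uV
```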

Signal bandwidth, modulation, demodulation, noise bandwidth, and ADC noise must all be defined to make any sense of your BERT process. A good BER test set can measure the slope of the error rate in dB per decade (for soft errors) with some accuracy, accounting for the various noise types, the absolute signal level, and all the non-linear effects: signal group delay distortion vs. bandwidth, data asymmetry, and the ADC's INL, DNL, ZSE, and FSE per the datasheet.
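The noise-bandwidth dependence mentioned above is one way to reconcile the two numbers in this thread: the 96 dB SNR spec applies over the full Nyquist band (fs/2 = 7.5 MHz), while the demodulator only sees the noise inside the signal bandwidth after decimation and filtering. The 10 kHz channel bandwidth below is an assumption for illustration only.

```python
import math

# Rough estimate of processing gain from noise-bandwidth reduction,
# assuming a 10 kHz channel (the actual signal BW was not stated).
fs = 15e6
signal_bw = 10e3
processing_gain_db = 10 * math.log10((fs / 2) / signal_bw)   # ~ 28.8 dB
effective_floor_dbfs = -96 - processing_gain_db              # ~ -125 dBFS
```

Under that assumption, the effective noise floor after decimation sits well below -106 dBFS, so a 1% BER threshold at that level is plausible.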

I might plot the error threshold in dBµV vs. a DC sweep over the ± ZSE range.

When the error threshold from no errors to continuous errors spans only 1 or 2 dB, that corresponds to a very high signal SNR, limited by the non-linearity of the ADC. If this is important, I might consider adding an LNA if you want to work down to 0 dBµV. You don't need 96 dB of dynamic range to measure BER; you need a test set whose effective worst-case noise is less than your signal's noise, including all the drift in DC offset, ZSE, temperature, and Vdd.

But so far it looks promising.
