nz_dsp
I'm having trouble understanding how the use of wideband DDC in a real-world system results in lower data-transfer requirements compared to simply transferring the raw ADC samples.
There is a well-known Internet-based shortwave receiver with this design: a 16-bit ADC with a 77.76 MHz sample clock, and 8 FPGA DDCs with decimation of 8 and 3/4, each yielding a sampled bandwidth of 3.645 MHz, for a total of 8 * 3.645 = 29.16 MHz (almost the whole shortwave band). Further DDC/FFT/demod happens at the other end of a gigabit Ethernet link, on the GPU card of a PC. Data from all 8 DDCs must be transferred continuously, since there can be hundreds of simultaneous Internet listeners, each tuned to an arbitrary frequency from 0 to 29 MHz.
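For reference, here is the front-end arithmetic as I understand it, as a quick Python sketch (treating each DDC's complex output rate as equal to its stated 3.645 MHz bandwidth is my own assumption, not something the author spells out):

Code:
# Front-end coverage as described (my assumptions are flagged in comments).
adc_clock = 77.76e6           # ADC sample clock, Hz
n_ddc     = 8                 # number of FPGA DDC channels
ddc_bw    = 3.645e6           # stated bandwidth per DDC channel, Hz

total_bw = n_ddc * ddc_bw     # 8 * 3.645 MHz = 29.16 MHz, nearly the whole HF band

# Assumption: each DDC outputs complex (I/Q) samples at a rate equal to its bandwidth.
aggregate_complex_rate = n_ddc * ddc_bw   # 29.16 M complex samples/s across all channels

print(f"covered bandwidth      : {total_bw / 1e6:.2f} MHz")
print(f"aggregate complex rate : {aggregate_complex_rate / 1e6:.2f} Msps "
      f"(vs. {adc_clock / 1e6:.2f} Msps real from the ADC)")

So the sample count per second drops by a factor of roughly 2.7, but each complex sample carries two components, which is where my data-rate confusion below comes from.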
Now the use of DDC to reduce the sample rate of each band, making downstream processing easier, makes sense (since the ultimate demod bandwidth is < 15 kHz). But the author also comments that DDC is used because "the raw data stream out of the ADC running at 77.76 MHz is too much for the gigabit Ethernet". That's believable, but I just don't see how using DDC helps with this problem:
Burst rate of gigabit Ethernet (no protocol/software overhead) = 1000 Mb/s
ADC: 16 bits @ 77.76 MHz = 1244.16 Mb/s
DDCs: assume 16 bits each for I and Q output (probably not enough bits), 3.645 * 8 * 2 * 16 = 933.12 Mb/s (minimum)
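For concreteness, here is the same arithmetic in a small Python sketch, sweeping a few hypothetical bits-per-component values just to see what would actually fit under the 1000 Mb/s line rate (only the 16-bit case reflects my reading of the design; the other bit depths are purely my guesses):

Code:
# Raw ADC stream vs. aggregate DDC I/Q stream, against the GbE line rate.
GIGE_LINE_RATE = 1000e6       # bits/s, burst rate with no protocol/software overhead

adc_bits  = 16
adc_clock = 77.76e6
adc_rate  = adc_bits * adc_clock          # 1244.16 Mb/s -- clearly too much for GbE

n_ddc    = 8
ddc_rate = 3.645e6                        # complex samples/s per DDC (my assumption)

print(f"raw ADC   : {adc_rate / 1e6:8.2f} Mb/s")
for bits_per_component in (24, 16, 12, 8):   # hypothetical bit depths; 16 is my baseline
    iq_rate = n_ddc * ddc_rate * 2 * bits_per_component
    verdict = "fits" if iq_rate < GIGE_LINE_RATE else "does NOT fit"
    print(f"{bits_per_component:2d}-bit I/Q: {iq_rate / 1e6:8.2f} Mb/s  ({verdict} in 1000 Mb/s)")

At 16 bits per component this gives the 933.12 Mb/s figure above: technically under the line rate, but with almost no headroom for Ethernet/IP framing, which is exactly what puzzles me.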
Even with jumbo frames, PCIe, and maybe custom code to DMA packets directly into GPU memory, I find it hard to believe 933 Mb/s can be sustained continuously. So what am I missing? Is there some factor of 2 with DDC that I'm overlooking? Could there be something else going on?
Thank you.