
DAC and ADC resolution requirement in IEEE802.11ad system


y19085

In designing the analog front end of an IEEE 802.11ad 60 GHz system, we have to meet the requirements in "Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 3: Enhancements for Very High Throughput in the 60 GHz Band", published on 28 December 2012. These requirements include the spectral mask, transmitter EVM, etc. From my understanding, I can probably derive the DAC resolution from the required transmitter EVM, but how can I decide the ADC resolution in the receiver of an IEEE 802.11ad system? The standard only lists the minimum sensitivity for the different PHYs.
How can I decide the required ADC/DAC resolution in this system?
Thanks for any answers.
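
For reference, the usual back-of-envelope way to turn a transmitter EVM target into a DAC bit count is the ideal-quantizer rule SQNR ≈ 6.02·N + 1.76 dB plus some headroom. A minimal sketch, with illustrative numbers that are assumptions and not taken from the 802.11ad spec:

```python
import math

def min_bits_for_evm(evm_percent, backoff_db=10.0, margin_db=6.0):
    """Rough bit-count estimate from an EVM floor target.

    Assumes the only impairment is quantization noise from an ideal
    uniform quantizer (SQNR ~= 6.02*N + 1.76 dB at full scale), then
    adds headroom for signal backoff (PAPR) and other TX impairments.
    These numbers are illustrative, not from the 802.11ad standard.
    """
    evm_db = 20 * math.log10(evm_percent / 100.0)   # e.g. 10% EVM -> -20 dB
    required_sqnr = -evm_db + backoff_db + margin_db
    return math.ceil((required_sqnr - 1.76) / 6.02)

# Example: suppose the EVM floor target is about -21 dB (~8.9% EVM),
# with 10 dB of backoff and 6 dB of margin for other impairments:
print(min_bits_for_evm(8.9))   # -> about 6 bits under these assumptions
```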
 


There is no easy answer to this question.

From a cost perspective, you want to minimize the bit precision, especially in the ADC.

To really determine what precision you can live with, you have to run Monte Carlo simulations over 60 GHz channel models, assuming dynamic antenna configurations for your (known) hardware and for the (unknown) hardware of the other node. You have to ensure that the sensitivity requirements are met at the receiver and that the EVM requirements are met at the transmitter. Keep in mind that there are other sources of degradation, including phase noise and power amplifier nonlinearity, that you must account for. Note also that there are several ADC parameters you can tune, including the bit precision, the peak-to-peak voltage swing, and potentially nonuniform spacing of the quantization levels within that swing.
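
A minimal sketch of one slice of such a sweep (baseband only, AWGN, ideal timing and AGC; the quantizer model and all numbers are illustrative assumptions, not a channel-model-grade simulation): quantize a 16-QAM stream at different bit depths and look at the resulting EVM.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, n_bits, full_scale):
    """Uniform mid-rise quantizer with clipping at +/- full_scale."""
    step = 2 * full_scale / (2 ** n_bits)
    xq = np.clip(x, -full_scale, full_scale - step)
    return (np.floor(xq / step) + 0.5) * step

def evm_db(ref, meas):
    """EVM relative to the reference constellation, in dB."""
    err = np.mean(np.abs(meas - ref) ** 2)
    return 10 * np.log10(err / np.mean(np.abs(ref) ** 2))

# Toy Monte Carlo: 16-QAM symbols at 25 dB channel SNR, sweep ADC bits
n_sym = 100_000
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)          # unit average power
syms = rng.choice(levels, n_sym) + 1j * rng.choice(levels, n_sym)
noise = (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)) / np.sqrt(2)
rx = syms + noise * 10 ** (-25 / 20)                      # 25 dB SNR

for bits in range(3, 9):
    fs = 3 * rx.real.std()                                # ~9.5 dB clip headroom
    q = quantize(rx.real, bits, fs) + 1j * quantize(rx.imag, bits, fs)
    print(f"{bits} bits: EVM = {evm_db(syms, q):.1f} dB")
```

In a real evaluation you would replace the AWGN line with fading/beamforming channel realizations and add the phase-noise and PA models mentioned above.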

You must also consider that the practical performance of your AGC setup in fading channels will necessitate providing some margin in your bit precision.
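
A crude way to see that cost (rule-of-thumb arithmetic, not from the spec): every ~6 dB of headroom you reserve for AGC error or fading costs about one effective bit, because the signal no longer exercises the full quantizer range.

```python
# Illustrative rule of thumb: a signal backed off from full scale by
# headroom_db loses roughly headroom_db / 6.02 effective bits.
def effective_bits(n_bits, headroom_db):
    return n_bits - headroom_db / 6.02

# e.g. a 7-bit ADC run with 12 dB of AGC/fading headroom behaves
# roughly like a 5-bit ADC at the actual signal level:
print(effective_bits(7, 12.0))   # ~= 5.0
```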

There's also the fact that different PHYs need different bit precision. The Tensorcom folks, for example, have used the low-precision advantage of the single-carrier PHY as a marketing tool.

Honestly, the topic itself could be a PhD dissertation if you really evaluated it extensively (and could find someone interested enough to fund it).

This paper has a section on the topic; you can check its references for more info: **broken link removed**
 
