mastet
I am working on a simulator for the 802.11a standard.
Over the last few weeks I have implemented some channel estimation methods, and I am now about to evaluate them. PER (packet error rate) is the performance measure to be used. From googling, it seems some people prefer to plot PER vs SNR (signal-to-noise ratio), while others prefer PER vs Eb/N0. Which of the two is more commonly used as a performance measure?
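If it helps, my understanding is that the two measures differ only by a rate/bandwidth factor, so one curve can be converted into the other. A minimal sketch of the conversion I have in mind (the function name and the 20 MHz default are just my own choices, assuming the noise power is measured over the full channel bandwidth):

import math

def snr_db_to_ebn0_db(snr_db, bit_rate_bps, noise_bw_hz=20e6):
    # Eb/N0 = SNR * (B / Rb), i.e. add 10*log10(B/Rb) in dB
    return snr_db + 10.0 * math.log10(noise_bw_hz / bit_rate_bps)

# e.g. the 6 Mbit/s 802.11a mode with 20 MHz noise bandwidth:
# snr_db_to_ebn0_db(10.0, 6e6) -> ~15.2 dB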
So far I have produced some PER vs SNR curves, where SNR was calculated as (power of transmitted signal)/(noise power). The transmitted signal consists of preamble + SIGNAL + data symbols. Is it correct to calculate the SNR simply as the power of the whole transmitted signal (preamble + SIGNAL + data symbols) divided by the noise power, or do I have to account for the fact that only part of the transmitted signal carries information bits (rough sketch below)? I am a bit confused
:|
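To make the question concrete, here are the two alternatives I am weighing, as a rough sketch (the names tx, noise, and data_start are made up; tx holds the complex baseband samples of one packet and noise the AWGN samples):

import numpy as np

def snr_whole_packet_db(tx, noise):
    # what I do now: average power of preamble+SIGNAL+data over noise power
    return 10.0 * np.log10(np.mean(np.abs(tx)**2) / np.mean(np.abs(noise)**2))

def snr_data_only_db(tx, noise, data_start):
    # the alternative: count only the data-symbol part of the packet
    data = tx[data_start:]
    return 10.0 * np.log10(np.mean(np.abs(data)**2) / np.mean(np.abs(noise)**2))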