James84
Newbie level 1
Hello,
According to Wikipedia (Quantization Error): "In the typical case, the original signal is ordinarily much larger than one LSB. When this is the case, the quantization error is not significantly correlated with the signal, and has an approximately uniform distribution."
My question is: under what conditions can I assume that the quantization error has a uniform distribution?
According to Widrow's quantization theory, the characteristic function (CF) of the input signal must be band-limited, i.e., zero outside some range. But my input signal is not a random variable, so it has no PDF.
Is there another theory or theorem that applies to deterministic input signals?
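To make the question concrete, here is a small Python sketch I put together (the step size, amplitude, and frequency are arbitrary choices of mine, not from any reference). It quantizes a deterministic sine wave whose amplitude is roughly 100 LSB and checks how close the error comes to the uniform model (variance LSB²/12, low correlation with the signal):

```python
import numpy as np

# Illustrative sketch: quantize a deterministic sine wave and inspect the error.
# All parameters below are arbitrary assumptions for illustration.
lsb = 1.0 / 256                              # quantization step (e.g., 8 bits over [-0.5, 0.5))
n = np.arange(100_000)
x = 0.4 * np.sin(2 * np.pi * 0.01237 * n)    # amplitude ~100x the LSB, non-trivial frequency

xq = lsb * np.round(x / lsb)                 # mid-tread uniform quantizer
e = xq - x                                   # quantization error, lies in [-lsb/2, lsb/2]

# If the uniform-error model holds, the error variance approaches lsb**2 / 12
# and the error is nearly uncorrelated with the signal.
print("error variance      :", e.var())
print("lsb^2 / 12          :", lsb**2 / 12)
print("corr(signal, error) :", np.corrcoef(x, e)[0, 1])
```

Empirically the numbers come out close to the uniform model for this kind of input, but I would like to know what theorem (if any) justifies this for a deterministic signal.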