g.hadzhiyanev
Hello,
I am trying to build a Least Squares estimator model for a GSM network in MATLAB, to be used for eliminating ISI with an MLSE detector. So far the model appears to be right, except that I don't know whether my results are correct. The logic of my code is the following:
I generate a GSM burst of 148 bits, divided into Data1 (a vector of 61 bits), a training sequence (26 bits), and Data2 (another vector of 61 bits). This burst is fed into a GMSK modulator. The modulated signal passes through a Rayleigh channel, then WGN is added, and finally the received signal is demodulated.
Then the Least Squares estimator formula is applied: h_LS = (A^T*A)^-1 * A^T * receivedsignal, where A is the observation matrix assembled from the known training sequence (in some papers denoted M or H). After I run the simulation, the resulting vector h_LS gives the coefficients which, as far as I understood from the literature, should represent the values needed to find the delays of the different paths (the longer the channel memory, the more paths can be resolved). Sadly, I don't know whether they are right or wrong. If they are wrong, I guess the reasons for the faulty results are:
1. The rayleighchan model adds something more than just a delay and attenuation, which the estimator cannot account for,
or
2. The bursts should consist of -1's and 1's rather than 0's and 1's, but I couldn't generate such a random matrix,
or
3. The third possible issue is marked in the code where the h_LS formula is applied (the choice of which received samples correspond to the training sequence).
If the coefficients are not wrong, what do they actually mean? Is this code correct or not? If not, where are the mistakes?
Please someone help.
Thanks.
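A quick, self-contained sanity check (a minimal sketch, not part of my model; the 6-tap channel values below are arbitrary) is to build A from ±1 training symbols and a channel chosen by hand, then verify that the LS formula recovers the known taps:

```matlab
% Sanity check for the LS estimator against a known channel (illustrative values)
training = 2*randi([0 1],26,1) - 1;          % +/-1 training symbols (see point 2)
h_true   = [0.9; 0.4; 0.2; 0.1; 0.05; 0.02]; % arbitrary 6-tap test channel
A = zeros(21,6);
for n = 1:21
    A(n,:) = training(n:n+5);                % 6 consecutive symbols per row
end
rx   = A*h_true + 0.01*randn(21,1);          % channel output plus mild noise
h_LS = A \ rx;                               % same as inv(A'*A)*A'*rx
disp([h_true h_LS])                          % the two columns should nearly match
```

If the two columns agree, the formula itself is fine, and any faulty results in the full simulation come from the alignment of the received training samples or from the Rayleigh channel doing more than per-tap attenuation (time-varying fading).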
Code:
AWGN = comm.AWGNChannel('NoiseMethod','Signal to noise ratio (SNR)','SNR', 7);
Multipath = rayleighchan(1/100000, 0.0001, [0 1.4e-4 1e-4 1.8e-4 2.3e-4 3.5e-4], [0 0 0 0 0 0]); % Ts = 2/Bandwidth (200 kHz for GSM); Doppler 0.0001 Hz, i.e. the receiver is practically static. (rayleighchan is deprecated in newer MATLAB releases in favor of comm.RayleighChannel.)
dH = comm.GMSKDemodulator('BitOutput', true);
H = comm.GMSKModulator('BitInput', true);
hError = comm.ErrorRate('ReceiveDelay', dH.TracebackDepth);
Multipath.StoreHistory = 1;
Multipath.ResetBeforeFiltering = true;
training=randi([0 1],26,1); %Generating Training sequence
%---------------Assembling Matrix for LS Estimator--------------------%
A=zeros(21,6);
for n=1:21
A(n,:)=training(n:n+5);
end
%----------------------GSM NETWORK SIMULATION-------------------------%
for counter=1:1
%-----------------------Assembling GSM Burst--------------------------%
data1 = randi([0 1],61, 1); %Generating pre-training data
data2 = randi([0 1],61, 1); %Generating post-training data
Burstcol = [data1; training; data2]; %Assembling Burst as a single column vector
%-----------------------Transmitting Signal---------------------------%
modSignal = step(H, Burstcol); %Signal through GMSK
multichannel = filter(Multipath,modSignal); %Modulated Signal through multipath
noisySignal = step(AWGN, multichannel); %Distorted through noise
received = step(dH,noisySignal); %received Signal
errorStats = step(hError, Burstcol, received); %Calculates BER
end
vector = received(61:81); %Supposedly the received training sequence. Should I instead take bits 77:97, since the demodulator delays the data by 16 bits (its TracebackDepth)?
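% --- Illustrative check on the alignment question above (assumes the training
% --- bits survive the channel well enough for the correlation to peak):
% cross-correlate the demodulated burst with the training sequence, both mapped
% to +/-1, to locate where the training actually starts in 'received'.
[c, lags] = xcorr(2*double(received)-1, 2*double(training)-1);
[~, imax] = max(abs(c));
trainingStart = lags(imax) + 1 %estimated start index of the training bits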
hLS = A \ vector %least-squares solution; equivalent to inv(A'*A)*A'*vector but numerically more stable
minError = transpose(vector-A*hLS)*(vector-A*hLS)
%-------------------------Plotting Results---------------------------%
h = scatterplot(modSignal,3,58,'kx'); hold on;
scatterplot(multichannel,2,58,'r.',h);
scatterplot(noisySignal,2,58,'g.',h);
legend('Ideally modulated Signal','Signal after Multipath','Signal with AWGN');
hold off;
plot(Multipath)
fprintf('Error rate = %f\nNumber of errors = %d\n', ...
errorStats(1), errorStats(2))