[SOLVED] Correlated fading (time-invariant channel) in Matlab

KostasN

Hello to everyone,

I am currently working on a MATLAB project concerning the implementation of different adaptive modulation and coding (AMC) algorithms for a mobile TDMA communication system, based on an existing MATLAB simulation tool developed in cooperation between my university and a company.
The project has been running for about a year now. The following has already been implemented:

  • Source coder and decoder
  • Interleaver and deinterleaver
  • Modulator and demodulator for different modulation schemes
  • Transmit and receive filters (RRC)
  • Channel model (partially)
  • Equalizer

The channel model is based on Wide Sense Stationary Uncorrelated Scattering (WSSUS) channel models. For the simulation of a more realistic channel, predefined power delay profiles are used, based on measurements taken jointly by my university and the company. A small illustration of how such a model can draw a CIR from a power delay profile is sketched below.
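
For illustration only, here is a minimal sketch of drawing one CIR realization from a power delay profile under the WSSUS assumption (independent Rayleigh-fading taps). The PDP values and variable names are placeholders I made up, not the measured profiles used in the tool:

```matlab
% Toy sketch: one CIR realization from a power delay profile (WSSUS assumption)
pdp_dB = [0 -3 -6 -9];                         % relative tap powers in dB (placeholder values)
pdp    = 10.^(pdp_dB/10);
pdp    = pdp / sum(pdp);                       % normalize total power to 1
nTaps  = numel(pdp);

% Zero-mean complex Gaussian taps with per-tap variance given by the PDP:
cir = sqrt(pdp/2) .* (randn(1, nTaps) + 1j*randn(1, nTaps));
```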
My tasks are:

  • Implement path loss and shadowing
  • Improve channel model
  • Implement AMC algorithms
  • Evaluate their performance according to some criteria (e.g. BER vs. SNR)

I have already implemented path loss and shadowing, and I also know how to implement the AMC algorithms and compare them. My problem lies in improving the channel model. To be more specific:

  • The model takes into account both the delay spread and the Doppler spread. I have calculated them based on formulas given in several papers.
  • A vector named t is defined. This vector goes from 0 to Tburst in steps of Tc (the coherence time). Each burst is a timeslot (TS) with a duration of 60 ms. The TDMA frame has 3 TS and thus a duration of 180 ms. We transmit only in the 1st TS, so the other two TS (120 ms) carry no data.
  • An m x n channel matrix h is defined. Remember that the CIR is h(t,τ), where t is the absolute time and τ is the delay. Hence, the number of rows (m) corresponds to the number of channel impulse responses (CIR) and the number of columns (n) corresponds to the number of taps that have to be calculated for each CIR.
  • The number of CIRs (rows of the h matrix) is based on the coherence time (i.e., it is the ratio Tburst/Tc rounded up to the next integer). This is done in order to cover both cases: a time-variant and a time-invariant (i.e. correlated paths) channel. A sketch of this setup is given right after this list.
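
As I understand the current code, the per-burst setup looks roughly like this (a minimal sketch; Tc, the tap count and all other values are placeholders, not the real parameters of the tool):

```matlab
% Rough sketch of the current per-burst setup (all values are placeholders):
Tburst = 60e-3;            % burst (timeslot) duration: 60 ms
Tc     = 5e-3;             % coherence time (example value; from the Doppler spread in the real tool)
n      = 8;                % taps per CIR (example value; from the delay spread / PDP in the real tool)

t = 0:Tc:Tburst;           % time vector: 0 to Tburst in steps of Tc
m = ceil(Tburst/Tc);       % number of CIRs per burst (Tburst/Tc rounded up)
h = zeros(m, n);           % CIR matrix h(t,tau): one CIR per row, one tap per column

for k = 1:m
    % ... compute the number of taps / delays tau for this CIR ...
    % h(k, :) = ... k-th CIR drawn from the WSSUS model ...
end
```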

Now, here is my problem: in the current implementation, the h matrix is initially defined as an m x n zero matrix. Then there is a for loop inside which the number of taps (τ) is calculated and then each CIR is calculated. What I have to do is define a maximum time for the t vector (let us say 100 s) and then modify the code so that the h matrix is calculated only once; after that, only the rows of the matrix for which the channel has a different CIR for each packet (i.e. burst) are selected, and a new matrix h' is defined from which the CIR is calculated.

For example: with 100 s, we have 100 s / 120 ms = 833.33, rounded up to 834 rows. If the channel changes twice per burst, we have to pick two rows from the whole matrix and put them into a new matrix.
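
Roughly, the restructuring I have in mind looks like the following sketch (all variable names, the 120 ms row spacing and the row-indexing rule are only illustrative assumptions, not the actual code):

```matlab
% Rough sketch: compute h only once over the whole simulation time,
% then slice out the rows belonging to each burst.
Tmax        = 100;               % maximum simulation time: 100 s
Tstep       = 120e-3;            % time between consecutive CIRs (120 ms, as in the example above)
n           = 8;                 % taps per CIR (example value)
cirPerBurst = 2;                 % CIRs needed per burst (channel changes twice per burst)

nRows = ceil(Tmax/Tstep);        % 100 s / 120 ms = 833.33 -> 834 rows
hAll  = zeros(nRows, n);
for k = 1:nRows
    % hAll(k, :) = ... k-th CIR from the WSSUS model, computed only once ...
end

% For each transmitted burst, select the rows that belong to it and form h':
b      = 1;                                     % example burst index
rows   = (b-1)*cirPerBurst + (1:cirPerBurst);   % one possible mapping: rows 1 and 2 for burst 1
hPrime = hAll(rows, :);                         % new matrix h' used to compute the CIR of this burst
```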

However, I don't know how to do this properly in the existing implementation. Do you have any idea?
 
