
Decoding in Bit Interleaved Coded Modulation (BICM)

Status: Not open for further replies.

ledlong

Newbie level 2 · Joined Jun 3, 2016
Dear Everyone,

I am currently working with a BICM (Bit-Interleaved Coded Modulation) model and have a question about decoding in BICM. I would appreciate any suggestions.

I consider the following scenario:

**broken link removed**

in which:

At the transmitter:
  • The encoder uses a turbo code to encode the information bit sequence into the coded bit sequence c
  • The coded bits are passed through the interleaver block
  • The interleaved coded bit sequence is mapped to symbols x in the modulator block. 16-QAM is assumed.
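The transmitter steps above can be sketched as follows. This is a minimal illustration, not your actual model: random bits stand in for the turbo encoder output, the interleaver is a pseudo-random permutation (a real system uses a fixed, known one), and a common Gray labeling is assumed for the 16-QAM mapper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the turbo encoder output: coded bit sequence c (length multiple of 4).
c = rng.integers(0, 2, size=400)

# Bit interleaver: a pseudo-random permutation here (fixed and known in practice).
perm = rng.permutation(c.size)
c_int = c[perm]

# Gray-mapped 16-QAM: 4 bits -> 1 complex symbol, 2 bits per dimension.
# Per-dimension Gray map: 00 -> -3, 01 -> -1, 11 -> +1, 10 -> +3.
gray2 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
groups = c_int.reshape(-1, 4)
I = np.array([gray2[(b[0], b[1])] for b in groups])
Q = np.array([gray2[(b[2], b[3])] for b in groups])
x = (I + 1j * Q) / np.sqrt(10)  # normalize to unit average symbol energy
```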

Channel: a memoryless channel with additive white Gaussian noise of zero mean and variance N0
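The channel is then simply y = x + n with n complex Gaussian of total variance N0 (N0/2 per dimension). A small sketch with placeholder QPSK-like symbols (the symbol values here are only illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N0 = 0.2  # total noise variance across both dimensions

# Placeholder unit-energy symbols standing in for the modulator output x.
x = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)

# Memoryless AWGN: independent noise per symbol, variance N0/2 per real dimension.
n = np.sqrt(N0 / 2) * (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size))
y = x + n
```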

Receiver: decoding is performed in two stages:

1. The received signal y is soft-demapped using the MAP (maximum a posteriori) algorithm. The output is a log-likelihood ratio (LLR) for each coded bit of c_hat. The LLRs are then de-interleaved in the de-interleaver block.

2. The MAP algorithm is used again to decode the turbo code.
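Stage 1 can be sketched as below. This is a max-log approximation of the exact MAP demapper (the exact version replaces each max with a log-sum-exp over the same symbol subsets); the function name and the Gray labeling are assumptions, not from your model. The de-interleaver afterwards just applies the inverse permutation to the LLR stream.

```python
import numpy as np

def maxlog_llr_16qam(y, N0):
    """Max-log soft demapper for Gray-mapped, unit-energy 16-QAM.

    Returns one LLR per coded bit, with the convention
    LLR = log P(b=0 | y) - log P(b=1 | y) (max-log approximation).
    """
    # Enumerate the 16 constellation points with their 4-bit labels.
    gray2 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
    labels = np.array([[b0, b1, b2, b3]
                       for b0 in (0, 1) for b1 in (0, 1)
                       for b2 in (0, 1) for b3 in (0, 1)])
    const = np.array([(gray2[(b0, b1)] + 1j * gray2[(b2, b3)]) / np.sqrt(10)
                      for b0, b1, b2, b3 in labels])

    # Squared distance from every received symbol to every constellation point.
    d2 = np.abs(y[:, None] - const[None, :]) ** 2   # shape (n_symbols, 16)
    metric = -d2 / N0                               # log-likelihood up to a constant

    llrs = np.empty((y.size, 4))
    for i in range(4):
        m0 = metric[:, labels[:, i] == 0].max(axis=1)  # best symbol with bit i = 0
        m1 = metric[:, labels[:, i] == 1].max(axis=1)  # best symbol with bit i = 1
        llrs[:, i] = m0 - m1
    return llrs.ravel()
```

De-interleaving the resulting LLR vector with the interleaver permutation `perm` is then `llr_deint = np.empty_like(llr); llr_deint[perm] = llr`.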

My question is:

In the first stage, demapping is easily performed with the MAP algorithm since the noise variance N0 is known, but this is not the case for the second stage, because its input is the soft output of the first stage rather than the channel observation. How can we tackle the second stage with the MAP algorithm?

Many thanks in advance,
 

