For a rate 1/2 binary convolutional code, the demodulator delivers two code symbols at a time to the decoder. For hard-decision (2-level) decoding, each pair of received code symbols can be depicted on a plane as one of the corners of a square, as shown in Figure 7.22a. The corners are labeled with the binary numbers (0,0), (0,1), (1,0), and (1,1), representing the four possible hard-decision values that the two code symbols might have. For 8-level soft-decision decoding, each pair of code symbols can similarly be represented on an equally spaced 8-level by 8-level plane, as a point from the set of 64 points shown in Figure 7.22b. In this soft-decision case, the demodulator no longer delivers firm decisions; it delivers quantized noisy signals (soft decisions). The primary difference between hard-decision and soft-decision Viterbi decoding is that the soft-decision algorithm cannot use a Hamming distance metric, because that metric lacks the needed resolution. A distance metric with the needed resolution is Euclidean distance, and to facilitate its use, the binary numbers 1 and 0 are transformed to the octal numbers 7 and 0, respectively. This can be seen in Figure 7.22c, where the corners of the square have been relabeled accordingly; this allows us to use a pair of integers, each in the range 0 to 7, to describe any point in the 64-point set. Also shown in Figure 7.22c is the point 5,4, representing an example of a pair of noisy code-symbol values that might stem from the demodulator. Imagine that the square in Figure 7.22c has coordinates x and y.
Figure 7.22 (a) Hard-decision plane (b) 8-level by 8-level soft-decision plane (c) Example of soft code symbols (d) Encoding trellis section (e) Decoding trellis section.
Then, what is the Euclidean distance between the noisy point 5,4 and the noiseless point 0,0? It is

d = sqrt((5 - 0)^2 + (4 - 0)^2) = sqrt(41) ≈ 6.4

Similarly, what is the Euclidean distance between the noisy point 5,4 and the noiseless point 7,7? It is

d = sqrt((5 - 7)^2 + (4 - 7)^2) = sqrt(13) ≈ 3.6
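As a quick check of this arithmetic, a short Python sketch (the helper name euclidean_distance is ours, not from the text) computes both distances:

```python
import math

def euclidean_distance(received, branch_word):
    """Euclidean distance between a received soft-symbol pair and a branch word."""
    return math.sqrt(sum((r - b) ** 2 for r, b in zip(received, branch_word)))

received = (5, 4)  # noisy 8-level soft decisions from the demodulator

print(euclidean_distance(received, (0, 0)))  # sqrt(41), about 6.4
print(euclidean_distance(received, (7, 7)))  # sqrt(13), about 3.6
```

Note that the noisy point 5,4 is closer to 7,7 than to 0,0, which is what the decoder will exploit below.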
Soft-decision Viterbi decoding, for the most part, proceeds in the same way as hard-decision decoding (as described in Sections 7.3.4 and 7.3.5). The only difference is that Hamming distances are not used. Consider how soft-decision decoding is performed with the use of Euclidean distances. Figure 7.22d shows the first section of an encoding trellis, originally presented in Figure 7.7, with the branch words transformed from binary to octal. Suppose that a pair of soft-decision code symbols with values 5,4 arrives at the decoder during the first transition interval. Figure 7.22e shows the first section of a decoding trellis. The metric

d = sqrt((5 - 0)^2 + (4 - 0)^2) = sqrt(41) ≈ 6.4

representing the Euclidean distance between the arriving 5,4 and the 0,0 branch word, is placed on the solid line. Similarly, the metric

d = sqrt((5 - 7)^2 + (4 - 7)^2) = sqrt(13) ≈ 3.6

representing the Euclidean distance between the arriving 5,4 and the 7,7 branch word, is placed on the dashed line. The rest of the task, pruning the trellis in search of a common stem, proceeds in the same way as in hard-decision decoding. Note that in a real convolutional decoding chip, the Euclidean distance is not actually used as the soft-decision metric; instead, a monotonic metric that has similar properties and is easier to implement is used. An example of such a metric is the Euclidean distance-squared, in which case the square-root operation shown above is eliminated. Further, if the binary code symbols are represented with bipolar values, then the inner-product metric in Equation (7.9) can be used. With such a metric, we would seek maximum correlation rather than minimum distance.
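The distance-squared and inner-product alternatives just mentioned can be sketched in Python. The branch-selection decision is unchanged, since both metrics are monotonic in the Euclidean distance; the 0-to-7 octal to bipolar mapping shown here is one reasonable choice for illustration, not a mapping prescribed by the text:

```python
def squared_distance(received, branch_word):
    # Euclidean distance-squared: no square root, same ordering of branches
    return sum((r - b) ** 2 for r, b in zip(received, branch_word))

def correlation(received, branch_word):
    # Inner-product metric for bipolar values: maximize instead of minimize
    return sum(r * b for r, b in zip(received, branch_word))

received = (5, 4)
print(squared_distance(received, (0, 0)))  # 41
print(squared_distance(received, (7, 7)))  # 13 -> smaller, so the 7,7 branch wins

def to_bipolar(level):
    # Illustrative mapping of the 8 octal levels onto [-1, +1]: 0 -> -1, 7 -> +1
    return (2 * level - 7) / 7.0

rx_bipolar = tuple(to_bipolar(v) for v in received)
print(correlation(rx_bipolar, (-1, -1)))  # branch word 0,0 in bipolar form
print(correlation(rx_bipolar, (+1, +1)))  # branch word 7,7: larger -> same winner
```

Both metrics pick the 7,7 branch, matching the minimum-Euclidean-distance decision above.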
For example, you are using BPSK modulation, and the bits after the decoder are between [-1, 1], i.e., -0.593, -0.29, etc.

Perhaps you had in mind "the bits after the demodulator are between [-1, 1], i.e., -0.593, -0.29, etc."
When hard decision is implemented, the output of the demodulator is quantized to two levels, 0 or 1, and fed into the decoder. That is, the demodulator makes a threshold decision on each noisy sample (-0.593, -0.29, etc.) and then feeds the decoder with its decision.
When soft decision is implemented, the demodulator sends the decoder an n-bit word (depending on the quantization level), which is equivalent to sending the decoder a measure of confidence along with the code-symbol decision.
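A minimal sketch of the two quantizers, assuming demodulator (matched-filter) outputs normalized to [-1, +1] and a uniform 8-level (3-bit) quantizer; the function names and the mapping of positive samples to binary 1 are illustrative assumptions:

```python
def hard_decision(sample):
    # 2-level quantizer: a simple threshold at zero
    # (assumes the convention that positive samples map to binary 1)
    return 1 if sample >= 0 else 0

def soft_decision_3bit(sample):
    # 8-level uniform quantizer over [-1, +1]: returns an integer 0..7,
    # which carries a confidence measure along with the symbol decision
    level = int((sample + 1.0) / 2.0 * 8)  # scale [-1, +1] onto 0..8
    return min(max(level, 0), 7)           # clamp to the 8 valid levels

for s in (-0.593, -0.29, 0.05, 0.9):
    print(s, hard_decision(s), soft_decision_3bit(s))
```

For the sample -0.593 the hard decision is 0, while the soft decision is level 1, i.e., "probably 0, with moderate confidence"; a strongly negative sample would instead yield level 0, "0 with high confidence".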