In Part I of this blog, we saw that we can construct two partially correlated signals, x_1(t) and x_2(t), as:

\[
\; \; \; \; \; \; \; \; \;\; \;\; \; x_1(t) = \sqrt{P_1}s_1(t) \\
\; \; \; \; \; \; \; \; \;\; \;\; \; x_2(t) = \sqrt{P_2}\left(\rho^* s_1(t) + \sqrt{1-|\rho|^2}\, s_2(t)\right)
\]

where P_1 and P_2 are the desired signal powers and *ρ* is their desired cross-correlation. s_1(t) and s_2(t) are uncorrelated signals with zero mean and unit power.
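Before generalising, here is a minimal Matlab sketch of this two-signal construction (the example powers, the value of ρ, and the variable names are my own, not from Part I). It verifies that the sample cross-correlation lands close to the chosen ρ:

```
Nsamps = 100000;
P1 = 1.5;  P2 = 0.8;           % desired signal powers
rho = 0.4 + 1j*0.2;            % desired cross-correlation

% Uncorrelated, zero-mean, unit-power complex Gaussian signals
s1 = (randn(1, Nsamps) + 1j*randn(1, Nsamps)) / sqrt(2);
s2 = (randn(1, Nsamps) + 1j*randn(1, Nsamps)) / sqrt(2);

% The two-signal construction from Part I
x1 = sqrt(P1) * s1;
x2 = sqrt(P2) * (conj(rho)*s1 + sqrt(1 - abs(rho)^2)*s2);

% Sample cross-correlation; should be close to rho
rho_samp = (x1 * x2') / (Nsamps * sqrt(P1*P2))
```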

We will now generalise this two-signal concept to N signals.

**Generalising to N Correlated Signals**

By inspecting our expression for x_2(t), we can see that (loosely speaking) it has been designed in two parts: the s_1(t) part, which correlates with x_1(t), and the s_2(t) part, which is uncorrelated with x_1(t).

So, if we wanted to design a third signal, x_3(t), then we would need a third uncorrelated signal, s_3(t), in order to freely choose its correlations with x_1(t) and x_2(t). More generally, we expect the *n*th signal, x_n(t), to be a function of *n* uncorrelated signals, s_1(t), s_2(t), ..., s_n(t). We can write this idea in matrix form:

\[

\left[

\begin{array}{c}

x_{1}(t) \\

x_{2}(t) \\

x_{3}(t) \\

\vdots \\

x_{N}(t)

\end{array}

\right] =\left[

\begin{array}{ccccc}

a_{1,1} & & & & 0 \\

a_{2,1} & a_{2,2} & & & \\

a_{3,1} & a_{3,2} & \ddots & & \\

\vdots & \vdots & \ddots & \ddots & \\

a_{N,1} & a_{N,2} & \cdots & a_{N,(N-1)} & a_{N,N}

\end{array}

\right] \left[

\begin{array}{c}

s_{1}(t) \\

s_{2}(t) \\

s_{3}(t) \\

\vdots \\

s_{N}(t)

\end{array}

\right]

\]


or, defining a variable for each bracketed term:

\[

\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \underline{x}(t) =\mathbf{A}\underline{s}(t) \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \;(17)

\]


where **A** is a lower triangular matrix.

So, for example, in the 2-signal case (i.e. N = 2), we showed in Part I of this blog that:

\[

\left[

\begin{array}{c}

x_{1}(t) \\

x_{2}(t)

\end{array}

\right]

=\left[

\begin{array}{cc}

\sqrt{P_{1}} & 0 \\

\sqrt{P_{2}}\rho^* \; & \sqrt{P_{2}(1-|\rho| ^{2})}

\end{array}

\right] \left[

\begin{array}{c}

s_{1}(t) \\

s_{2}(t)

\end{array}

\right]

\]

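As a quick sanity check, multiplying this 2 x 2 matrix by its conjugate transpose recovers exactly the second-order statistics we asked for:

\[
\; \; \; \; \left[
\begin{array}{cc}
\sqrt{P_{1}} & 0 \\
\sqrt{P_{2}}\rho^* \; & \sqrt{P_{2}(1-|\rho| ^{2})}
\end{array}
\right] \left[
\begin{array}{cc}
\sqrt{P_{1}} & 0 \\
\sqrt{P_{2}}\rho^* \; & \sqrt{P_{2}(1-|\rho| ^{2})}
\end{array}
\right]^H =\left[
\begin{array}{cc}
P_{1} & \sqrt{P_{1}P_{2}}\,\rho \\
\sqrt{P_{1}P_{2}}\,\rho^* & P_{2}
\end{array}
\right]
\]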

But how do we find the values of the lower triangular matrix, **A**, in general? The answer is the **Cholesky decomposition of the desired theoretical covariance matrix**.

To clarify, let's say we want our signals to have covariance matrix **R**_xx:

\[

\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \;\mathbf{R}_{xx} = \mathcal{E}\{\underline{x}(t)\underline{x}^H(t)\} \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \;(18)

\]


where (.)^H = ((.)^*)^T denotes the conjugate transpose. Then, since all covariance matrices are positive semidefinite, there must exist a lower triangular matrix, **L**, such that:

\[

\mathbf{R}_{xx} = \mathbf{L}\mathbf{L}^H

\]


This factorisation is precisely the Cholesky decomposition mentioned above. We can rewrite it as:

\[

\mathbf{R}_{xx} = \mathbf{L}\mathbf{I}\mathbf{L}^H\\

\; \; \; \; \; \; = \mathbf{L}\mathcal{E}\{\underline{s}(t)\underline{s}^H(t)\}\mathbf{L}^H

\]


where **I** is the (N x N) identity matrix. The substitution holds because the elements of the vector s(t) are uncorrelated with zero mean and unit power, so that E{s(t)s^H(t)} = **I**. So, by comparing this to Equation (18), we can define **A** = **L** in Equation (17), so that:

\[

\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \underline{x}(t) =\mathbf{L}\underline{s}(t) \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \;(19)

\]

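To confirm this choice, we can substitute Equation (19) into Equation (18) and check that the desired covariance matrix comes back out:

\[
\; \; \; \; \mathcal{E}\{\underline{x}(t)\underline{x}^H(t)\} = \mathbf{L}\,\mathcal{E}\{\underline{s}(t)\underline{s}^H(t)\}\,\mathbf{L}^H = \mathbf{L}\mathbf{I}\mathbf{L}^H = \mathbf{R}_{xx}
\]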

There are several methods for obtaining **L** in practice. I will first give a full Matlab example, before describing how to compute the Cholesky decomposition of a complex matrix using any programming language you want.

**Matlab Example**

Code (Matlab):

```
close all; clear all; clc;

% Number of samples to generate
Nsamps = 100000;

% Number of signals
N = 4;

% Set the required second order statistics
P1 = 2.3;
P2 = 0.75;
P3 = 3.4;
P4 = 1.23;
rho12 = 0.2 - 1j*0.3;
rho13 = -0.6 + 1j*0.1;
rho14 = -1j*0.4;
rho23 = 0.1 + 1j*0.1;
rho24 = 0.5;
rho34 = -0.3 - 1j*0.1;

% 4x4 (theoretical) covariance matrix
Rxx = [P1,                      sqrt(P1*P2)*rho12,       sqrt(P1*P3)*rho13,       sqrt(P1*P4)*rho14;
       sqrt(P2*P1)*conj(rho12), P2,                      sqrt(P2*P3)*rho23,       sqrt(P2*P4)*rho24;
       sqrt(P3*P1)*conj(rho13), sqrt(P3*P2)*conj(rho23), P3,                      sqrt(P3*P4)*rho34;
       sqrt(P4*P1)*conj(rho14), sqrt(P4*P2)*conj(rho24), sqrt(P4*P3)*conj(rho34), P4]

% Generate zero-mean, unit-power, uncorrelated complex Gaussian random signals
s = (randn(N, Nsamps) + 1j*randn(N, Nsamps)) / sqrt(2);

% Create partially correlated signals
L = chol(Rxx, 'lower');
X = L*s;

% 4x4 (practical) sample covariance matrix
Rxx_samp = X*X' / Nsamps
```

The above code will produce an output similar to the following:

Code:

```
Rxx =
2.3000 + 0.0000i 0.2627 - 0.3940i -1.6779 + 0.2796i 0.0000 - 0.6728i
0.2627 + 0.3940i 0.7500 + 0.0000i 0.1597 + 0.1597i 0.4802 + 0.0000i
-1.6779 - 0.2796i 0.1597 - 0.1597i 3.4000 + 0.0000i -0.6135 - 0.2045i
0.0000 + 0.6728i 0.4802 + 0.0000i -0.6135 + 0.2045i 1.2300 + 0.0000i
Rxx_samp =
2.2921 + 0.0000i 0.2602 - 0.3927i -1.6626 + 0.2777i -0.0055 - 0.6739i
0.2602 + 0.3927i 0.7513 + 0.0000i 0.1649 + 0.1632i 0.4805 - 0.0006i
-1.6626 - 0.2777i 0.1649 - 0.1632i 3.3858 + 0.0000i -0.6106 - 0.2066i
-0.0055 + 0.6739i 0.4805 + 0.0006i -0.6106 + 0.2066i 1.2347 + 0.0000i
```

As in the 2-signal case, note that we never expect the sample covariance matrix to exactly match the theoretical covariance matrix. Sample statistics are themselves random variables, having their own probability distributions.
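If you would like to see this convergence in action, here is a minimal sketch (reusing the Rxx and L computed in the example above; the loop bounds are arbitrary choices of mine) that measures how the sample covariance error shrinks as the number of samples grows:

```
% Frobenius-norm error between sample and theoretical covariance
% for increasing sample counts (Rxx and L as defined above)
for Nsamps = [100, 10000, 1000000]
    s = (randn(4, Nsamps) + 1j*randn(4, Nsamps)) / sqrt(2);
    X = L*s;
    Rxx_samp = X*X' / Nsamps;
    fprintf('Nsamps = %7d: error = %.4f\n', Nsamps, norm(Rxx_samp - Rxx, 'fro'));
end
```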

**Other Programming Languages**

Free source code is provided in 37 languages for computing the Cholesky decomposition of real matrices at this Rosetta Code page: [link].

**Please note** that many of these implementations need a very minor modification to work with complex-valued matrices.

Specifically, one term needs conjugating. Using the C code example as pseudocode, we need to conceptually modify the following:

Code (C):

```
// Replace this operation...
s += L[i * n + k] * L[j * n + k];

// ... with this operation
s += L[i * n + k] * conj(L[j * n + k]);
```
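To put that one-line change in context, here is a compact Matlab sketch of the full Cholesky-Banachiewicz recursion for complex matrices (the function name chol_lower_demo is my own; in practice Matlab users would simply call chol, but this shows exactly where the conjugate appears):

```
function L = chol_lower_demo(A)
% Lower-triangular Cholesky factor of a Hermitian positive
% definite matrix A, such that A = L*L' (valid for complex A)
n = size(A, 1);
L = complex(zeros(n));
for i = 1:n
    for j = 1:i
        % Inner product against the conjugated row: the ' operator
        % performs the conjugation highlighted above
        s = L(i, 1:j-1) * L(j, 1:j-1)';
        if i == j
            L(i, j) = sqrt(real(A(i, i) - s));  % diagonal entries are real
        else
            L(i, j) = (A(i, j) - s) / L(j, j);
        end
    end
end
end
```

Running L = chol_lower_demo(Rxx) on the covariance matrix from the earlier example should reproduce chol(Rxx, 'lower') to within numerical precision.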