__Note__: *This blog is split into two parts because edaboard limits the number of pictures and equations per blog. The second part can be found here: [link].*

**Introduction**

It is often useful to be able to generate (real or complex-valued) signals with specific **cross-correlations**. In the simplest case, this means we want to produce two signals with a specified cross-correlation (and specified powers). Or, in the general case, it means we want to specify an (N x N) *covariance matrix* and generate N signals which will produce this covariance matrix.

As a particularly useful example of a type of "signal", we will consider Gaussian (a.k.a. Normally distributed) random variables. However, the ideas presented here may be applied for different types of signals.

In this blog, I will explain in detail how to generate partially correlated signals using common mathematical operations. I will show that discrete correlated signals are __extremely easy to generate in software__, using the **Cholesky Decomposition** (for which free source code is provided in 37 programming languages at this Rosetta Code page). I will show that in Matlab (or GNU Octave) we basically only need these **two lines of code**:
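A sketch of what those two lines can look like (the names `C` and `X` are illustrative; `Rxx`, `N` and `Nsamps` are as described below):

```
C = chol(Rxx, 'lower');   % lower-triangular Cholesky factor, so Rxx = C*C'
X = C * randn(N, Nsamps); % rows of X are the N correlated signals
```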

where Rxx is the (N x N) desired covariance matrix, N is the number of signals and Nsamps is the number of discrete samples to generate.

I will provide full Matlab source codes for the ideas in this blog. I will also describe the ideas in detail, so you can implement them in any way you want.

If you're not interested in the introductory discussion and derivations for the simple 2-signal case, you can skip straight to Part II of this blog, which contains the main results.

**The Simple 2-Signal Case**

It is easiest to start by discussing the simple 2-signal case, before generalising to N signals. Let us denote the signals we are designing as x_{1}(t) and x_{2}(t) and assume them to be complex-valued (since it is trivial to then convert the result for real-valued signals if required). To keep the discussion simple, we will assume we are interested in zero-mean signals:

\[\text{Mean of } x_1(t): \quad \mu_1 = \mathcal{E}\{x_1(t)\} = 0 \qquad\qquad (1)\\
\text{Mean of } x_2(t): \quad \mu_2 = \mathcal{E}\{x_2(t)\} = 0 \qquad\qquad (2)\]

where curly E{...} denotes the expectation operator. So, our task is to design x_{1}(t) and x_{2}(t) to have the following second order statistics:

\[\text{Power of } x_1(t): \quad P_1 = \mathcal{E}\{x_1(t)x_1^*(t)\} \qquad\qquad (3)\\
\text{Power of } x_2(t): \quad P_2 = \mathcal{E}\{x_2(t)x_2^*(t)\} \qquad\qquad (4)\\
\text{Cross-correlation:} \quad \rho = \frac{\mathcal{E}\{x_1(t)x_2^*(t)\}}{\sqrt{P_1P_2}} \qquad\qquad (5)\]

where * denotes the complex conjugate.

__NOTE__: the above expression for *ρ* is simplified because we are designing zero-mean signals. If the means are non-zero, they must be subtracted as shown here: [link].
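For concreteness, the statistics in Equations (3)-(5) can be estimated from sampled data as follows (a minimal sketch, assuming `x1` and `x2` are row vectors holding zero-mean samples; these variable names are illustrative):

```
% Sample estimates of Equations (3)-(5), assuming zero-mean samples
P1_est  = mean(x1 .* conj(x1));
P2_est  = mean(x2 .* conj(x2));
rho_est = mean(x1 .* conj(x2)) / sqrt(P1_est * P2_est);
```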

Now, although it is not always easy to obtain partially correlated signals, there are many ways of producing **uncorrelated** signals (i.e. where the cross-correlation is zero). Therefore, let's start by considering two zero-mean, unit-power, Normally distributed complex random variables, s_{1}(t) and s_{2}(t), which are uncorrelated such that:

\[\text{Power of } s_1(t): \quad \mathcal{E}\{s_1(t)s_1^*(t)\}=1 \qquad\qquad (6)\\
\text{Power of } s_2(t): \quad \mathcal{E}\{s_2(t)s_2^*(t)\}=1 \qquad\qquad (7)\\
\text{Cross-correlation:} \quad \mathcal{E}\{s_1(t)s_2^*(t)\}=0 \qquad\qquad (8)\]

Or, in Matlab (GNU Octave) code, we simply write:
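A minimal sketch, consistent with the full example at the end of this post (for strictly complex-valued signals, `(randn(1,Nsamps) + 1j*randn(1,Nsamps))/sqrt(2)` keeps unit power):

```
% Zero-mean, unit-power, uncorrelated Gaussian random signals
s1 = randn(1, Nsamps);
s2 = randn(1, Nsamps);
```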

A useful property of Normally distributed random variables is that when we add them together, the result is also Normally distributed. Or, as more correctly described in this article [link]:

> Linear combinations of Normally distributed random variables are also Normally distributed.

Therefore, let's choose x_{1}(t) to be:

\[x_1(t)=\sqrt{P_1}\,s_1(t) \qquad\qquad (9)\]

since this satisfies Equation (3), because:

\[\mathcal{E}\{x_1(t)x_1^*(t)\} = \mathcal{E}\{\sqrt{P_1}s_1(t)\sqrt{P_1}s_1^*(t)\}\\
\quad = P_1\,\mathcal{E}\{s_1(t)s_1^*(t)\}\\
\quad = P_1 \qquad\qquad (10)\]

Then, we just have to design an x_{2}(t) which will satisfy Equations (4) and (5). Recalling (from the quote above) that linear combinations of Normally distributed random variables are also Normally distributed, let us consider constructing x_{2}(t) as follows:

\[x_2(t)=A_1s_1(t)+A_2s_2(t) \qquad\qquad (11)\]

Then, we need to find (complex) values for A_{1} and A_{2} which satisfy our remaining requirements (Equations 4 and 5):

\[
P_2 = \mathcal{E}\{x_2(t)x_2^*(t)\}\\
\quad =\mathcal{E}\{(A_1s_1(t)+A_2s_2(t))(A_1^*s_1^*(t)+A_2^*s_2^*(t))\}\\
\quad =|A_1|^2\mathcal{E}\{s_1(t)s_1^*(t)\} + 0 + 0 + |A_2|^2\mathcal{E}\{s_2(t)s_2^*(t)\}\\
\quad =|A_1|^2+|A_2|^2 \qquad\qquad (12)\\[30pt]
\rho = \frac{\mathcal{E}\{x_1(t)x_2^*(t)\}}{\sqrt{P_1P_2}}\\
\quad =\frac{\mathcal{E}\{(\sqrt{P_1}s_1(t))(A_1^*s_1^*(t)+A_2^*s_2^*(t))\}}{\sqrt{P_1P_2}}\\
\quad =\frac{\sqrt{P_1}A_1^*\mathcal{E}\{s_1(t)s_1^*(t)\}+0}{\sqrt{P_1P_2}}\\
\quad =\frac{A_1^*}{\sqrt{P_2}} \qquad\qquad (13)
\]

An initial observation is that these conditions are independent of the complex phase of A_{2}. Therefore, we can assume A_{2} to be purely real (so that A_{2} = |A_{2}|). Solving these simultaneous equations (Equations 12 and 13) for real-valued A_{2} is then straightforward: Equation 13 implies |A_{1}|^2 = |ρ|^2P_2, and substituting this into Equation 12 therefore yields:

\[
A_2 = \sqrt{P_2(1-|\rho|^2)} \qquad\qquad (14)
\]

Finally, we rearrange Equation 13 to get:

\[
A_1 = \rho^*\sqrt{P_2} \qquad\qquad (15)
\]

Putting these results together, it follows that we can construct our two partially correlated signals, x_{1}(t) and x_{2}(t), as:

\[
x_1(t) = \sqrt{P_1}\,s_1(t)\\
x_2(t) = \sqrt{P_2}\left(\rho^*s_1(t)+\sqrt{1-|\rho|^2}\,s_2(t)\right) \qquad\qquad (16)
\]
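As a quick sanity check (a step that is implicit in the derivation above), substituting Equation 16 into the numerator of Equation 5 and applying Equations 6-8 gives:

\[
\mathcal{E}\{x_1(t)x_2^*(t)\} = \sqrt{P_1P_2}\,\mathcal{E}\left\{s_1(t)\left(\rho s_1^*(t)+\sqrt{1-|\rho|^2}\,s_2^*(t)\right)\right\} = \sqrt{P_1P_2}\,\rho
\]

so dividing by √(P_1P_2), as in Equation 5, recovers exactly ρ; a similar expansion confirms that the power of x_2(t) is P_2.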

This gives us a neat result for 2 signals, but to generalise to N signals, we will need a more general approach. This will be described in Part II of this blog.

**Matlab Example**

To round off this part, here is a simple Matlab (GNU Octave) example:

```
close all; clear all; clc;

% Number of samples to generate
Nsamps = 100000;

% Set the required second order statistics
P1 = 2.3;
P2 = 0.75;
rho = 0.2 - 1j*0.3;

% 2x2 (theoretical) covariance matrix
Rxx = [P1, sqrt(P1*P2)*rho; sqrt(P1*P2)*conj(rho), P2]

% Generate zero-mean, unit-power, uncorrelated Gaussian random signals
s1 = randn(1, Nsamps);
s2 = randn(1, Nsamps);

% Create partially correlated signals
x = sqrt(P1)*s1;
y = sqrt(P2)*(conj(rho)*s1 + sqrt(1-abs(rho)^2)*s2);

% 2x2 (practical) sample covariance matrix
Rxx_samp = [x;y]*[x;y]' / Nsamps
```

The above code will produce an output similar to the following:

Code:

```
Rxx =

   2.3000 + 0.0000i   0.2627 - 0.3940i
   0.2627 + 0.3940i   0.7500 + 0.0000i

Rxx_samp =

   2.3125 + 0.0000i   0.2687 - 0.3962i
   0.2687 + 0.3962i   0.7525 - 0.0000i
```

Note that we never expect the sample covariance matrix to exactly match the theoretical covariance matrix. Sample statistics are themselves random variables, having their own probability distributions; their estimation errors only shrink (roughly in proportion to 1/√Nsamps) as the number of samples grows.
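To see this convergence in practice, here is a small sketch (it reuses `P1`, `P2`, `rho` and `Rxx` from the example above):

```
% Sketch: the sample covariance approaches Rxx as Nsamps grows.
% Reuses P1, P2, rho and Rxx from the example above.
for Nsamps = [1e3, 1e5, 1e7]
    s1 = randn(1, Nsamps);
    s2 = randn(1, Nsamps);
    x  = sqrt(P1)*s1;
    y  = sqrt(P2)*(conj(rho)*s1 + sqrt(1-abs(rho)^2)*s2);
    err = norm([x; y]*[x; y]'/Nsamps - Rxx, 'fro');
    fprintf('Nsamps = %8d : ||Rxx_samp - Rxx||_F = %.4f\n', Nsamps, err);
end
```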