
2d normal with matlab


amir1981

2d normal

Hi,
How can I generate a 2D normal distribution (not independent) with MATLAB? Is there an algorithm for this?
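A minimal sketch of the standard algorithm, in base MATLAB and with illustrative parameter values: factor the desired covariance matrix with chol and multiply independent standard normals by the factor (mvnrnd in the Statistics Toolbox, used in the reply below, wraps a similar construction).

% Draw n correlated bivariate normal samples via a Cholesky factor.
n     = 1000;
mu    = [0 0];                    % mean vector
Sigma = [1 .7; .7 1];             % desired covariance (correlation 0.7)
R     = chol(Sigma);              % upper-triangular factor, Sigma = R'*R
Z     = randn(n, 2);              % independent N(0,1) columns
X     = Z*R + repmat(mu, n, 1);   % rows of X are N(mu, Sigma)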
 

bivariate normal distribution matlab

Simulation of dependent random variables using copulas

MATLAB is an ideal tool for running simulations that incorporate random inputs or noise. The Statistics Toolbox provides functions to create sequences of random data according to many common univariate distributions. The Toolbox also includes a few functions to generate random data from multivariate distributions, such as the multivariate normal and multivariate t. However, there is no built-in way to generate data from multivariate distributions that have complicated relationships among the variables, or where the individual variables are from different distributions.
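For reference, two of the generators mentioned above, called with illustrative parameter values:

n = 1000;
x = lognrnd(0, 0.5, n, 1);            % univariate lognormal samples
Z = mvnrnd([0 0], [1 .7; .7 1], n);   % bivariate (multivariate) normal samples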

Recently, copulas have become popular in simulation models. Copulas are functions that describe dependencies among variables, and provide a way to create distributions to model correlated multivariate data. Using a copula, a data analyst can construct a multivariate distribution by specifying marginal univariate distributions, and choosing a particular copula to provide a correlation structure between variables. Bivariate distributions, as well as distributions in higher dimensions, are possible. In this demo, we discuss how to use copulas to generate dependent multivariate random data in MATLAB, using the Statistics Toolbox.
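As a preview of the copula approach described in the rest of the demo, the Statistics Toolbox function copularnd can generate dependent U(0,1) pairs from a Gaussian copula, which are then mapped through whatever inverse CDFs you like. The gamma and lognormal marginals below are purely illustrative choices, not taken from the demo:

n   = 1000;
rho = .7;
U   = copularnd('Gaussian', [1 rho; rho 1], n);  % dependent U(0,1) pairs
X1  = gaminv(U(:,1), 2, 1);                      % gamma marginal
X2  = logninv(U(:,2), 0, .5);                    % lognormal marginal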
Contents

* Dependence between simulation inputs
* A more general method for constructing dependent bivariate distributions
* Rank correlation coefficients
* Copulas
* t copulas
* Higher-order copulas
* Copulas and empirical marginal distributions

Dependence between simulation inputs

One of the design decisions for a Monte-Carlo simulation is a choice of probability distributions for the random inputs. Selecting a distribution for each individual variable is often straightforward, but deciding what dependencies should exist between the inputs may not be. Ideally, input data to a simulation should reflect what is known about dependence among the real quantities being modelled. However, there may be little or no information on which to base any dependence in the simulation, and in such cases, it is a good idea to experiment with different possibilities, in order to determine the model's sensitivity.

However, it can be difficult to actually generate random inputs with dependence when they have distributions that are not from a standard multivariate distribution. Further, some of the standard multivariate distributions can model only very limited types of dependence. It's always possible to make the inputs independent, and while that is a simple choice, it's not always sensible and can lead to the wrong conclusions.

For example, a Monte-Carlo simulation of financial risk might have random inputs that represent different sources of insurance losses. These inputs might be modeled as lognormal random variables. A reasonable question to ask is how dependence between these two inputs affects the results of the simulation. Indeed, it might be known from real data that the same random conditions affect both sources, and ignoring that in the simulation could lead to the wrong conclusions.

Simulation of independent lognormal random variables is trivial. The simplest way would be to use the LOGNRND function. Here, we'll use the MVNRND function to generate n pairs of independent normal random variables, and then exponentiate them. Notice that the covariance matrix used here is diagonal, i.e., independence between the columns of Z.

n = 1000;
sigma = .5;
SigmaInd = sigma.^2 .* [1 0; 0 1]

SigmaInd =

    0.2500         0
         0    0.2500


ZInd = mvnrnd([0 0], SigmaInd, n);
XInd = exp(ZInd);
plot(XInd(:,1), XInd(:,2), '.'); axis equal; axis([0 5 0 5]);

Dependent bivariate lognormal r.v.'s are also easy to generate, using a covariance matrix with non-zero off-diagonal terms.

rho = .7;
SigmaDep = sigma.^2 .* [1 rho; rho 1]

SigmaDep =

    0.2500    0.1750
    0.1750    0.2500


ZDep = mvnrnd([0 0], SigmaDep, n);
XDep = exp(ZDep);

A second scatter plot demonstrates the difference between these two bivariate distributions.

plot(XDep(:,1), XDep(:,2), '.'); axis equal; axis([0 5 0 5]);

It's clear that there is more of a tendency in the second dataset for large values of X1 to be associated with large values of X2, and similarly for small values. This dependence is determined by the correlation parameter, rho, of the underlying bivariate normal. The conclusions drawn from the simulation could well depend on whether X1 and X2 were generated with or without dependence.
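A quick check that is not part of the original demo: compare the sample correlations of the two datasets. The correlation of the logs should be near zero for XInd and near rho = 0.7 for XDep.

corrcoef(log(XInd))   % off-diagonal entries near 0
corrcoef(log(XDep))   % off-diagonal entries near 0.7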

The bivariate lognormal distribution is a simple solution in this case, and of course it easily generalizes to higher dimensions and to cases where the marginal distributions are _different_ lognormals. Other multivariate distributions also exist; for example, the multivariate t and the Dirichlet distributions are used to simulate dependent t and beta random variables, respectively. But the list of simple multivariate distributions is not long, and they apply only in cases where the marginals are all in the same family (or even the exact same distributions). This can be a real limitation in many situations.
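For example, the Statistics Toolbox can draw dependent multivariate t samples directly; the correlation matrix and degrees of freedom here are illustrative:

C  = [1 .7; .7 1];          % correlation matrix
nu = 5;                     % degrees of freedom
T  = mvtrnd(C, nu, 1000);   % 1000 dependent bivariate t samples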
A more general method for constructing dependent bivariate distributions

Although the above construction that creates a bivariate lognormal is simple, it serves to illustrate a method which is more generally applicable. First, we generate pairs of values from a bivariate normal distribution. There is statistical dependence between these two variables, and each has a normal marginal distribution. Next, a transformation (the exponential function) is applied separately to each variable, changing the marginal distributions into lognormals. The transformed variables still have a statistical dependence.

If a suitable transformation could be found, this method could be generalized to create dependent bivariate random vectors with other marginal distributions. In fact, a general method of constructing such a transformation does exist, although it is not as simple as exponentiation alone.

By definition, applying the normal CDF (denoted here by PHI) to a standard normal random variable results in a r.v. that is uniform on the interval [0, 1]. To see this, if Z has a standard normal distribution, then the CDF of U = PHI(Z) is

Pr{U <= u} = Pr{PHI(Z) <= u} = Pr{Z <= PHI^(-1)(u)} = u,

and that is the CDF of a U(0,1) r.v. Histograms of some simulated normal and transformed values demonstrate that fact.

n = 1000;
z = normrnd(0,1,n,1);
hist(z,-3.75:.5:3.75); xlim([-4 4]);
title('1000 Simulated N(0,1) Random Values');
xlabel('z'); ylabel('Frequency');
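The transformed values mentioned above can be produced and inspected as follows; the follow-on gamma mapping is an illustrative addition, not taken from the demo:

u = normcdf(z);                  % approximately U(0,1), by the argument above
hist(u, .05:.1:.95); xlim([0 1]);
title('Transformed to U(0,1)');
xlabel('u'); ylabel('Frequency');
x = gaminv(u, 2, 1);             % x now has a gamma(2,1) distribution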



For more information go to:
**broken link removed**
 
