
Difference between zero-forcing algo and MSE algo

Not open for further replies.



In designing an equalizer, what is the difference between the zero-forcing algorithm and the MSE algorithm?
To me there looks to be no big difference. According to the attached figure from the text, the difference is that the MSE algorithm multiplies both sides of the equation by XT, compared with the zero-forcing algorithm.

If X is a square matrix, we can find (XT)^-1, and then the two algorithms should be the same.
If X is not square, then MSE has the advantage of giving a longer C matrix with the same information.

Is that right? Any other differences?


Actually there is a big difference between them. Since they are equalizers, I guess we should talk in the frequency domain.

Now, if you send a signal from point A to point B, it gets amplified and attenuated during propagation:
X(f) : transmitted signal
Y(f) = H(f) X(f) : received signal at the terminals of the receiving antenna.
If you want to retrieve the original signal, it seems simple enough: just know the transfer function of the channel, H(f), and multiply by H(f)^-1... Wrong.

The received signal also picks up additive noise of some level (usually of constant power across all frequencies; that's what they call white noise). Assuming this kind of noise:
Y(f) = H(f) X(f) + N.

If you multiply by H(f)^-1, then N is multiplied too. Imagine that H(f) is zero or nearly zero at some frequency: the noise component there gets amplified, and this makes retrieval of the original signal X impossible.

To account for the noise, we have to incorporate it in our solution so that the noise is not amplified. Working in discrete time, zero forcing tries to null the effect of the channel, forcing the residual ISI to zero at those points (hence "zero forcing"), while MSE tries to minimize the mean squared error, which avoids blowing up at channel nulls.
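The noise-amplification point above can be checked numerically. This is a minimal sketch with a made-up four-bin channel (the values of H, the noise power, and the signal power are all hypothetical), comparing the per-bin noise gain of a ZF inverse against the standard MMSE weighting W = H*/(|H|^2 + N0/S):

```python
# Sketch: ZF vs MMSE frequency-domain equalizer gains on a hypothetical channel.
import numpy as np

H = np.array([1.0, 0.8, 0.05, 0.9])   # hypothetical channel gains at 4 frequency bins
noise_var = 0.01                       # white-noise power N0 (assumed)
signal_var = 1.0                       # signal power S (assumed)

# Zero-forcing: invert the channel outright -> W_zf = 1 / H
W_zf = 1.0 / H

# MMSE: W = H* / (|H|^2 + N0/S) -> gain stays bounded where H is near zero
W_mmse = np.conj(H) / (np.abs(H) ** 2 + noise_var / signal_var)

# Output noise power at each bin is |W|^2 * N0
noise_out_zf = np.abs(W_zf) ** 2 * noise_var
noise_out_mmse = np.abs(W_mmse) ** 2 * noise_var

print("ZF   noise gain per bin:", noise_out_zf / noise_var)
print("MMSE noise gain per bin:", noise_out_mmse / noise_var)
```

At the near-null bin (H = 0.05) the ZF noise gain is 400, while the MMSE weight stays at about 4, which is exactly the difference described above.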
That does not make much sense to me. Amplifying the noise is a common issue for any linear equalizer.

Why does it make a difference between the MSE and ZF algorithms?

Do you have the book Digital Communications by Proakis?
If you do, tell me, because we can discuss some issues relating to equalizers while referring to specific pages in that book.
If not, I'll give you a link to download the ebook.

The structures of the two algorithms are similar, but the multiplications in ZF are simpler (one of the terms comes from a small set of values).
Their convergence properties are different: a necessary condition for convergence of the ZF algorithm is that the eye must initially be open (peak ISI less than 1). The MSE algorithm does not have this type of restriction.


azaz104 said:
Do you have the book Digital Communications by Proakis?
If you do, tell me, because we can discuss some issues relating to equalizers while referring to specific pages in that book.
If not, I'll give you a link to download the ebook.

Yes, I have bought this book. Please go ahead.

To appreciate the difference between the ZF and MSE algorithms, you must understand the criterion each uses to design its equalizer.

Both are linear equalizers, so mathematically the equalizer w(n) is convolved with the channel response h(n) to get the combined response c(n) = w(n) * h(n), where the symbol * denotes convolution. Note that we work in the time domain.

The ZF criterion tries to find w(n) such that c(n) has only one non-zero sample. This is the same as making c(n) equal to a delayed impulse delta(n-k). Note that this criterion does not take the noise z(n) in the communication system into account.

In the frequency domain, this is the same as finding W(z) as the inverse of H(z), since the z-transform of an impulse is 1 (plus a phase term from the delay, which we can ignore for simplicity). This inversion gives large gain in regions where H(z) is small, so the equalizer tends to amplify the noise in those regions.

The MMSE criterion tries to find w(n) such that E{ | w(n) * [h(n) * s(n) + z(n)] - s(n-k) |^2 } is minimized, where z(n) is the noise in the communication system. Note that this criterion does take the noise into account in the design of the equalizer. In the literature, the MMSE criterion leads to the so-called Wiener receiver/filter.

As far as I know, in matrix-vector notation the ZF and MMSE equalizers have similar forms (sorry, I'm too lazy to look this up in my textbook), and indeed for high SNR values (low noise, so the influence of noise can be ignored) the ZF and MMSE equalizers have the same performance.
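The matrix-vector forms mentioned above can be sketched directly. Everything numeric here is made up for illustration (a 3-tap channel, 5 equalizer taps, unit-power symbols): the ZF taps come from the least-squares fit of the convolution matrix to a delayed impulse, while the MMSE taps add the noise variance as a regularizer on the diagonal:

```python
# Sketch: ZF (least-squares) vs MMSE linear equalizer taps in matrix form.
import numpy as np

h = np.array([1.0, 0.5, 0.2])   # hypothetical channel impulse response h(n)
L = 5                            # equalizer length (number of taps of w(n))
N = len(h) + L - 1               # length of c(n) = w(n) * h(n)
noise_var = 0.05                 # noise power E{|z(n)|^2}, unit-power symbols assumed

# Convolution matrix: H @ w == np.convolve(h, w)
H = np.zeros((N, L))
for i in range(L):
    H[i:i + len(h), i] = h

k = N // 2                       # desired delay of the impulse delta(n-k)
d = np.zeros(N)
d[k] = 1.0                       # target combined response: a delayed impulse

# ZF: minimize ||H w - d||^2, ignoring the noise
w_zf = np.linalg.pinv(H) @ d

# MMSE: (H^T H + sigma^2 I)^{-1} H^T d -- the noise variance regularizes the inverse
w_mmse = np.linalg.solve(H.T @ H + noise_var * np.eye(L), H.T @ d)

print("combined response c(n), ZF:  ", np.round(H @ w_zf, 3))
print("combined response c(n), MMSE:", np.round(H @ w_mmse, 3))
```

As expected, the ZF solution drives c(n) as close to delta(n-k) as the tap budget allows, while the MMSE solution uses slightly smaller taps (less noise gain) at the cost of a little residual ISI.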
You have said it all, but I want to add an extra comment. The ZF criterion (which can be applied if Lucky's condition is satisfied, i.e., the input peak distortion must be < 1) does not take the channel noise into account, so where the channel response goes close to zero it causes the phenomenon called noise enhancement (output noise >> input noise). The advantage of ZF is its relative simplicity compared with the MMSE criterion.

Remember that MMSE with infinite taps still leaves a residue of ISI (while ZF with infinite taps achieves zero ISI), because it works to minimize ISI and noise together.

Note that in the absence of noise, the MMSE criterion gives the same result as ZF: to use its taps "at best", MMSE works like ZF.

One last note (I promise!): in practice, to solve the matrix equation that governs the equalizer, a recursive algorithm is used, which obtains the taps at the current time from the taps at the prior time minus a quantity that is a function of the gradient with respect to the taps. This method would require calculating the expectation of that gradient, which in practice is impossible because it requires knowing the channel response (think: if you knew the channel response, all your problems would be solved and there would be no need for an equalizer!). So the expectation is dropped and the algorithm works directly with the instantaneous gradient: this is called the stochastic gradient algorithm.
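That stochastic-gradient recursion is the familiar LMS update. A minimal sketch, with an assumed 2-tap channel, step size, training length, and decision delay all chosen just for illustration: the taps are updated from the instantaneous error e(n) = s(n-k) - w^T x(n) instead of its expectation:

```python
# Sketch: stochastic-gradient (LMS) adaptive equalizer on a hypothetical channel.
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.4])         # hypothetical channel impulse response
L, mu, k = 7, 0.02, 3            # taps, step size, decision delay (all assumed)
w = np.zeros(L)                  # equalizer taps, initialized to zero

s = rng.choice([-1.0, 1.0], size=5000)                          # training symbols
y = np.convolve(s, h)[:len(s)] + 0.05 * rng.standard_normal(len(s))

errs = []
for n in range(L, len(s)):
    x = y[n - L + 1:n + 1][::-1]   # most recent L received samples, newest first
    e = s[n - k] - w @ x           # instantaneous error vs the delayed training symbol
    w = w + mu * e * x             # stochastic-gradient tap update
    errs.append(e * e)

print("final taps:", np.round(w, 3))
print("mean squared error over last 500 updates:", round(float(np.mean(errs[-500:])), 4))
```

Because the expectation is dropped, the taps jitter around the optimum, but the squared error still falls from its initial value toward the noise floor, which is the behavior the post describes.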

Since you have the book by Proakis, can you tell me if you've read the pages between 616 (linear equalization, Section 10.2) and page 626?

I think the most comprehensive book on the topic is Adaptive Filters by Haykin.
It covers all the aspects of these filtering techniques and explains the differences. It also explains the constraints on MSE filter design.
The most beautiful part is the discussion of LMS filters.

I read the book by Haykin, but you need to read three or four chapters before you get the idea of adaptive filters!
The book by Monson Hayes is good too, although shorter than the one by Haykin.
There is another book by Ali Sayed, but it's way more advanced.

Look at Kay's "Fundamentals of Statistical Signal Processing: Estimation Theory" and Porat's "Digital Processing of Random Signals: Theory and Methods".
