
Neural Network BackPropagation Training question

sanjay

Hi all,

While going through some of the examples provided with the neural network toolbox in MATLAB, I came across an application that deals with character recognition. For those interested, the script is named appcr1. (To view the code, type edit appcr1.)

It shows that a network trained with noise performs better than a network trained without noise. The question I have, or rather my CONFUSION, concerns the network they train with noise.

They first train it with noise, and then train it again with pure "CLEAN" signals, so that the network can identify clean signals as well. Surely, from my beginner's knowledge, if we now simulate the network with both clean and noisy signals, it should only give better performance on the clean signals, not so? (My reason: since it was last trained with CLEAN signals, it has learned and adjusted the weights accordingly.)

If this is so, then why does it give better performance when simulated with noise?
Just a basic beginner question needing clarification.

Regards
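
(For reference, here is a minimal sketch of the two-stage training described above, assuming the toolbox functions prprob, feedforwardnet, and train; the shipping appcr1 script may use older equivalents such as newff.)

Code:
[X, T] = prprob;                     % 35x26 letter bitmaps, 26x26 one-hot targets

net = feedforwardnet(25);            % one hidden layer of 25 neurons
net.trainParam.showWindow = false;   % suppress the training GUI

% Stage 1: train on noisy copies of the alphabet.
numCopies = 10;
Xn = repmat(X, 1, numCopies) + 0.2*randn(35, 26*numCopies);
Tn = repmat(T, 1, numCopies);
net = train(net, Xn, Tn);

% Stage 2: retrain on the clean letters so they are still recognized.
net = train(net, X, T);

% Try a freshly noised 'A'; output index 1 should score highest.
y = net(X(:,1) + 0.2*randn(35,1));
[~, guess] = max(y)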
 

I saw a similar application in the book "Digital Image Processing" by Gonzalez, concerning object recognition. It says it is preferred to train the NN with clean patterns first and then with noisy patterns; this way it can recognize both.

So the MATLAB application you are talking about may be using batch training, in which case the order will not affect the training.
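
(A quick sketch of that batch idea, with the same assumed toolbox functions as above: clean and noisy examples go into one combined training matrix, so presentation order cannot matter.)

Code:
[X, T] = prprob;                            % assumed toolbox data, as above
Xn = repmat(X, 1, 10) + 0.2*randn(35, 260); % 10 noisy copies of each letter
Tn = repmat(T, 1, 10);

net = feedforwardnet(25);
net.trainParam.showWindow = false;
net = train(net, [X Xn], [T Tn]);           % one combined batch: every epoch
                                            % uses all samples at once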
 

In my opinion, the noise training is like dithering in a feedback system.
There is usually some "dead-zone" or "offset" in a feedback system; if you add some noise in the feedback loop, you can usually get a small response, and the "dead-zone" becomes smaller. In other words, there are cases where the weights are not adjusted even when the number of patterns is large, and this is similar to the "dead-zone".
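
(A toy numeric illustration of that dithering effect; the dead-zone model here is an assumption for demonstration, not part of the thread.)

Code:
deadzone = @(v) v .* (abs(v) > 0.5);          % zero output for |v| < 0.5
x = 0.3;                                      % signal stuck inside the dead-zone
yClean    = deadzone(x)                       % exactly 0: the signal is lost
yDithered = mean(deadzone(x + randn(1,1e5)))  % ~0.29: noise recovers a response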
 

For neural networks, the idea is to reduce the estimation error by iteratively training on a certain number of examples. So when you first train with noisy sets, you can view this as a sort of coarse tuning of the neural nets. Then, with noiseless sets, it further fine-tunes the neural nets.

The reverse order should work as well, and so should combining both datasets.
 

This is my own opinion. Training with noisy sets gives a fuzzy classification mapping, but this fuzzy mapping has to be centred on the correct noiseless mapping. Training with noiseless sets then allows the fuzzy mapping to shift in the direction of the correct noiseless mapping.
 
