Welcome to EDAboard.com

Help: Neural Network Multilayer feedforward networks

Status: Not open for further replies.

sanjay
Full Member level 1
Joined: Jul 4, 2003
Hi all,

I just wanted to clear up a doubt. Having studied neural networks and the different architectures, I fail to understand "HOW CAN multilayer feedforward networks CLASSIFY?"

Surely, because the back-propagation learning rule and the MSE are being used, the basic idea is to drive the output values close to the target values. Pertaining to that, if we have, say, three different target sets for three different outputs, and I feed in an input signal, then the network will try to get the output to resemble the corresponding target, wouldn't it? (As per the theory I read.)

But then, if this is so, HOW CAN IT BE USED TO CLASSIFY?
Can somebody help me out? Maybe I am missing some key tips or something.

Regards
 

Assume a simple SLP (single-layer perceptron). The SLP essentially uses thresholding neurons to model a linear boundary separating regions of "true" and "false", i.e. the boundary is a straight line, plane or hyperplane in multidimensional space, and therefore it isn't capable of defining regions of complex shapes. This can be rectified using MLPs. You can implement AND/OR logic in the 2nd layer to draw boundaries around whatever region you want. For non-convex regions, a 3rd layer may be required.

For example, if you want to define an enclosed square as a region in a 2-dimensional space, you use 4 threshold neurons to define the borders of the square, and then a 2nd-layer threshold neuron, with its threshold at 3.5, to implement a simple AND function and get your output. Other regions are defined similarly using further neurons in the 2nd layer.
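As a rough sketch of that square example (the unit square, the weight matrix, and the name inside_square are my own illustrative choices, not from any toolbox):

```python
import numpy as np

# First layer: one threshold neuron per border of the unit square [0,1]^2.
# Neuron i fires (outputs 1) when W1[i] . p >= t1[i].
W1 = np.array([[ 1.0,  0.0],   # x >= 0  (inside the left border)
               [-1.0,  0.0],   # x <= 1  (inside the right border)
               [ 0.0,  1.0],   # y >= 0  (inside the bottom border)
               [ 0.0, -1.0]])  # y <= 1  (inside the top border)
t1 = np.array([0.0, -1.0, 0.0, -1.0])

def inside_square(p):
    h = (W1 @ np.asarray(p, dtype=float) >= t1).astype(float)  # 4 threshold outputs
    # Second layer: a single neuron with all weights 1 and threshold 3.5
    # acts as an AND gate over the four border neurons.
    return float(h.sum() >= 3.5)
```

A point like (0.5, 0.5) makes all four border neurons fire, so the sum is 4 and the AND neuron outputs 1; any point outside the square misses at least one border condition and the sum stays at 3 or below.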

So, if you have defined regions for the entire multidimensional space, and the MLP is correctly tuned, when you feed it a vector, the output neuron corresponding to the region the vector belongs to will be "1", with all other output neurons giving "0". There you have your classifier.
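A minimal sketch of that one-hot readout, using a 1-D toy space split into three regions (the region boundaries and the second-layer wiring here are my own illustrative assumptions):

```python
import numpy as np

# Two first-layer threshold neurons carve the line into three regions:
# x < 0, 0 <= x < 1, and x >= 1. The output layer combines them so that
# exactly one output neuron fires for any input — a one-hot classifier.
def classify(x):
    h = np.array([float(x >= 0.0), float(x >= 1.0)])
    return np.array([
        float(h[0] + h[1] <= 0.5),   # neither fired -> region 1 (x < 0)
        float(h[0] - h[1] >= 0.5),   # only h1 fired -> region 2 (0 <= x < 1)
        float(h[0] + h[1] >= 1.5),   # both fired    -> region 3 (x >= 1)
    ])
```

Each output neuron is itself just a threshold over a weighted sum of the hidden units, so the whole thing is still an MLP of thresholding neurons.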

The learning rules do not classify. They simply provide a means to adjust these boundaries based on a set of input-output data pairs, i.e. they are used for "training" the system.
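To make "the learning rule only adjusts the boundary" concrete, here is a minimal sketch of the classic perceptron update on the linearly separable AND function (the toy data, learning rate of 1, and pass count are my own choices):

```python
import numpy as np

# Toy data: the AND function, which is linearly separable.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0, 1.0])

w, b = np.zeros(2), 0.0
for _ in range(20):                   # a few passes are enough on this data
    for xi, ti in zip(X, y):
        out = float(w @ xi + b >= 0.0)  # the thresholding neuron classifies
        w += (ti - out) * xi            # the learning rule only moves the
        b += (ti - out)                 # boundary (w, b) toward the targets
```

The neuron's threshold test does the classifying throughout; the update rule just nudges the line w·x + b = 0 until every training pair lands on the correct side.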
 

Checkmate's absolutely right, but I think that's rather complicated. It's easier for you in this way, sanjay:

_ You can consider MLPs as approximation tools. With a proper structure, they can theoretically approximate any function.

_ Your classifier can be considered as a function: some inputs result in one value, another group of inputs results in another value, ...

In such a manner, you'll get classifiers with MLPs.
Hope you enjoy.
 

Hi checkmate and tienpfiev

I did understand what you guys were saying; it is similar to what I saw while going through the MATLAB NN toolbox.

The classifier part is, as you said, that the output neuron belonging to a particular class gives a value of 1 and the others give 0.

As per tienpfiev,
quoted "_ You can consider MLPs as approximation tools. With a proper structure, they can theoretically approximate any function."

Does that mean that, in order to get classification with an MLP, you have to first do approximation (well, obvious from how they work) and then use a classifier (maybe the LVQ method) to get some classification?

Sorry, just in doubt, so I wanted to clear it out.

best regards
 

This example may help, I hope, to make it clearer:

I want to classify 5 groups, for example. I will use numbers; they can represent whatever I want to classify.

Group 1 : 1, 2, 3, 4, 5
Group 2 : 6, 7, 8, 9, 10, 11
Group 3 : 15, 17, 19, 20
Group 4 : 25, 30, 31, 34
Group 5 : 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45

I then assign an output to each group:

Group 1 : 1
Group 2 : 3
Group 3 : -5
Group 4 : 7
Group 5 : 11

Using this data set to train an MLP, we get an approximate function y = f(x) that looks like a multi-step function. Values not in the data set will be interpolated as the MLP learns the data set. That's what we get: an approximation.

Now, when we use it as a classifier, whenever you get "-5" at the output, you know that your input belongs to group 3, for example.
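That decoding step can be sketched as follows; the dictionary just maps each of the five target values above back to its group number, and the function name decode is my own (a real network output may be slightly off the exact target, so we take the nearest one):

```python
# Map a (possibly noisy) network output back to the nearest of the five
# target values from the example, and hence to its group number.
targets = {1.0: 1, 3.0: 2, -5.0: 3, 7.0: 4, 11.0: 5}

def decode(y):
    nearest = min(targets, key=lambda t: abs(t - y))
    return targets[nearest]
```

So an output of, say, -4.6 is closest to -5 and decodes to group 3.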

With higher-dimensional data it's harder to visualize, but it works something like that.

Is it ok now, sanjay ?
 

Hi tienpfiev,

Yup, I think I am gonna give this example a try in MATLAB, just to clear this concept up for myself.
:)

Shall get back soon with the questions, IF ANY hehe

Best regards
 
