
BACK PROPAGATION FOR NEURAL NETWORK MAKING DECISION

Status
Not open for further replies.

aqua_25
Newbie level 1 · Joined Apr 12, 2009 · Messages: 1
Hello everybody! I am studying at college, and this is the first time I have worked with neural networks. I have to do a (simple) project on decision making. In general, I think I am doing well. My project deals with a bank deciding whether or not to grant loans to customers. I have 4 inputs coded as low (2) and high (3) for solvency, income, etc. For example, a customer might have high income but low solvency. Anyway, this is my training set. Since I have to use a step function, I found hardlim in MATLAB, but I have a problem using it. My code is:
% Training set: 4 binary inputs coded 2 = low, 3 = high (all 16 combinations)
I = [2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3; 2 2 2 2 3 3 3 3 2 2 2 2 3 3 3 3; 2 2 3 3 2 2 3 3 2 2 3 3 2 2 3 3; 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3];
O = [0 0 0 0 0 0 0 1 0 0 0 1 0 1 1 1];
net = newff([2 3; 2 3; 2 3; 2 3], [4, 1], {'hardlim', 'hardlim'}, 'traingd');
net.trainParam.epochs = 2000;   % was misspelled "traiParam"
net.trainParam.goal = 0.001;
net.trainParam.show = 200;
net.trainParam.lr = 0.05;
[net, tr] = train(net, I, O);
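[Editor's note, not part of the original post: one likely source of trouble is that traingd is gradient descent, while hardlim's derivative is zero almost everywhere, so gradients cannot flow through it. The target vector O above encodes "approve when at least three of the four inputs are high," which is linearly separable, so a single step unit can learn it with the classic perceptron rule instead. A minimal self-contained Python sketch under that assumption (variable names are my own):]

```python
# Sketch (assumption: the same 16 training cases as I and O above).
# The targets reproduce O: output 1 exactly when at least three of the
# four inputs are "high" (coded 3), i.e. when sum(x) >= 11.

def step(s):
    """hardlim-style step activation."""
    return 1 if s >= 0 else 0

# All 16 input combinations, coded 2 = low, 3 = high (same column order as I).
inputs = [[2 + ((i >> b) & 1) for b in (3, 2, 1, 0)] for i in range(16)]
targets = [1 if sum(x) >= 11 else 0 for x in inputs]  # reproduces O

w = [0.0, 0.0, 0.0, 0.0]  # weights
b = 0.0                   # bias
lr = 0.05                 # learning rate, as in the post

for epoch in range(20000):            # perceptron convergence is guaranteed
    mistakes = 0                      # for linearly separable data
    for x, t in zip(inputs, targets):
        y = step(sum(wj * xj for wj, xj in zip(w, x)) + b)
        if y != t:
            mistakes += 1
            # Perceptron rule: nudge the hyperplane toward the mistake.
            for j in range(4):
                w[j] += lr * (t - y) * x[j]
            b += lr * (t - y)
    if mistakes == 0:
        break                         # every training case classified correctly

predictions = [step(sum(wj * xj for wj, xj in zip(w, x)) + b) for x in inputs]
```

Unlike gradient descent, the perceptron rule needs only the sign of the error, so it works with a hard threshold; alternatively, keeping newff/traingd but switching the transfer functions to a differentiable one such as 'logsig' is the usual toolbox approach.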
I am really stuck, and if anyone could help me I would really appreciate it. You can answer me here or at my e-mail jpapa_jpapa@yahoo.com. Thank you all, even just for reading my problem.
 
