meto
tansig
Hi, I'm implementing an NN in Matlab to learn the XOR problem:
in1  in2  out
 -1   -1   -1
  1   -1    1
 -1    1    1
  1    1   -1
It's a multi-layer feedforward network, and the neurons are either unipolar ('logsig') or bipolar ('tansig').
It learns successfully with logsig neurons, but with tansig the root-mean-square error converges to 0.5 and levels off there.
As far as I can tell all my calculations are correct (!). I've checked them by hand with a calculator and built a crude model in Excel, and everything matches up.
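For comparison, here is a minimal sketch of standard batch backprop for a tansig network on this exact bipolar XOR data (written in Python/NumPy rather than Matlab just so it's easy to run; the network size, learning rate, and weight-init range are arbitrary choices for the sketch). The detail worth double-checking is the activation derivative: for tansig it is 1 - y^2, whereas for logsig it is y(1 - y) — mixing those up is a common way to get stuck at a constant error:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR with bipolar (+/-1) inputs and targets, as in the post
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
T = np.array([[-1], [1], [1], [-1]], dtype=float)

def tansig(x):
    return np.tanh(x)          # bipolar sigmoid, output in (-1, 1)

def dtansig(y):
    # derivative of tanh w.r.t. its net input, expressed via the output y
    # (the logsig analogue would be y * (1 - y) -- NOT interchangeable)
    return 1.0 - y ** 2

nh = 4                                      # hidden units (arbitrary for the sketch)
W1 = rng.uniform(-0.5, 0.5, (3, nh))        # 2 inputs + bias -> hidden
W2 = rng.uniform(-0.5, 0.5, (nh + 1, 1))    # hidden + bias -> 1 output
lr = 0.1

Xb = np.hstack([X, np.ones((4, 1))])        # append bias column once
for epoch in range(5000):
    H = tansig(Xb @ W1)                     # forward pass, hidden layer
    Hb = np.hstack([H, np.ones((4, 1))])
    Y = tansig(Hb @ W2)                     # forward pass, output layer
    dY = (T - Y) * dtansig(Y)               # output delta uses 1 - y^2
    dH = (dY @ W2[:nh].T) * dtansig(H)      # hidden delta likewise
    W2 += lr * (Hb.T @ dY)                  # batch gradient ascent on -error
    W1 += lr * (Xb.T @ dH)

rmse = np.sqrt(np.mean((T - Y) ** 2))
print(rmse)                                 # should end up near 0, not 0.5
```

With the correct derivative the RMSE drops toward zero; substituting the logsig derivative into the tansig deltas is one plausible way to reproduce the plateau described above.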
I cannot for the life of me figure out why it does this with tansig when it trains successfully with logsig.
Can anyone offer any insights?
thanks