
26/12/2016

trainlm (Neural Network Toolbox)

trainlm
Levenberg-Marquardt backpropagation

Syntax
[net,TR] = trainlm(net,Pd,Tl,Ai,Q,TS,VV,TV)
info = trainlm(code)

Description
trainlm is a network training function that updates weight and bias values according to Levenberg-Marquardt optimization.
trainlm(net,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
net - Neural network.
Pd - Delayed input vectors.
Tl - Layer target vectors.
Ai - Initial input delay conditions.
Q - Batch size.
TS - Time steps.
VV - Either empty matrix [] or structure of validation vectors.
TV - Either empty matrix [] or structure of test vectors.

and returns,
net - Trained network.
TR - Training record of various values over each epoch:
TR.epoch - Epoch number.
TR.perf - Training performance.
TR.vperf - Validation performance.
TR.tperf - Test performance.
TR.mu - Adaptive mu value.

Training occurs according to trainlm's training parameters, shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.goal 0 Performance goal
net.trainParam.max_fail 5 Maximum validation failures
http://wwwrohan.sdsu.edu/doc/matlab/toolbox/nnet/trainlm.html


net.trainParam.mem_reduc 1 Factor to use for memory/speed tradeoff
net.trainParam.min_grad 1e-10 Minimum performance gradient
net.trainParam.mu 0.001 Initial mu
net.trainParam.mu_dec 0.1 mu decrease factor
net.trainParam.mu_inc 10 mu increase factor
net.trainParam.mu_max 1e10 Maximum mu
net.trainParam.show 25 Epochs between showing progress
net.trainParam.time inf Maximum time to train in seconds
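As a sketch of how these parameters are overridden in practice (the particular values below are illustrative, not recommendations; net is assumed to be an existing network object):

```matlab
% Assumed: net is an existing network whose trainFcn is 'trainlm'.
net.trainParam.epochs   = 300;    % allow up to 300 epochs
net.trainParam.goal     = 1e-5;   % stop once performance reaches 1e-5
net.trainParam.max_fail = 10;     % tolerate more validation failures
net.trainParam.mu       = 0.005;  % start with a larger mu
net.trainParam.show     = 50;     % report progress every 50 epochs
```

Any parameter left unset keeps the default listed above.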

Dimensions for these variables are:
Pd - No x Ni x TS cell array, each element Pd{i,j,ts} is a Dij x Q matrix.
Tl - Nl x TS cell array, each element Tl{i,ts} is a Vi x Q matrix.
Ai - Nl x LD cell array, each element Ai{i,k} is an Si x Q matrix.

where
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)

If VV or TV is not [], it must be a structure of vectors,
VV.PD, TV.PD - Validation/test delayed inputs.
VV.Tl, TV.Tl - Validation/test layer targets.
VV.Ai, TV.Ai - Validation/test initial input conditions.
VV.Q, TV.Q - Validation/test batch size.
VV.TS, TV.TS - Validation/test time steps.

Validation vectors are used to stop training early if the network performance on the validation vectors fails to improve or remains the same for max_fail epochs in a row. Test vectors are used as a further check that the network is generalizing well, but do not have any effect on training.
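A validation structure can be assembled from the fields above; in this sketch valPd, valTl, valAi, valQ, and valTS are placeholders for data already arranged in the delayed/cell-array form described under the dimensions list:

```matlab
% Hypothetical validation data, already in the Pd/Tl/Ai form above.
VV.PD = valPd;   % delayed validation inputs
VV.Tl = valTl;   % validation layer targets
VV.Ai = valAi;   % initial input delay conditions
VV.Q  = valQ;    % validation batch size
VV.TS = valTS;   % validation time steps
TV = [];         % no test vectors in this sketch
```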
trainlm(code) returns useful information for each code string:

'pnames' - Names of training parameters.
'pdefaults' - Default training parameters.

Network Use
You can create a standard network that uses trainlm with newff, newcf, or newelm.
To prepare a custom network to be trained with trainlm:
1. Set net.trainFcn to 'trainlm'. This will set net.trainParam to trainlm's default parameters.
2. Set net.trainParam properties to desired values.
In either case, calling train with the resulting network will train the network with trainlm.
See newff, newcf, and newelm for examples.
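A minimal end-to-end sketch of the standard-network route, assuming a small curve-fitting data set (the variables p and t and the layer sizes are illustrative, not part of this page):

```matlab
% Toy data: fit a noisy sine on [0, 2*pi].
p = 0:0.1:2*pi;                     % 1xN input vector
t = sin(p) + 0.05*randn(size(p));   % 1xN target vector

% newff builds a feed-forward network; 'trainlm' is passed
% explicitly as the backprop training function.
net = newff(minmax(p), [10 1], {'tansig','purelin'}, 'trainlm');
net.trainParam.epochs = 100;
net.trainParam.goal   = 1e-4;

% train dispatches to trainlm via net.trainFcn.
[net, TR] = train(net, p, t);
y = sim(net, p);                    % network output after training
```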

Algorithm
trainlm can train any network as long as its weight, net input, and transfer functions have derivative functions.
Backpropagation is used to calculate the Jacobian jX of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to Levenberg-Marquardt,
jj = jX' * jX
je = jX' * E
dX = -(jj + I*mu) \ je

where E is all errors and I is the identity matrix.
The adaptive value mu is increased by mu_inc until the change above results in a reduced performance value. The change is then made to the network and mu is decreased by mu_dec.
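The update and the mu adaptation can be sketched outside the toolbox on a toy least-squares problem; the matrices A and b and the cost here are assumptions for illustration, not trainlm internals:

```matlab
% Toy problem: minimize perf = E'*E for E = A*x - b (linear fit).
A  = [1 1; 1 2; 1 3];  b = [2; 3; 5];
x  = zeros(2,1);
mu = 0.001;  mu_inc = 10;  mu_dec = 0.1;

E    = A*x - b;      % current errors
perf = E'*E;         % current performance
jX   = A;            % Jacobian of E w.r.t. x (exact in the linear case)
jj   = jX'*jX;       % Gauss-Newton approximation of the Hessian
je   = jX'*E;        % gradient of perf (up to a factor of 2)

% Increase mu until the trial step reduces the performance,
% then accept the step and decrease mu.
while true
    dX   = -(jj + mu*eye(2)) \ je;
    Enew = A*(x + dX) - b;
    if Enew'*Enew < perf
        x  = x + dX;          % accept the change
        mu = mu * mu_dec;     % relax toward Gauss-Newton
        break
    end
    mu = mu * mu_inc;         % dampen toward gradient descent
end
```

Small mu makes the step close to a Gauss-Newton step; large mu makes it a short gradient-descent step, which is the tradeoff the adaptation exploits.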
The parameter mem_reduc indicates how to use memory and speed to calculate the Jacobian jX. If mem_reduc is 1, then trainlm runs the fastest, but can require a lot of memory. Increasing mem_reduc to 2 cuts some of the memory required by a factor of two, but slows trainlm somewhat. Higher values continue to decrease the amount of memory needed and increase training times.
Training stops when any of these conditions occur:
The maximum number of epochs (repetitions) is reached.
The maximum amount of time has been exceeded.
Performance has been minimized to the goal.
The performance gradient falls below min_grad.
mu exceeds mu_max.
Validation performance has increased more than max_fail times since the last time it decreased (when using validation).

See Also
newff, newcf, traingd, traingdm, traingda, traingdx
