Introduction of Current Deep Learning Software Packages

Three popular ones:
1. Caffe
2. Theano
3. TensorFlow

1. Caffe

Caffe (Convolutional Architecture for Fast Feature Embedding)
● Created by Yangqing Jia (贾扬清), UC Berkeley.
● Written in C++, with Python and MATLAB interfaces.
● Github page:
● Installation (CUDA + Caffe): Ouxinyu.github.io/Blogs/2014723001.html

Anatomy of Caffe
● Blob: stores data and derivatives
● Layer: transforms bottom blobs to top blobs
● Net: many layers; computes gradients via forward/backward

Blob
A Blob is a wrapper over the actual data being processed and passed along by Caffe; under the hood it also provides synchronization capability between the CPU and the GPU. The conventional blob dimensions for batches of image data are (number N) x (channel K) x (height H) x (width W). For a convolution layer with 96 filters of 11 x 11 spatial dimension and 3 input channels, the parameter blob is 96 x 3 x 11 x 11. For an inner-product (fully-connected) layer with 1000 output channels and 1024 input channels, the parameter blob is 1000 x 1024.

Layer
The layer is the essence of a model and the fundamental unit of computation. Layers convolve filters, pool, take inner products, apply nonlinearities such as rectified-linear and sigmoid and other element-wise transformations, normalize, load data, and compute losses such as softmax and hinge.
Case: Convolution Layer

Net
The net jointly defines a function and its gradient by composition and auto-differentiation. The composition of every layer's output computes the function for a given task, and the composition of every layer's backward pass computes the gradient from the loss to learn the task. For example, a logistic-regression net:

    name: "LogReg"
    layer {
      name: "mnist"
      type: "Data"
      top: "data"
      top: "label"
      data_param {
        source: "input_leveldb"
        batch_size: 64
      }
    }
    layer {
      name: "ip"
      type: "InnerProduct"
      bottom: "data"
      top: "ip"
      inner_product_param {
        num_output: 2
      }
    }
    layer {
      name: "loss"
      type: "SoftmaxWithLoss"
      bottom: "ip"
      bottom: "label"
      top: "loss"
    }

How to use Caffe? Just 4 steps!
1. Convert data (run a script)
2. Define the net (edit a prototxt)
3. Define the solver (edit a prototxt)
4. Train, optionally with pretrained weights (run a script)
Take CIFAR-10 image classification as an example.

● A Data layer reading from LMDB is the easiest; create the LMDB with convert_imageset.
● Alternatively, write a text file where each line is "[path/to/image.jpeg] [label]" and read it with an ImageData layer.
● Or create an HDF5 file yourself using h5py and read it with an HDF5Data layer.

Step 1: Convert Data for Caffe
Convert the data on CIFAR-10.

Step 2: Define Net (cifar10_quick_train_test.prototxt)
The slide annotates the prototxt with: layer name; blob names; learning-rate multiplier of the weights; learning-rate multiplier of the bias; input image number per iteration (batch size); training image data; data type; number of output classes; accuracy output during testing; loss output during training. If you finetune a pre-trained model, you can set lr_mult = 0 to freeze a layer.
Visualize the defined network.

Step 3: Define Solver (cifar10_quick_solver.prototxt)

    # reduce the learning rate after 8 epochs (4000 iters) by a factor of 10
    # The train/test net protocol buffer definition
    net: "examples/cifar10/cifar10_quick_train_test.prototxt"
    # test_iter specifies how many forward passes the test should carry out.
    # Here we have test batch size 100 and 100 test iterations,
    # covering the full 10,000 testing images.
    test_iter: 100
    # Carry out testing every 500 training iterations.
    test_interval: 500
    # The base learning rate, momentum and the weight decay of the network.
    base_lr: 0.001
    momentum: 0.9
    weight_decay: 0.004
    # The learning rate policy
    lr_policy: "fixed"
    # Display every 100 iterations
    display: 100
    # The maximum number of iterations
    max_iter: 4000
    # snapshot intermediate results
    snapshot: 4000
    snapshot_prefix: "examples/cifar10/cifar10_quick"
    # solver mode: CPU or GPU
    solver_mode: GPU

The key parameters here are the net file to train (net), the learning-rate settings, and the solver mode.

Step 4: Train
Write a shell script (train_quick.sh), then enjoy a cup of caffe.

Model Zoo (Pre-trained Models + Finetune)
We can finetune these models or do feature extraction based on them.

Some tricks/skills for training Caffe [1]:
1. Data augmentation to enlarge the training samples
2. Image pre-processing
3. Network initializations
4. During training
5. Activation functions
6. Regularizations
More details can be found in [1, 2].
[1] Neural Networks: Tricks of the Trade.
[2]

Data Augmentation
Augment the training data to cope with occlusion and scale changes, as in visual tracking.

Image Pre-Processing
Step 1: subtract the dataset mean value in each channel
Step 2: swap channels from RGB to BGR
Step 3: move the image channels to the outermost dimension
Step 4: rescale from [0, 1] to [0, 255]
A code sketch of these four steps follows.
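These four steps are what pycaffe's caffe.io.Transformer is typically configured to do. Below is a minimal sketch, assuming Caffe's Python bindings are installed; the file names (deploy.prototxt, weights.caffemodel, mean.npy, cat.jpg) are hypothetical placeholders, not files from the slides.

    # Minimal sketch: the four pre-processing steps via caffe.io.Transformer.
    # All file names below are placeholders.
    import numpy as np
    import caffe

    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # Shape of the input blob: N x K x H x W
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    mu = np.load('mean.npy').mean(1).mean(1)          # per-channel dataset mean
    transformer.set_mean('data', mu)                  # Step 1: subtract channel mean
    transformer.set_channel_swap('data', (2, 1, 0))   # Step 2: RGB -> BGR
    transformer.set_transpose('data', (2, 0, 1))      # Step 3: H x W x K -> K x H x W
    transformer.set_raw_scale('data', 255)            # Step 4: [0, 1] -> [0, 255]

    # caffe.io.load_image returns an H x W x K float image in [0, 1]
    image = caffe.io.load_image('cat.jpg')
    net.blobs['data'].data[...] = transformer.preprocess('data', image)
    out = net.forward()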
Network Initializations

During Training
Dropout [1] and Batch Normalization [2] help alleviate overfitting during training in Caffe.
[1] Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." Journal of Machine Learning Research 15.1 (2014): 1929-1958.
[2] S. Ioffe and C. Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." arXiv preprint arXiv:1502.03167, 2015.

Overfitting

Pros and Cons of Caffe

A Practical Example of Caffe
1. Object detection: RCNN / Fast-RCNN / Faster-RCNN (Caffe + MATLAB). The slide marks some layers with lr = 0.1 x base learning rate and others with lr = base learning rate.

2. Theano
1. Overview: a Python library that allows you to define, optimize, and evaluate mathematical expressions.
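To make "define, optimize, and evaluate" concrete, here is a minimal sketch using the standard Theano API; the example expression is illustrative, not from the slides.

    # Define a symbolic expression, compile (optimize) it, then evaluate it.
    import theano
    import theano.tensor as T

    x = T.dscalar('x')               # define symbolic double-precision scalars
    y = T.dscalar('y')
    z = x ** 2 + y                   # a symbolic mathematical expression

    f = theano.function([x, y], z)   # optimize: Theano compiles the graph
    print(f(3.0, 1.0))               # evaluate: prints 10.0

    # Theano can also differentiate the expression symbolically:
    gx = T.grad(z, x)                # dz/dx = 2x
    g = theano.function([x, y], gx)
    print(g(3.0, 1.0))               # prints 6.0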