哈工大机器学习历年考试 (Harbin Institute of Technology, Machine Learning: Past Exam Questions)
1 Give the definitions or your comprehensions of the following terms. (12')
1.1 The inductive learning hypothesis. (P17)
1.2 Overfitting. (P49)
1.4 Consistent learner. (P148)

2 Give brief answers to the following questions. (15')
2.2 If the size of a version space is $|VS|$, in general what is the smallest number of queries that may be required by a concept learner using an optimal query strategy to perfectly learn the target concept? (P27)
2.3 In general, decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances. What expression does the following decision tree correspond to?

[Figure: a decision tree with root OutLook (branches Sunny, Overcast, Rain); the Sunny branch tests Humidity (High -> No, Normal -> Yes), the Overcast branch is the leaf Yes, and the Rain branch tests Wind (Strong -> No, Weak -> Yes).]

3 Give the explanation of inductive bias, and list the inductive bias of the CANDIDATE-ELIMINATION algorithm, decision tree learning (ID3), and the BACKPROPAGATION algorithm. (10')

4 How to solve overfitting in decision trees and neural networks? (10')
Solution:
Decision tree: stop growing the tree earlier; post-pruning.
Neural network: weight decay; a validation set.

5 Prove that the LMS weight update rule
$w_i \leftarrow w_i + \eta\,(V_{train}(b) - \hat{V}(b))\,x_i$
performs a gradient descent to minimize the squared error. In particular, define the squared error $E$ as in the text. Now calculate the derivative of $E$ with respect to the weight $w_i$, assuming that $\hat{V}(b)$ is a linear function as defined in the text. Gradient descent is achieved by updating each weight in proportion to $-\partial E/\partial w_i$. Therefore, you must show that the LMS training rule alters weights in this proportion for each training example it encounters.
($E \equiv \sum_{\langle b, V_{train}(b) \rangle \in \text{training examples}} (V_{train}(b) - \hat{V}(b))^2$) (8')
Solution: Since $V_{train}(b) \leftarrow \hat{V}(Successor(b))$, we have
$E = \sum (V_{train}(b) - \hat{V}(b))^2$,
$\hat{V}(b) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_3 + w_4 x_4 + w_5 x_5 + w_6 x_6$,
$\partial E/\partial w_i = \sum 2\,(V_{train}(b) - \hat{V}(b)) \cdot \partial(V_{train}(b) - \hat{V}(b))/\partial w_i = -2 \sum (V_{train}(b) - \hat{V}(b))\,x_i$.
As stated in the LMS rule, $w_i \leftarrow w_i + \eta\,(V_{train}(b) - \hat{V}(b))\,x_i$, so $\Delta w_i = -(\eta/2)\,\partial E/\partial w_i$.
Therefore gradient descent is achieved by updating each weight in proportion to $-\partial E/\partial w_i$; the LMS rule alters weights in this proportion for each training example it encounters.

6 True or false: if decision tree D2 is an elaboration of tree D1, then D1 is more-general-than D2. Assume D1 and D2 are decision trees representing arbitrary boolean functions, and that D2 is an elaboration of D1 if ID3 could extend D1 into D2. If true, give a proof; if false, a counterexample.
(Definition: Let $h_j$ and $h_k$ be boolean-valued functions defined over $X$. Then $h_j$ is more_general_than_or_equal_to $h_k$ (written $h_j \ge_g h_k$) if and only if $(\forall x \in X)[(h_k(x) = 1) \rightarrow (h_j(x) = 1)]$; then $h_j >_g h_k \equiv (h_j \ge_g h_k) \land \lnot(h_k \ge_g h_j)$.) (10')
Solution: The hypothesis is false. One counterexample is A XOR B: the training examples are all positive when A != B and all negative when A == B; then, using ID3 to extend D1, the new tree D2 will be equivalent to D1, i.e., D2 is equal to D1.

7 Design a two-input perceptron that implements the boolean function A ∧ ¬B. Design a two-layer network of perceptrons that implements A XOR B. (10')

8 Suppose a hypothesis space contains three hypotheses $h_1$, $h_2$, $h_3$, and the posterior probabilities of these hypotheses given the training data are 0.4, 0.3, and 0.3 respectively. If a new instance x is encountered, which is classified positive by $h_1$ but negative by $h_2$ and $h_3$, give the result and the detailed classification process of the Bayes optimal classifier. (10') (P125)

9 Suppose S is a collection of training-example days described by attributes including Humidity, which can have the values High or Normal. Assume S is a collection containing 10 examples, [7+, 3-]. Of these 10 examples, suppose 3 of the positive and 2 of the negative examples have Humidity = High, and the remainder have Humidity = Normal. Please calculate the information gain due to sorting the original 10 examples by the attribute Humidity.
($\log_2 1 = 0$, $\log_2 2 = 1$, $\log_2 3 = 1.58$, $\log_2 4 = 2$, $\log_2 5 = 2.32$, $\log_2 6 = 2.58$, $\log_2 7 = 2.8$, $\log_2 8 = 3$, $\log_2 9 = 3.16$, $\log_2 10 = 3.32$) (5')

[Figure: S = [7+, 3-] is split on Humidity; the High branch holds [3+, 2-] and the Normal branch holds [4+, 1-].]

Solution:
(a) Denote S = [7+, 3-]; then
Entropy([7+, 3-]) = $-\frac{7}{10}\log_2\frac{7}{10} - \frac{3}{10}\log_2\frac{3}{10}$ = 0.886.
(b) Gain(S, Humidity) = Entropy(S) $- \sum_{v \in Values(Humidity)} \frac{|S_v|}{|S|}$ Entropy($S_v$), where Values(Humidity) = {High, Normal} and $S_{High} = \{s \in S \mid Humidity(s) = High\}$.
Entropy($S_{High}$) = $-\frac{3}{5}\log_2\frac{3}{5} - \frac{2}{5}\log_2\frac{2}{5}$ = 0.972, with $|S_{High}|$ = 5.
Entropy($S_{Normal}$) = $-\frac{4}{5}\log_2\frac{4}{5} - \frac{1}{5}\log_2\frac{1}{5}$ = 0.72, with $|S_{Normal}|$ = 5.
Thus Gain(S, Humidity) = 0.886 - (5/10)(0.972) - (5/10)(0.72) = 0.04.

10 Finish the following algorithms. (10')
(1) GRADIENT-DESCENT(training_examples, $\eta$)
Each training example is a pair of the form $\langle \vec{x}, t \rangle$, where $\vec{x}$ is the vector of input values and $t$ is the target output value. $\eta$ is the learning rate (e.g., 0.05).
- Initialize each $w_i$ to some small random value.
- Until the termination condition is met, Do
  - Initialize each $\Delta w_i$ to zero.
  - For each $\langle \vec{x}, t \rangle$ in training_examples, Do
    - Input the instance $\vec{x}$ to the unit and compute the output $o$.
    - For each linear unit weight $w_i$, Do
      $\Delta w_i \leftarrow \Delta w_i + \eta (t - o) x_i$
  - For each linear unit weight $w_i$, Do
      $w_i \leftarrow w_i + \Delta w_i$

(2) FIND-S algorithm
- Initialize h to the most specific hypothesis in H.
- For each positive training instance x:
  - For each attribute constraint $a_i$ in h:
    - If the constraint $a_i$ is satisfied by x, Then do nothing;
    - Else replace $a_i$ in h by the next more general constraint that is satisfied by x.
- Output hypothesis h.

1. What is the definition of a learning problem? (5) Use "a checkers learning problem" as an example to state how to design a learning system. (15)
Answer: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience. (5)
Example: A checkers learning problem:
T: play checkers (1)
P: percentage of games won in a tournament (1)
E: opportunity to play against itself (1)
To design a learning system:
Step 1: Choosing the Training Experience (4)
A checkers learning problem:
Task T: playing checkers
Performance measure P: percent of games won in th
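For Question 2.3, assuming the garbled figure is the standard PlayTennis tree from Mitchell's textbook, the corresponding disjunction of conjunctions can be written down and sanity-checked as a small Python predicate (the function name `play_tennis` is ours, for illustration):

```python
# (Outlook = Sunny AND Humidity = Normal) OR (Outlook = Overcast)
# OR (Outlook = Rain AND Wind = Weak) -- the expression the tree encodes.
def play_tennis(outlook, humidity, wind):
    return ((outlook == "Sunny" and humidity == "Normal")
            or outlook == "Overcast"
            or (outlook == "Rain" and wind == "Weak"))

assert play_tennis("Sunny", "Normal", "Strong")   # Sunny/Normal branch -> Yes
assert not play_tennis("Sunny", "High", "Weak")   # Sunny/High branch -> No
assert play_tennis("Overcast", "High", "Strong")  # Overcast leaf -> Yes
assert not play_tennis("Rain", "Normal", "Strong")  # Rain/Strong branch -> No
```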
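The gradient-descent argument in Question 5 can be checked numerically. The board features `x`, weights `w`, and target `v_train` below are made-up illustration values; the point is only that one LMS step shrinks the squared error on that example:

```python
# Numerical sketch of Question 5: the LMS update w_i <- w_i + eta*(V_train - V_hat)*x_i
# moves the weights so that the squared error E = (V_train - V_hat)^2 decreases.

def v_hat(w, x):
    # Linear evaluation function: w0 + w1*x1 + ... + w6*x6 (w[0] is the bias,
    # i.e. its input x0 is implicitly 1).
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

def lms_update(w, x, v_train, eta=0.1):
    err = v_train - v_hat(w, x)
    return [w[0] + eta * err] + [wi + eta * err * xi
                                 for wi, xi in zip(w[1:], x)]

x = [3, 0, 1, 0, 0, 2]                           # hypothetical features x1..x6
w = [0.5, 0.1, -0.2, 0.3, 0.0, 0.4, -0.1]        # hypothetical weights w0..w6
v_train = 1.0

e_before = (v_train - v_hat(w, x)) ** 2
w2 = lms_update(w, x, v_train)
e_after = (v_train - v_hat(w2, x)) ** 2
assert e_after < e_before  # one LMS step reduced the squared error
```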
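For Question 7, one valid choice of weights and thresholds (by no means the only one) is sketched below; each unit outputs 1 when its weighted sum plus bias is positive, and the XOR network feeds two hidden units into an OR output unit:

```python
def perceptron(weights, bias, inputs):
    # Threshold unit: output 1 if the weighted sum plus bias is positive.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def a_and_not_b(a, b):
    # w_A = 1, w_B = -1, bias = -0.5: fires only when A = 1 and B = 0.
    return perceptron([1, -1], -0.5, [a, b])

def xor(a, b):
    # Hidden layer computes (A AND NOT B) and (B AND NOT A);
    # the output unit ORs the two hidden outputs.
    h1 = perceptron([1, -1], -0.5, [a, b])
    h2 = perceptron([-1, 1], -0.5, [a, b])
    return perceptron([1, 1], -0.5, [h1, h2])

assert [a_and_not_b(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 1, 0]
assert [xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

A single perceptron cannot represent XOR (it is not linearly separable), which is why the second part requires two layers.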
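The Bayes optimal computation asked for in Question 8 is short enough to verify directly: sum the posterior mass voting for each label and pick the larger total:

```python
# Question 8: Bayes optimal classification of the new instance x.
posteriors = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
votes = {"h1": "+", "h2": "-", "h3": "-"}  # each hypothesis's label for x

p_pos = sum(p for h, p in posteriors.items() if votes[h] == "+")
p_neg = sum(p for h, p in posteriors.items() if votes[h] == "-")

decision = "+" if p_pos > p_neg else "-"
# P(+|D) = 0.4, P(-|D) = 0.6, so the Bayes optimal classification is negative,
# even though the single most probable hypothesis h1 says positive.
assert decision == "-"
```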
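The arithmetic in Question 9's solution can be re-done with exact logarithms; the exam's rounded log table gives Gain = 0.04, while exact values give about 0.035:

```python
# Question 9: information gain of Humidity on S = [7+, 3-],
# with High = [3+, 2-] and Normal = [4+, 1-].
from math import log2

def entropy(pos, neg):
    total = pos + neg
    e = 0.0
    for k in (pos, neg):
        if k:                      # 0*log(0) is taken as 0
            p = k / total
            e -= p * log2(p)
    return e

gain = entropy(7, 3) - (5 / 10) * entropy(3, 2) - (5 / 10) * entropy(4, 1)
assert abs(entropy(7, 3) - 0.886) < 0.01   # matches the rounded 0.886
assert abs(gain - 0.04) < 0.01             # close to the exam's rounded 0.04
```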
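The completed GRADIENT-DESCENT pseudocode in Question 10(1) translates directly into runnable code. The toy targets t = 2*x1 - x2 are an assumption for this demo, chosen so a linear unit can fit them exactly:

```python
# Batch gradient descent for a linear unit, following Question 10(1):
# accumulate delta_w over all examples, then apply it once per epoch.
import random

def gradient_descent(training_examples, eta=0.05, epochs=500):
    n = len(training_examples[0][0])
    random.seed(0)
    w = [random.uniform(-0.05, 0.05) for _ in range(n + 1)]  # w[0] is the bias
    for _ in range(epochs):
        delta = [0.0] * (n + 1)                 # initialize each delta_w_i to zero
        for x, t in training_examples:
            o = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            delta[0] += eta * (t - o)           # bias input x0 = 1
            for i, xi in enumerate(x, start=1):
                delta[i] += eta * (t - o) * xi  # delta_w_i += eta*(t - o)*x_i
        for i in range(n + 1):
            w[i] += delta[i]                    # w_i <- w_i + delta_w_i
    return w

examples = [([x1, x2], 2 * x1 - x2) for x1 in (0, 1, 2) for x2 in (0, 1)]
w = gradient_descent(examples)
assert abs(w[1] - 2) < 0.05 and abs(w[2] + 1) < 0.05  # recovers t = 2*x1 - x2
```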
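The FIND-S pseudocode in Question 10(2) can also be made concrete. Constraints are represented as a specific value, "?" (anything), or None (the empty constraint of the most specific hypothesis); the attribute tuples below are EnjoySport-style examples invented for illustration:

```python
def find_s(positive_examples):
    n = len(positive_examples[0])
    h = [None] * n                      # most specific hypothesis in H
    for x in positive_examples:
        for i, ai in enumerate(h):
            if ai == x[i] or ai == "?":
                pass                    # constraint satisfied by x: do nothing
            elif ai is None:
                h[i] = x[i]             # generalize empty constraint to x's value
            else:
                h[i] = "?"              # next more general constraint satisfied by x
    return h

pos = [("Sunny", "Warm", "Normal", "Strong"),
       ("Sunny", "Warm", "High", "Strong")]
assert find_s(pos) == ["Sunny", "Warm", "?", "Strong"]
```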