KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera*

Shahram Izadi¹, David Kim¹,³, Otmar Hilliges¹, David Molyneaux¹,⁴, Richard Newcombe², Pushmeet Kohli¹, Jamie Shotton¹, Steve Hodges¹, Dustin Freeman¹,⁵, Andrew Davison², Andrew Fitzgibbon¹

¹Microsoft Research Cambridge, UK   ²Imperial College London, UK   ³Newcastle University, UK   ⁴Lancaster University, UK   ⁵University of Toronto, Canada

Figure 1: KinectFusion enables real-time detailed 3D reconstructions of indoor scenes using only the depth data from a standard Kinect camera. A) User points Kinect at coffee table scene. B) Phong shaded reconstructed 3D model (the wireframe frustum shows current tracked 3D pose of Kinect). C) 3D model texture mapped using Kinect RGB data with real-time particles simulated on the 3D model as reconstruction occurs. D) Multi-touch interactions performed on any reconstructed surface. E) Real-time segmentation and 3D tracking of a physical object.

ABSTRACT
KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct geometrically precise 3D models of the physical scene in real-time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline, are described in full. We show uses of the core system for low-cost handheld scanning, and geometry-aware augmented reality and physics-based interactions. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.

ACM Classification: H5.2 [Information Interfaces and Presentation]: User Interfaces. I4.5 [Image Processing and Computer Vision]: Reconstruction. I3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.

General terms: Algorithms, Design, Human Factors.

Keywords: 3D, GPU, Surface Reconstruction, Tracking, Depth Cameras, AR, Physics, Geometry-Aware Interactions

*Research conducted at Microsoft Research Cambridge, UK

Permission to
make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
UIST'11, October 16-19, 2011, Santa Barbara, CA, USA.
Copyright 2011 ACM 978-1-4503-0716-1/11/10...$10.00.

INTRODUCTION
While depth cameras are not conceptually new, Kinect has made such sensors accessible to all. The quality of the depth sensing, given the low-cost and real-time nature of the device, is compelling, and has made the sensor instantly popular with researchers and enthusiasts alike.

The Kinect camera uses a structured light technique [8] to generate real-time depth maps containing discrete range measurements of the physical scene. This data can be reprojected as a set of discrete 3D points (or point cloud). Even though the Kinect depth data is compelling, particularly compared to other commercially available depth cameras, it is still inherently noisy (see Figures 2B and 3 left). Depth measurements often fluctuate and depth maps contain numerous 'holes' where no readings were obtained.

To generate 3D models for use in applications such as gaming, physics, or CAD, higher-level surface geometry needs to be inferred from this noisy point-based data. One simple approach makes strong assumptions about the connectivity of neighboring points within the Kinect depth map to generate a mesh representation. This, however, leads to noisy and low-quality meshes, as shown in Figure 2C. As importantly, this approach creates an incomplete mesh, from only a single, fixed viewpoint. To create a complete (or even watertight) 3D model, different viewpoints of the physical scene must be captured and fused into a single representation.

This paper presents a novel interactive reconstruction system called KinectFusion (see Figure 1). The system takes live depth data from a moving Kinect camera and, in real-time, creates a single high-quality, geometrically accurate, 3D model. A user holding a standard Kinect camera can move within any indoor space, and reconstruct a 3D model of the physical scene within seconds.

Figure 2: A) RGB image of scene. B) Normals extracted from raw Kinect depth map. C) 3D mesh created from a single depth map. D and E) 3D model generated from KinectFusion showing surface normals (D) and rendered with Phong shading (E).

The system continuously tracks the 6 degrees-of-freedom (DOF) pose of the camera and fuses new viewpoints of the scene into a global surface-based representation. A novel GPU pipeline allows for accurate camera tracking and surface reconstruction at interactive real-time rates. This paper details the capabilities of our novel system, as well as the implementation of the GPU pipeline in full. We demonstrate core uses of KinectFusion as a low-cost handheld scanner, and present novel interactive methods for segmenting physical objects of interest from the reconstructed scene. We show how a real-time 3D model can be leveraged for geometry-aware augmented reality (AR) and physics-based interactions, where virtual worlds more realistically merge and interact with the real. Placing such systems into an interaction context, where users need to dynamically interact in front of the sensor, reveals
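The reprojection step described in the introduction — turning a depth map's discrete range measurements into a point cloud — is standard pinhole-camera back-projection. The sketch below illustrates it under assumed intrinsics (the focal lengths and principal point are illustrative placeholders, not Kinect's calibrated values); it also drops zero-depth pixels, corresponding to the 'holes' the paper describes.

```python
# Back-project a depth map into a camera-space 3D point cloud
# (pinhole model). Intrinsics here are illustrative assumptions,
# not Kinect's actual calibration.
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Reproject a depth map (in metres) to 3D points.

    Pixels with no reading (depth == 0, the 'holes' in raw Kinect
    data) are discarded rather than back-projected.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coords
    z = depth_m
    valid = z > 0                       # mask out missing readings
    x = (u - cx) * z / fx               # inverse of projection u = fx*x/z + cx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# Toy 2x2 depth map with one missing reading (the zero).
depth = np.array([[1.0, 0.0],
                  [2.0, 1.5]])
pts = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
print(pts.shape)  # (3, 3): three valid pixels, xyz for each
```

Each such per-frame cloud lives in camera space; it is the tracked 6DOF camera pose that lets successive clouds be fused into the single global representation the system maintains.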