Very Deep Convolutional Networks for Large-Scale Image Recognition

Karen Simonyan and Andrew Zisserman
Visual Geometry Group, University of Oxford
{karen,az}@robots.ox.ac.uk

Abstract

In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16–19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively.

1 Introduction

Convolutional networks (ConvNets) have recently enjoyed a great success in large-scale visual recognition [10, 16, 17, 19], which has become possible due to the large public image repositories, such as ImageNet [4], and high-performance computing systems, such as GPUs or large-scale distributed clusters [3]. In particular, an important role in the advance of deep visual recognition architectures has been played by the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) [1], which has served as a testbed for a few generations of large-scale image classification systems, from high-dimensional shallow feature encodings [13] (the winner of ILSVRC-2011) to deep ConvNets [10] (the winner of ILSVRC-2012).

With ConvNets becoming more of a commodity in the computer vision field, a number of attempts have been made to improve the original architecture of [10] in a bid to achieve better accuracy. For instance, the best-performing submissions to the ILSVRC-2013 [16, 19] utilised smaller receptive window size and smaller stride of the first convolutional layer. Another line of improvements dealt with training and testing the networks densely over the whole image and over multiple scales [7, 16].

In this paper, we address another important aspect of ConvNet architecture design – its depth. To this end, we fix other parameters of the architecture, and steadily increase the depth of the network by adding more convolutional layers.

The rest of the paper is organised as follows. In Sect. 2, we describe our ConvNet configurations. The details of the image classification training and evaluation are then presented in Sect. 3.
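To make the depth-scaling idea above concrete, the following is a minimal sketch (not the authors' implementation) of a family of ConvNets in which every design choice is held fixed except the number of convolutional layers per stage. It assumes a PyTorch implementation, stacked 3×3 convolutions with 2×2 max-pooling, and illustrative channel widths and classifier sizes; the helper names conv_stage and make_convnet, and the particular layer counts, are hypothetical rather than taken from the configurations of Sect. 2.

```python
# A minimal sketch (assumed PyTorch, not the authors' code): keep the
# architectural choices fixed (3x3 convolutions, 2x2 max-pooling, ReLU,
# the same fully connected classifier) and vary only the number of
# convolutional layers per stage to obtain deeper configurations.
import torch
import torch.nn as nn


def conv_stage(in_channels: int, out_channels: int, num_convs: int) -> nn.Sequential:
    """A stage of `num_convs` 3x3 convolutions followed by 2x2 max-pooling."""
    layers = []
    for i in range(num_convs):
        layers += [
            nn.Conv2d(in_channels if i == 0 else out_channels,
                      out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        ]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)


def make_convnet(convs_per_stage: list, num_classes: int = 1000) -> nn.Sequential:
    """Depth is controlled solely by `convs_per_stage`; everything else is fixed."""
    channels = [64, 128, 256, 512, 512]  # illustrative widths, one per stage
    stages, in_ch = [], 3
    for n, out_ch in zip(convs_per_stage, channels):
        stages.append(conv_stage(in_ch, out_ch, n))
        in_ch = out_ch
    classifier = nn.Sequential(
        nn.Flatten(),
        nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True),
        nn.Linear(4096, 4096), nn.ReLU(inplace=True),
        nn.Linear(4096, num_classes),
    )
    return nn.Sequential(*stages, classifier)


# Increasing depth while keeping the other parameters fixed:
shallow = make_convnet([1, 1, 2, 2, 2])   # 8 conv + 3 FC = 11 weight layers
deeper = make_convnet([2, 2, 3, 3, 3])    # 13 conv + 3 FC = 16 weight layers

x = torch.randn(1, 3, 224, 224)           # 224x224 RGB input
print(deeper(x).shape)                    # torch.Size([1, 1000])
```

Because each 3×3 convolution uses a padding of 1, the spatial resolution is unchanged within a stage, so additional layers can be inserted without altering the feature-map sizes seen by the pooling layers or the classifier.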