CVPR 2019 | Open-Source Code for 9 CVPR Papers (Pedestrian Detection, Object Detection, 3D Face, etc.)
A while ago, CVPR 2019, the top conference in computer vision, announced its acceptance results, and 极市 (Extreme Mart) covered the news in an earlier report: "1,300 papers! CVPR 2019 acceptance results are out. Did yours make it?" The official list of accepted papers has since been published, and 极市 has compiled links to all papers and code released publicly so far (680 papers collected to date). Today's newly added papers are listed below:
Full compilation of CVPR 2019 papers:
https://github.com/extreme-assistant/cvpr2019
CVPR 2019 paper interpretations:
http://bbs.cvmart.net/topics/287/cvpr2019
1. Pedestrian Detection with Autoregressive Network Phases
Authors: Garrick Brazil, Xiaoming Liu
Paper: https://arxiv.org/abs/1812.00440
Code: https://github.com/garrickbrazil/AR-Ped
2. MVF-Net: Multi-View 3D Face Morphable Model Regression
Authors: Fanzi Wu, Linchao Bao, Yajing Chen, Yonggen Ling, Yibing Song, Songnan Li, King Ngi Ngan, Wei Liu
Paper: https://arxiv.org/abs/1904.04473
Code: https://github.com/Fanziapril/mvfnet
3. Detecting Overfitting of Deep Generators via Latent Recovery
Authors: Ryan Webster, Julien Rabin, Loic Simon, Frederic Jurie
Paper: https://arxiv.org/pdf/1901.03396v1.pdf
Code: https://github.com/ryanwebster90/gen-overfitting-latent-recovery
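As a rough illustration of the paper's core tool, the toy sketch below performs latent recovery: optimize a latent code z so that G(z) reconstructs a given image, then compare the recovery error on training versus held-out images (an overfitted generator recovers its training images markedly better). The generator G, its dimensions, and the random images here are placeholders, not the models or data studied in the paper.

```python
import torch
import torch.nn as nn

# Placeholder generator: maps a 64-d latent code to a 3x32x32 "image" in [-1, 1].
G = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Tanh())


def recover_latent(x, steps=200, lr=0.05):
    """Optimize a latent code to reconstruct x; return the final per-pixel MSE."""
    z = torch.zeros(1, 64, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        err = ((G(z).view_as(x) - x) ** 2).mean()
        err.backward()
        opt.step()
    return err.item()


x_train = torch.rand(1, 3, 32, 32) * 2 - 1    # stand-in for a training image
x_heldout = torch.rand(1, 3, 32, 32) * 2 - 1  # stand-in for a held-out image
print(recover_latent(x_train), recover_latent(x_heldout))
```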
4. Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes
Authors: Yiran Zhong, Pan Ji, Jianyuan Wang, Yuchao Dai, Hongdong Li
Paper: https://arxiv.org/pdf/1904.03848v1.pdf
Code: https://github.com/yiranzhong/EPIflow
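For intuition, the sketch below writes down the classical epipolar constraint that epipolar-flow methods build on: a point x1 in frame 1 and its flow-displaced match x2 in frame 2 should satisfy x2ᵀ F x1 = 0 for the fundamental matrix F. This is textbook two-view geometry expressed as a differentiable penalty, not the authors' actual loss, and the random F and points are placeholders (a real F would be estimated or come from calibration).

```python
import torch


def epipolar_residual(pts1, flow, F):
    """Mean absolute algebraic epipolar error x2^T F x1 for flow-induced matches.

    pts1: (N, 2) pixel coordinates in frame 1
    flow: (N, 2) predicted flow vectors
    F:    (3, 3) fundamental matrix mapping frame-1 points to frame-2 epipolar lines
    """
    ones = torch.ones(pts1.shape[0], 1)
    x1 = torch.cat([pts1, ones], dim=1)          # homogeneous points in frame 1
    x2 = torch.cat([pts1 + flow, ones], dim=1)   # flow-matched points in frame 2
    return (x2 * (x1 @ F.T)).sum(dim=1).abs().mean()


pts1 = torch.rand(100, 2) * 256                  # placeholder pixel coordinates
flow = torch.randn(100, 2, requires_grad=True)   # stand-in for a predicted flow field
F = torch.randn(3, 3)                            # placeholder fundamental matrix
loss = epipolar_residual(pts1, flow, F)
loss.backward()                                  # differentiable w.r.t. the flow
```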
5. Isospectralization, or how to hear shape, style, and correspondence
Authors: Luca Cosmo, Mikhail Panine, Arianna Rampini, Maks Ovsjanikov, Michael M. Bronstein, Emanuele Rodolà
Paper: https://arxiv.org/abs/1811.11465v2
Code: https://github.com/lcosmo/isospectralization
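As a very loose analogue of the title, the toy sketch below "isospectralizes" a small weighted graph: it optimizes edge weights by gradient descent until the graph Laplacian's eigenvalues match a target spectrum. The paper itself optimizes mesh vertex positions with cotangent Laplacians and additional regularizers; everything here (graph size, parametrization, loss) is a simplifying assumption for illustration only.

```python
import torch

n = 8                          # number of graph nodes (arbitrary)
n_edges = n * (n - 1) // 2
torch.manual_seed(0)


def laplacian(edge_params):
    """Weighted graph Laplacian from free edge parameters (softplus keeps weights > 0)."""
    w = torch.nn.functional.softplus(edge_params)
    W = torch.zeros(n, n)
    iu = torch.triu_indices(n, n, offset=1)
    W[iu[0], iu[1]] = w
    W = W + W.T
    return torch.diag(W.sum(dim=1)) - W


# Target spectrum: eigenvalues of a randomly weighted graph.
target = torch.linalg.eigvalsh(laplacian(torch.randn(n_edges)))

params = torch.randn(n_edges, requires_grad=True)
opt = torch.optim.Adam([params], lr=0.05)
for step in range(500):
    opt.zero_grad()
    evals = torch.linalg.eigvalsh(laplacian(params))  # differentiable spectrum
    loss = ((evals - target) ** 2).mean()             # spectral alignment loss
    loss.backward()
    opt.step()

print(f"final spectral mismatch: {loss.item():.3e}")
```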
6. Exploring the Bounds of the Utility of Context for Object Detection
Authors: Ehud Barnea, Ohad Ben-Shahar
Paper: https://arxiv.org/abs/1711.05471v4
Code: https://github.com/EhudBarnea/ContextAnalysis
7. Deformable ConvNets v2: More Deformable, Better Results
Authors: Xizhou Zhu, Han Hu, Stephen Lin, Jifeng Dai
Paper: https://arxiv.org/pdf/1811.11168v2.pdf
Code: https://github.com/msracver/Deformable-ConvNets
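For context, the core operator in DCNv2 is modulated deformable convolution: a small side branch predicts per-sample offsets plus a modulation mask that reweights each sampled value before the convolution. Below is a minimal, hedged PyTorch sketch of that operator using torchvision.ops.DeformConv2d (the mask argument needs torchvision >= 0.9); it is not the authors' MXNet release linked above, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class ModulatedDeformBlock(nn.Module):
    """Sketch of a DCNv2-style block: conv branch -> offsets + mask -> deform conv."""

    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        # Per kernel sample: 2 offset channels (x, y) and 1 mask channel.
        self.offset_mask = nn.Conv2d(in_ch, 3 * k * k, k, padding=padding)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=padding)
        # Zero-init so predicted offsets start at zero (regular grid sampling).
        nn.init.zeros_(self.offset_mask.weight)
        nn.init.zeros_(self.offset_mask.bias)

    def forward(self, x):
        o1, o2, mask = torch.chunk(self.offset_mask(x), 3, dim=1)
        offset = torch.cat([o1, o2], dim=1)   # (N, 2*k*k, H, W) sampling offsets
        mask = torch.sigmoid(mask)            # modulation scalars in [0, 1]
        return self.deform(x, offset, mask)


x = torch.randn(1, 16, 32, 32)
print(ModulatedDeformBlock(16, 32)(x).shape)  # torch.Size([1, 32, 32, 32])
```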
8. From Recognition to Cognition: Visual Commonsense Reasoning (Oral)
Authors: Rowan Zellers, Yonatan Bisk, Ali Farhadi, Yejin Choi
Paper: https://arxiv.org/pdf/1811.10830v2.pdf
Code: https://github.com/rowanz/r2c
9. Unsupervised Visual Domain Adaptation: A Deep Max-Margin Gaussian Process Approach (Oral)
Authors: Minyoung Kim, Pritish Sahu, Behnam Gholami, Vladimir Pavlovic
Paper: https://arxiv.org/pdf/1902.08727.pdf
Code: https://github.com/seqam-lab/GPDA