- A year-by-year list of deep-learning-based papers on the image classification task, covering work proposed from 1998 to 2021.
- The Network Name is given as the abbreviation when the authors coined one, and as the full name otherwise. If a paper does not explicitly name its network, the name of the network used in its experiments is listed.
- Papers are organized according to https://arxiv.org/; papers submitted in the same year are not further sorted by submission date. When a paper has multiple versions, the submission year of the first submitted version is used (see the sketch after this list).
- For papers that cannot be found on arXiv, a link where the paper can be found is given instead.
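The first-submission rule above can be checked mechanically. Below is a minimal Python sketch (not part of the original list), using only the standard library and the public arXiv Atom API (`http://export.arxiv.org/api/query`): the `<published>` field of an entry holds the timestamp of version v1, so its year is the year recorded in the table. The paper ID used here (1512.03385, ResNet) is just an example.

```python
# Minimal sketch: look up the year of a paper's first submitted version
# via the public arXiv Atom API. Standard library only.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def first_submission_year(arxiv_id: str) -> int:
    """Year of version v1 of an arXiv paper (the <published> field)."""
    url = f"http://export.arxiv.org/api/query?id_list={arxiv_id}"
    with urllib.request.urlopen(url) as resp:
        entry = ET.parse(resp).getroot().find(ATOM + "entry")
    published = entry.find(ATOM + "published").text  # ISO 8601 timestamp of v1
    return int(published[:4])

print(first_submission_year("1512.03385"))  # ResNet -> 2015
```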
Year | Network Name | Title | Link |
---|---|---|---|
1998 | LeNet-5 | Gradient-based learning applied to document recognition | http://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf |
2012 | AlexNet | ImageNet Classification with Deep Convolutional Neural Networks | https://papers.nips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf |
2013 | ZFNet | Visualizing and Understanding Convolutional Networks | https://arxiv.org/abs/1311.2901v3 |
2013 | NIN | Network In Network | https://arxiv.org/abs/1312.4400 |
2014 | VGGNet | Very Deep Convolutional Networks for Large-Scale Image Recognition | https://arxiv.org/abs/1409.1556 |
2014 | GoogLeNet (Inception v1) | Going deeper with convolutions | https://arxiv.org/abs/1409.4842 |
2015 | GoogLeNet (Inception v2~v3) | Rethinking the Inception Architecture for Computer Vision | https://arxiv.org/abs/1512.00567 |
2015 | ResNet | Deep Residual Learning for Image Recognition | https://arxiv.org/abs/1512.03385 |
2015 | PReLU-Net | Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification | https://arxiv.org/abs/1502.01852 |
2016 | GoogLeNet (Inception v4, Inception-ResNet) | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | https://arxiv.org/abs/1602.07261 |
2016 | WRN | Wide Residual Networks | https://arxiv.org/abs/1605.07146 |
2016 | SDR | Deep Networks with Stochastic Depth | https://arxiv.org/abs/1603.09382 |
2016 | RiR | Resnet in Resnet: Generalizing Residual Architectures | https://arxiv.org/abs/1603.08029 |
2016 | SqueezeNet | SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | https://arxiv.org/abs/1602.07360 |
2016 | DenseNet | Densely Connected Convolutional Networks | https://arxiv.org/abs/1608.06993 |
2016 | Xception | Xception: Deep Learning with Depthwise Separable Convolutions | https://arxiv.org/abs/1610.02357 |
2016 | ResNeXt | Aggregated Residual Transformations for Deep Neural Networks | https://arxiv.org/abs/1611.05431 |
2016 | PolyNet | PolyNet: A Pursuit of Structural Diversity in Very Deep Networks | https://arxiv.org/abs/1611.05725 |
2016 | PyramidNet | Deep Pyramidal Residual Networks | https://arxiv.org/abs/1610.02915v4 |
2016 | RoR | Residual Networks of Residual Networks: Multilevel Residual Networks | https://arxiv.org/abs/1608.02908 |
2016 | FractalNet | FractalNet: Ultra-Deep Neural Networks without Residuals | https://arxiv.org/abs/1605.07648 |
2016 | DMRNet | Deep Convolutional Neural Networks with Merge-and-Run Mappings | https://arxiv.org/abs/1611.07718 |
2017 | ShuffleNet(v1) | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | https://arxiv.org/abs/1707.01083 |
2017 | IGCNets(IGCV1) | Interleaved Group Convolutions for Deep Neural Networks | https://arxiv.org/abs/1707.02725 |
2017 | MSDNet | Multi-Scale Dense Networks for Resource Efficient Image Classification | https://arxiv.org/abs/1703.09844 |
2017 | PNASNet | Progressive Neural Architecture Search | https://arxiv.org/abs/1712.00559v3 |
2017 | Residual Attention Network | Residual Attention Network for Image Classification | https://arxiv.org/abs/1704.06904 |
2017 | DPN | Dual Path Networks | https://arxiv.org/abs/1707.01629 |
2017 | SENet | Squeeze-and-Excitation Networks | https://arxiv.org/abs/1709.01507 |
2017 | CondenseNet | CondenseNet: An Efficient DenseNet using Learned Group Convolutions | https://arxiv.org/abs/1711.09224 |
2017 | NASNet | Learning Transferable Architectures for Scalable Image Recognition | https://arxiv.org/abs/1707.07012 |
2017 | MobileNet v1 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | https://arxiv.org/abs/1704.04861 |
2018 | ShuffleNet(v2) | ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design | https://arxiv.org/abs/1807.11164 |
2018 | AmoebaNet | Regularized Evolution for Image Classifier Architecture Search | https://arxiv.org/abs/1802.01548v7 |
2018 | MnasNet | MnasNet: Platform-Aware Neural Architecture Search for Mobile | https://arxiv.org/abs/1807.11626 |
2018 | IGCNets(IGCV2) | IGCV2: Interleaved Structured Sparse Convolutional Neural Networks | https://arxiv.org/abs/1804.06202 |
2018 | IGCNets(IGCV3) | IGCV3: Interleaved Low-Rank Group Convolutions for Efficient Deep Neural Networks | https://arxiv.org/abs/1806.00178 |
2018 | MobileNet v2 | MobileNetV2: Inverted Residuals and Linear Bottlenecks | https://arxiv.org/abs/1801.04381 |
2018 | Adversarial Inception v3, Ensemble Adversarial Inception ResNet v2 | Adversarial Attacks and Defences Competition | https://arxiv.org/abs/1804.00097 |
2018 | Deep Layer Aggregation | Deep Layer Aggregation | https://arxiv.org/abs/1707.06484 |
2018 | FBNet | FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search | https://arxiv.org/abs/1812.03443 |
2018 | Instagram ResNeXt WSL | Exploring the Limits of Weakly Supervised Pretraining | https://arxiv.org/abs/1805.00932 |
2018 | ResNet-D | Bag of Tricks for Image Classification with Convolutional Neural Networks | https://arxiv.org/abs/1812.01187 |
2019 | SSL ResNet, SSL ResNeXt, SWSL ResNet, SWSL ResNeXt | Billion-scale semi-supervised learning for image classification | https://arxiv.org/abs/1905.00546 |
2019 | SPNASNet | Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours | https://arxiv.org/abs/1904.02877 |
2019 | SKNets | Selective Kernel Networks | https://arxiv.org/abs/1903.06586 |
2019 | Res2Net, Res2NeXt | Res2Net: A New Multi-scale Backbone Architecture | https://arxiv.org/abs/1904.01169 |
2019 | Noisy Student | Self-training with Noisy Student improves ImageNet classification | https://arxiv.org/abs/1911.04252 |
2019 | MixNet | MixConv: Mixed Depthwise Convolutional Kernels | https://arxiv.org/abs/1907.09595 |
2019 | MobileNet v3 | Searching for MobileNetV3 | https://arxiv.org/abs/1905.02244 |
2019 | FishNet | FishNet: A Versatile Backbone for Image, Region, and Pixel Level Prediction | https://arxiv.org/abs/1901.03495 |
2019 | GhostNet | GhostNet: More Features from Cheap Operations | https://arxiv.org/abs/1911.11907 |
2019 | CSPNet | CSPNet: A New Backbone that can Enhance Learning Capability of CNN | https://arxiv.org/abs/1911.11929 |
2019 | EfficientNet | EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks | https://arxiv.org/abs/1905.11946 |
2019 | BiT | Big Transfer (BiT): General Visual Representation Learning | https://arxiv.org/abs/1912.11370 |
2019 | ECA-Net | ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks | https://arxiv.org/abs/1910.03151 |
2019 | VoVNet | An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection | https://arxiv.org/abs/1904.09730 |
2019 | HRNet | High-Resolution Representations for Labeling Pixels and Regions | https://arxiv.org/abs/1904.04514 |
2020 | RegNet | Designing Network Design Spaces | https://arxiv.org/abs/2003.13678 |
2020 | ResNeSt | ResNeSt: Split-Attention Networks | https://arxiv.org/abs/2004.08955 |
2020 | ReXNet | Rethinking Channel Dimensions for Efficient Model Design | https://arxiv.org/abs/2007.00992 |
2020 | TResNet | TResNet: High Performance GPU-Dedicated Architecture | https://arxiv.org/abs/2003.13630 |
2020 | iGPT | Generative Pretraining from Pixels | https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf |
2020 | ViT | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale | https://arxiv.org/abs/2010.11929 |
2020 | DeiT | Training data-efficient image transformers & distillation through attention | https://arxiv.org/abs/2012.12877 |
2021 | MLP-Mixer | MLP-Mixer: An all-MLP Architecture for Vision | https://arxiv.org/abs/2105.01601 |
2021 | Swin Transformer | Swin Transformer: Hierarchical Vision Transformer using Shifted Windows | https://arxiv.org/abs/2103.14030 |
2021 | CSWin Transformer | CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows | https://arxiv.org/abs/2107.00652 |
2021 | ViT_P, ViT_C | Early Convolutions Help Transformers See Better | https://arxiv.org/abs/2106.14881 |
2021 | CoAtNets | CoAtNet: Marrying Convolution and Attention for All Data Sizes | https://arxiv.org/abs/2106.04803 |
2021 | ViTs-SAM | When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations | https://arxiv.org/abs/2106.01548 |
2021 | gMLP | Pay Attention to MLPs | https://arxiv.org/abs/2105.08050 |
2021 | RVT | Towards Robust Vision Transformer | https://arxiv.org/abs/2105.07926 |
2021 | DnC | Divide and Contrast: Self-supervised Learning from Uncurated Data | https://arxiv.org/abs/2105.08054 |
2021 | PVT | Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions | https://arxiv.org/abs/2102.12122 |
2021 | PiT | Rethinking Spatial Dimensions of Vision Transformers | https://arxiv.org/abs/2103.16302 |
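For hands-on experimentation, many of the backbones in this list (ResNet, ResNeXt, SENet, EfficientNet, ViT, DeiT, Swin Transformer, and others) have community implementations with pretrained weights. Below is a minimal sketch, assuming the third-party `timm` package and PyTorch are installed; `"resnet50"` is just one example identifier from timm's model registry, and registry names do not always match the papers' own naming.

```python
# Minimal sketch assuming `pip install timm torch`; `timm` ships
# implementations and pretrained weights for many networks in this list.
import timm
import torch

# Build an ImageNet-pretrained classifier; "resnet50" is one example name.
model = timm.create_model("resnet50", pretrained=True)
model.eval()

# Run a dummy 224x224 RGB image through the classifier.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)

print(logits.shape)  # torch.Size([1, 1000]) -> ImageNet-1k logits
```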