Papers Read on AI

May 9, 2022  

Better plain ViT baselines for ImageNet-1k

It is commonly accepted that the Vision Transformer model requires sophisticated regularization techniques to excel at ImageNet-1k scale data. Surprisingly, we find this is not the case and standard data augmentation is sufficient. This note presents a few minor modifications to the original Vision Transformer (ViT) vanilla training setting that dramatically improve the performance of plain ViT models. Notably, 90 epochs of training surpass 76% top-1 accuracy in under seven hours on a TPUv3-8, similar to the classic ResNet50 baseline, and 300 epochs of training reach 80% in less than one day.

2022: L. Beyer, Xiaohua Zhai, Alexander Kolesnikov

https://arxiv.org/pdf/2205.01580v1.pdf
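
The abstract above does not enumerate the modifications, so the following is only a generic plain-ViT skeleton in PyTorch (patch embedding, standard Transformer encoder, global-average-pooling head) to fix ideas; it is not the paper's exact recipe, and the hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

class PlainViT(nn.Module):
    """Bare-bones ViT: patchify, add position embeddings, run a standard
    Transformer encoder, average the tokens, classify with a linear head."""
    def __init__(self, image_size=224, patch_size=16, dim=384, depth=12,
                 heads=6, n_classes=1000):
        super().__init__()
        n_patches = (image_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           activation="gelu",
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                    # x: (B, 3, H, W)
        x = self.patch_embed(x)              # (B, dim, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)     # (B, N, dim) patch tokens
        x = x + self.pos_embed
        x = self.encoder(x)
        return self.head(x.mean(dim=1))      # global average pooling head
```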

May 9, 2022  

OPT: Open Pre-trained Transformer Language Models

Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.

2022: Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer

Ranked #1 on Stereotypical Bias Analysis on CrowS-Pairs

https://arxiv.org/pdf/2205.01068v3.pdf
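
Assuming the released checkpoints are the ones hosted on the Hugging Face Hub under facebook/opt-*, a minimal way to try the smallest model looks like this:

```python
# pip install torch transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-125m"  # smallest member of the OPT suite
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("Large language models are", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```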

May 6, 2022  

Quantile Encoder: Tackling High Cardinality Categorical Features in Regression Problems

In this paper, we provide an in-depth analysis of how to tackle high cardinality categorical features with quantiles of the target variable. Our proposal outperforms state-of-the-art encoders, including the traditional statistical mean target encoder, when considering the Mean Absolute Error, especially in the presence of long-tailed or skewed distributions.

2021: Carlos Mougán, D. Masip, Jordi Nin, O. Pujol

https://arxiv.org/pdf/2105.13783v2.pdf
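
A minimal pandas sketch of the idea: replace each category by a quantile (e.g. the median) of the target computed within that category, with a global fallback for rare or unseen categories. The paper additionally regularizes the estimate and can combine several quantiles; the column names and min_count threshold below are placeholders.

```python
import pandas as pd

def quantile_encode(train, test, cat_col, target_col, q=0.5, min_count=20):
    """Map each category to the q-th quantile of the target inside that
    category; rare (< min_count) and unseen categories fall back to the
    global quantile of the target."""
    global_q = train[target_col].quantile(q)
    grouped = train.groupby(cat_col)[target_col]
    counts = grouped.count()
    quantiles = grouped.quantile(q)
    mapping = quantiles.where(counts >= min_count, global_q)
    encode = lambda df: df[cat_col].map(mapping).fillna(global_q)
    return encode(train), encode(test)
```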

May 5, 2022  

LM-Debugger: An Interactive Tool for Inspection and Intervention in Transformer-Based Language Models

In this work, we introduce LM-Debugger, an interactive debugger tool for transformer-based LMs, which provides a fine-grained interpretation of the model’s internal prediction process, as well as a powerful framework for intervening in LM behavior. For its backbone, LM-Debugger relies on a recent method that interprets the inner token representations and their updates by the feed-forward layers in the vocabulary space.

2022: Mor Geva, Avi Caciularu, Guy Dar, Paul Roit, Shoval Sadde, Micah Shlain, Bar Tamir, Yoav Goldberg

https://arxiv.org/pdf/2204.12130v1.pdf
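
LM-Debugger itself is a full interactive tool; the sketch below only hand-rolls the underlying idea for GPT-2 via Hugging Face transformers (an assumption about a comparable setup): capture a layer's feed-forward output with a hook and project it through the output embedding matrix to see which tokens it promotes.

```python
# pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

captured = {}
layer = 6  # which Transformer block's feed-forward (MLP) output to inspect
handle = model.transformer.h[layer].mlp.register_forward_hook(
    lambda module, inputs, output: captured.update(ffn=output))

enc = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    model(**enc)
handle.remove()

# Project the feed-forward update at the last position into vocabulary space.
update = captured["ffn"][0, -1]        # (hidden_dim,)
scores = model.lm_head(update)         # (vocab_size,) via the tied embeddings
top = scores.topk(10).indices.tolist()
print([tok.decode(t) for t in top])
```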

May 3, 2022  

Patches Are All You Need?

We propose the ConvMixer, an extremely simple model that is similar in spirit to the ViT and the even-more-basic MLP-Mixer in that it operates directly on patches as input, separates the mixing of spatial and channel dimensions, and maintains equal size and resolution throughout the network. ConvMixer outperforms the ViT, MLP-Mixer, and some of their variants for similar parameter counts and data set sizes, in addition to outperforming classical vision models such as the ResNet.

2022: Asher Trockman, J. Z. Kolter

Ranked #80 on Image Classification on CIFAR-10

https://arxiv.org/pdf/2201.09792v1.pdf
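
The whole model fits in a few lines of PyTorch; the sketch below closely follows the reference implementation given in the paper, re-typed here, so treat it as approximate (padding="same" needs a recent PyTorch version).

```python
import torch.nn as nn

class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def ConvMixer(dim, depth, kernel_size=9, patch_size=7, n_classes=1000):
    return nn.Sequential(
        # patch embedding: a strided convolution over the image
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(),
        nn.BatchNorm2d(dim),
        *[nn.Sequential(
            # spatial mixing: depthwise convolution with a residual connection
            Residual(nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(),
                nn.BatchNorm2d(dim))),
            # channel mixing: pointwise (1x1) convolution
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(),
            nn.BatchNorm2d(dim))
          for _ in range(depth)],
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(dim, n_classes))
```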

May 2, 2022  

OneFlow: Redesign the Distributed Deep Learning Framework from Scratch

Aiming at a simple, neat redesign of distributed deep learning frameworks for various parallelism paradigms, we present OneFlow, a novel distributed training framework based on an SBP (split, broadcast and partial-value) abstraction and the actor model. SBP enables much easier programming of data parallelism and model parallelism than existing frameworks, and the actor model provides a succinct runtime mechanism to manage the complex dependencies imposed by resource constraints, data movement and computation in distributed deep learning.

2021: J. Yuan, Xinqi Li, Cheng Cheng, Juncheng Liu, Ran Guo, Shenghang Cai, Chi Yao, Fei Yang, Xiaodong Yi, Chuan Wu, Haoran Zhang, Jie Zhao

https://arxiv.org/pdf/2110.15032v6.pdf
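
Not OneFlow code, just a numpy illustration of what the three SBP signatures mean for a logical matmul placed on two devices: split shards a tensor along an axis, broadcast replicates it, and partial-value means each device holds an addend of the true result.

```python
import numpy as np

rng = np.random.default_rng(0)
X, W = rng.normal(size=(4, 6)), rng.normal(size=(6, 3))
full = X @ W  # the logical result

# Case 1: X is split(0) (row shards), W is broadcast -> result is split(0).
X0, X1 = np.split(X, 2, axis=0)
assert np.allclose(np.concatenate([X0 @ W, X1 @ W], axis=0), full)

# Case 2: X is split(1), W is split(0) -> each device holds a partial-value;
# summing the partial results reconstructs the logical tensor.
Xa, Xb = np.split(X, 2, axis=1)
Wa, Wb = np.split(W, 2, axis=0)
assert np.allclose(Xa @ Wa + Xb @ Wb, full)
```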

April 30, 2022  

Understanding The Robustness in Vision Transformers

Recent studies show that Vision Transformers (ViTs) exhibit strong robustness against various corruptions. Although this property is partly attributed to the self-attention mechanism, there is still a lack of systematic understanding. In this paper, we examine the role of self-attention in learning robust representations. Our study is motivated by the intriguing properties of the emerging visual grouping in Vision Transformers, which indicates that self-attention may promote robustness through improved mid-level representations. We further propose a family of fully attentional networks (FANs) that strengthen this capability by incorporating an attentional channel processing design. We validate the design comprehensively on various hierarchical backbones.

2022: Daquan Zhou, Zhiding Yu, Enze Xie, Chaowei Xiao, Anima Anandkumar, Jiashi Feng, J. Álvarez

Ranked #2 on Domain Generalization on ImageNet-C (using extra training data)

https://arxiv.org/pdf/2204.12451v2.pdf
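
For intuition only, here is a generic "attention over channels" block in PyTorch, where channels attend to one another across all tokens; this is a stand-in for the general idea of attentional channel processing, not the actual FAN block.

```python
import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    """Channels attend to each other: each channel's responses over the
    N tokens form one 'sequence element' of length N. Illustrative only."""
    def __init__(self, n_tokens):
        super().__init__()
        self.qkv = nn.Linear(n_tokens, 3 * n_tokens)
        self.scale = n_tokens ** -0.5

    def forward(self, x):              # x: (B, N, C) token features
        xt = x.transpose(1, 2)         # (B, C, N)
        q, k, v = self.qkv(xt).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale      # (B, C, C)
        out = attn.softmax(dim=-1) @ v                     # (B, C, N)
        return out.transpose(1, 2)     # back to (B, N, C)

# e.g. ChannelSelfAttention(n_tokens=196)(torch.randn(2, 196, 384)).shape
```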

April 29, 2022  

PP-Matting: High-Accuracy Natural Image Matting

Recently, deep learning based approaches have achieved great improvements in image matting. However, most of them require a user supplied trimap as an auxiliary input, which limits the matting applications in the real world. Our method applies a high-resolution detail branch (HRDB) that extracts fine-grained details of the foreground while keeping the feature resolution unchanged.

2022: Guowei Chen, Yi Liu, Jian Wang, Juncai Peng, Yuying Hao, Lutao Chu, Shiyu Tang, Zewu Wu, Zeyu Chen, Zhiliang Yu, Yuning Du, Qingqing Dang, Xiaoguang Hu, Dianhai Yu

https://arxiv.org/pdf/2204.09433v1.pdf
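
A toy stand-in for such a detail branch (not the paper's architecture): a stack of stride-1 convolutions, so the feature map keeps the full input resolution while predicting a one-channel detail map.

```python
import torch.nn as nn

class DetailBranch(nn.Module):
    """Stride-1 convolutions only, so spatial resolution never drops."""
    def __init__(self, in_ch=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):          # x: (B, 3, H, W)
        return self.net(x)         # (B, 1, H, W), same H x W as the input
```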

April 12, 2022  

LinkBERT: Pretraining Language Models with Document Links

Language model (LM) pretraining can learn various knowledge from text corpora, helping downstream tasks. However, existing methods such as BERT model a single document, and do not capture dependencies or knowledge that span across documents. In this work, we propose LinkBERT, an LM pretraining method that leverages links between documents, e.g., hyperlinks.

2022: Michihiro Yasunaga, J. Leskovec, Percy Liang

Ranked #1 on Text Classification on BLURB

https://arxiv.org/pdf/2203.15827v1.pdf
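
As I understand the method, pretraining pairs a text segment with either a contiguous segment, a segment from a hyperlinked document, or a segment from a random document (alongside the usual masked-language objective). A simplified sketch of that pair construction, with hypothetical inputs:

```python
import random

def make_pretraining_pairs(docs, links, seed=0):
    """docs:  {doc_id: text}; links: {doc_id: [linked_doc_id, ...]}.
    Returns (segment_a, segment_b, relation) triples, where relation is
    'contiguous', 'linked', or 'random'."""
    rng = random.Random(seed)
    half = lambda t: (t[: len(t) // 2], t[len(t) // 2:])
    ids = list(docs)
    pairs = []
    for doc_id, text in docs.items():
        a, b = half(text)
        relation = rng.choice(["contiguous", "linked", "random"])
        if relation == "linked" and links.get(doc_id):
            b = half(docs[rng.choice(links[doc_id])])[0]
        elif relation == "random" and len(ids) > 1:
            b = half(docs[rng.choice([i for i in ids if i != doc_id])])[0]
        else:
            relation = "contiguous"
        pairs.append((a, b, relation))
    return pairs
```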

April 10, 2022  

Deep Learning Methods for Improved Decoding of Linear Codes

The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required.

2017: Eliya Nachmani, Elad Marciano, Loren Lugosch, W. Gross, D. Burshtein, Yair Be’ery

Deep learning, Belief propagation, Algorithm, Artificial neural network, Recurrent neural network, Quantization (signal processing), Tanner graph, BCH code, Software propagation, Yet another, Signal-to-noise ratio, Linear code, Network architecture, Parity bit, Block code, End-to-end principle, Iteration, Maxima and minima, Random neural network, Computational complexity theory, Linear programming relaxation

https://arxiv.org/pdf/1706.07043v2.pdf
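
A toy numpy sketch of the weighted ("neural") min-sum idea: one learnable weight per Tanner-graph edge scales the check-to-variable messages, and the same weights are reused at every iteration, which is the parameter tying that turns the unrolled decoder into a recurrent architecture. The (7,4) Hamming code and the exact placement of the weights are simplifying assumptions; the paper trains the weights and evaluates on codes such as BCH codes.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code -- a small stand-in.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def weighted_min_sum(llr, H, weights, n_iters=5):
    """Min-sum BP with one weight per Tanner-graph edge, reused (tied)
    across all iterations. weights = all-ones gives plain min-sum."""
    m, n = H.shape
    edges = [(c, v) for c in range(m) for v in range(n) if H[c, v]]
    w = dict(zip(edges, weights))
    v2c = {(c, v): llr[v] for (c, v) in edges}      # init with channel LLRs
    for _ in range(n_iters):
        c2v = {}
        for (c, v) in edges:                        # check-node update
            others = [v2c[(c, u)] for u in range(n) if H[c, u] and u != v]
            c2v[(c, v)] = w[(c, v)] * np.prod(np.sign(others)) * np.min(np.abs(others))
        for (c, v) in edges:                        # variable-node update
            v2c[(c, v)] = llr[v] + sum(c2v[(d, v)] for d in range(m)
                                       if H[d, v] and d != c)
    total = llr + np.array([sum(c2v[(c, v)] for c in range(m) if H[c, v])
                            for v in range(n)])
    return (total < 0).astype(int)                  # hard decision

# All-zero codeword over a noisy channel; positive LLR means "bit is 0".
llr = 2.0 * (1.0 + 0.5 * np.random.randn(7))
print(weighted_min_sum(llr, H, np.ones(H.sum())))
```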
