Decoder
3527 papers with code • 0 benchmarks • 0 datasets
Libraries
Use these libraries to find Decoder models and implementations.

Most implemented papers
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.
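The paper's core operation is scaled dot-product attention, which replaces recurrence entirely. A minimal sketch of it (the function name and tensor shapes are illustrative, not the paper's reference code):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # (batch, heads, L_q, L_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Toy usage: batch 2, 4 heads, sequence length 5, head dimension 8
q = k = v = torch.randn(2, 4, 5, 8)
out = scaled_dot_product_attention(q, k, v)  # (2, 4, 5, 8)
```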
Neural Machine Translation by Jointly Learning to Align and Translate
Neural machine translation is a recently proposed approach to machine translation.
Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information.
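The building block behind that multi-rate probing is atrous (dilated) separable convolution: a dilated depthwise convolution followed by a 1x1 pointwise convolution. A minimal sketch, with illustrative channel counts (the rates 6/12/18 mirror a common ASPP configuration):

```python
import torch
import torch.nn as nn

class AtrousSeparableConv(nn.Module):
    """Depthwise conv with dilation, then a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, rate):
        super().__init__()
        # padding=rate keeps spatial size; dilation enlarges the field-of-view
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=rate, dilation=rate,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 64, 65, 65)
# Probe the same features at multiple rates, ASPP-style
feats = [AtrousSeparableConv(64, 128, r)(x) for r in (6, 12, 18)]
```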
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
We show that SegNet provides good performance with competitive inference time, and is more memory-efficient at inference than other architectures.
Searching for MobileNetV3
We achieve new state of the art results for mobile classification, detection and segmentation.
Masked Autoencoders Are Scalable Vision Learners
Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
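A minimal sketch of that random masking step, before the encoder sees the patches (the 75% mask ratio follows the paper; the function and shapes are illustrative, and patch embedding and the decoder are omitted):

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Keep a random subset of patch tokens; return kept tokens and indices."""
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)            # one random score per patch
    ids_shuffle = noise.argsort(dim=1)  # random permutation of patch indices
    ids_keep = ids_shuffle[:, :n_keep]
    kept = torch.gather(patches, 1,
                        ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return kept, ids_keep

tokens = torch.randn(2, 196, 768)      # e.g. 14x14 patches of a 224x224 image
visible, ids = random_masking(tokens)  # encoder sees only the visible 25%
```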
Neural Discrete Representation Learning
Learning useful representations without supervision remains a key challenge in machine learning.
High Quality Monocular Depth Estimation via Transfer Learning
Accurate depth estimation from images is a fundamental task in many applications including scene understanding and reconstruction.
Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNNs).
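A minimal sketch of the two-RNN idea: one RNN encodes the source into a fixed-length state, and the other decodes the target from that state (GRU cells here, echoing the gated unit the paper introduced; all sizes and names are illustrative):

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Encoder RNN summarizes the source; decoder RNN generates from its state."""
    def __init__(self, src_vocab, tgt_vocab, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, hidden)
        self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        _, h = self.encoder(self.src_emb(src))        # h: fixed-length summary
        dec_out, _ = self.decoder(self.tgt_emb(tgt), h)
        return self.out(dec_out)                      # next-token logits

model = EncoderDecoder(src_vocab=1000, tgt_vocab=1000)
logits = model(torch.randint(0, 1000, (2, 7)),
               torch.randint(0, 1000, (2, 9)))        # (2, 9, 1000)
```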
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.
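A toy illustration of those two noising operations on tokenized text (the "<mask>" token follows the paper; BART draws span lengths from a Poisson distribution with λ = 3, which this sketch simplifies to a fixed length):

```python
import random

def shuffle_sentences(sentences):
    """Document noising: randomly permute the order of sentences."""
    shuffled = list(sentences)
    random.shuffle(shuffled)
    return shuffled

def infill_span(tokens, span_len=3):
    """Text infilling: replace one span of tokens with a single <mask>."""
    tokens = list(tokens)
    if span_len >= len(tokens):
        return ["<mask>"]
    start = random.randrange(len(tokens) - span_len + 1)
    return tokens[:start] + ["<mask>"] + tokens[start + span_len:]

doc = ["the cat sat", "it was warm", "then it slept"]
noisy_doc = shuffle_sentences(doc)
noisy_tokens = infill_span("the quick brown fox jumps".split())
# The model is trained to reconstruct the original text from the noisy input.
```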