Seq2Seq, or Sequence to Sequence, is a model used in sequence prediction tasks such as language modelling and machine translation. The idea is to use one LSTM, the encoder, to read the input sequence one timestep at a time and obtain a large fixed-dimensional vector representation (a context vector), and then to use another LSTM, the decoder, to extract the output sequence from that vector. The second LSTM is essentially a recurrent neural network language model, except that it is conditioned on the input sequence.
(Note that this page refers to the original Seq2Seq model, not to sequence-to-sequence models in general.)
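As a concrete illustration of the encoder-decoder pattern described above, here is a minimal sketch in PyTorch. It is not the paper's exact architecture: the vocabulary sizes, embedding and hidden dimensions, single LSTM layer, and the use of teacher forcing in the decoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    # Illustrative sizes, not the paper's configuration.
    def __init__(self, src_vocab=8000, tgt_vocab=8000, emb=256, hidden=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Encoder: read the source one timestep at a time; the final
        # (h, c) states act as the fixed-dimensional context vector.
        _, (h, c) = self.encoder(self.src_emb(src))
        # Decoder: a language model over the target, conditioned on the
        # input by starting from the encoder's final states (teacher
        # forcing on tgt during training).
        dec_out, _ = self.decoder(self.tgt_emb(tgt), (h, c))
        return self.out(dec_out)  # logits over the target vocabulary

model = Seq2Seq()
src = torch.randint(0, 8000, (4, 10))  # batch of 4 source sequences
tgt = torch.randint(0, 8000, (4, 12))  # shifted target sequences
logits = model(src, tgt)               # shape: (4, 12, 8000)
```

At inference time the decoder would instead generate one token at a time, feeding each prediction back in as the next input, typically with beam search.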
Source: Sequence to Sequence Learning with Neural Networks
| Task | Papers | Share |
|---|---|---|
| Decoder | 88 | 7.90% |
| Sentence | 71 | 6.37% |
| Machine Translation | 65 | 5.83% |
| Translation | 61 | 5.48% |
| Text Generation | 46 | 4.13% |
| Language Modelling | 45 | 4.04% |
| Semantic Parsing | 40 | 3.59% |
| Question Answering | 25 | 2.24% |
| Abstractive Text Summarization | 21 | 1.89% |