- Also known as seq2seq, Encoder-Decoder, and sequence transformation model.
- 1409.3215 Sequence to Sequence Learning with Neural Networks (2014)
- If we use the self-attention mechanism, is there really much need for the Encoder-Decoder configuration? I'm starting to feel there isn't (2018-10-17). A minimal encoder-decoder sketch follows below.
-
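A minimal sketch of the encoder-decoder (seq2seq) idea, as a toy illustration rather than the 2014 paper's setup: the model names, GRU layers, dimensions, and random toy data below are my own assumptions (the paper used deep LSTMs).

```python
# Toy encoder-decoder sketch in PyTorch. The encoder compresses the source
# sequence into a fixed state; the decoder generates the target from it.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the whole source sequence into a fixed-size state...
        _, state = self.encoder(self.src_emb(src))
        # ...and use it to initialize the decoder over the target sequence.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits


model = Seq2Seq(src_vocab=100, tgt_vocab=100)
src = torch.randint(0, 100, (2, 7))   # toy batch of source token ids
tgt = torch.randint(0, 100, (2, 5))   # teacher-forced target token ids
print(model(src, tgt).shape)          # torch.Size([2, 5, 100])
```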
- 1506.03134 Pointer Networks (2015)
- A pointer over input positions can be used to copy tokens directly from the input (see the sketch after this list)
- CopyNet
- Pointer Sentinel Mixture Models
- Pointer-Generator Network (2017)
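A minimal sketch of the pointer / copy idea shared by these models: attention weights over the source positions are reused as a distribution for copying source tokens, and (in the Pointer-Generator style) mixed with an ordinary generation distribution via a soft switch. The random tensors and the value of `p_gen` below are toy assumptions, not any paper's exact model.

```python
# Toy pointer / copy-mechanism sketch. Attention over source positions is
# turned into a distribution over the vocabulary by adding each position's
# weight to the id of the token at that position.
import torch

vocab_size = 10
src_ids = torch.tensor([3, 7, 7, 2])          # source token ids (length 4)
attn = torch.softmax(torch.randn(4), dim=0)   # decoder attention over source positions

# "Pointing": scatter the attention weights onto the vocabulary ids of the input.
copy_dist = torch.zeros(vocab_size).scatter_add(0, src_ids, attn)

# Pointer-Generator style: mix copying with ordinary generation via p_gen.
gen_dist = torch.softmax(torch.randn(vocab_size), dim=0)
p_gen = 0.6                                   # assumed value of the soft switch
final_dist = p_gen * gen_dist + (1 - p_gen) * copy_dist

print(final_dist.sum())  # still sums to 1: a valid output distribution
```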
This page is auto-translated from /nishio/Sequence-to-Sequenceモデル using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.