A Bichannel Transformer with Context Encoding for Document-Driven Conversation Generation in Social Media
With the development of social media on the internet, dialogue systems are becoming increasingly intelligent to meet users' needs for communication, emotion, and social interaction. Previous studies usually use sequence-to-sequence learning with recurrent neural networks for response generation...
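As background to the abstract, the snippet below is a minimal PyTorch sketch of the general idea suggested by the title: encoding the dialogue context and the grounding document with two separate Transformer encoder channels, then decoding a response that attends to both. It is not the authors' architecture; the class name `TwoChannelGroundedGenerator`, the fusion by concatenating the two memories, the omission of positional encodings, and all hyperparameters are assumptions made for illustration.

```python
# Illustrative sketch only (not the model from the article): two encoder
# channels (dialogue context + document) feeding one Transformer decoder.
import torch
import torch.nn as nn


class TwoChannelGroundedGenerator(nn.Module):
    def __init__(self, vocab_size=8000, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # One encoder channel for the dialogue context, one for the document
        # (TransformerEncoder deep-copies the layer, so parameters are separate).
        self.context_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.document_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)
        # Positional encodings are omitted here for brevity.

    def forward(self, context_ids, document_ids, response_ids):
        ctx = self.context_encoder(self.embed(context_ids))
        doc = self.document_encoder(self.embed(document_ids))
        # Fuse the two channels by concatenating their token-level memories,
        # so the decoder can attend to both sources (a simplifying assumption).
        memory = torch.cat([ctx, doc], dim=1)
        tgt = self.embed(response_ids)
        # Causal mask so each response position only sees earlier positions.
        t = tgt.size(1)
        tgt_mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=tgt_mask)
        return self.out(hidden)  # per-token vocabulary logits


# Toy usage with random token ids (batch of 2).
model = TwoChannelGroundedGenerator()
logits = model(torch.randint(0, 8000, (2, 12)),   # dialogue context
               torch.randint(0, 8000, (2, 30)),   # grounding document
               torch.randint(0, 8000, (2, 8)))    # partial response
print(logits.shape)  # -> torch.Size([2, 8, 8000])
```

Concatenating the two memories is just one simple way to let the decoder see both channels; the article itself should be consulted for how its bichannel encoding and context fusion actually work.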
Main Authors: Yuanyuan Cai, Min Zuo, Qingchuan Zhang, Haitao Xiong, Ke Li
Format: Article
Language: English
Published: Wiley, 2020-01-01
Series: Complexity
Online Access: http://dx.doi.org/10.1155/2020/3710104
Similar Items
- Any-to-any voice conversion using representation separation auto-encoder
  by: Zhihua JIAN, et al. Published: (2024-02-01)
- Pseudo-random number generation with β-encoders
  by: Charlene Kalle, et al. Published: (2024-12-01)
- Efficient Structured Prediction with Transformer Encoders
  by: Ali Basirat. Published: (2024-12-01)
- Chinese NER based on improved Transformer encoder
  by: Honghao ZHENG, et al. Published: (2021-10-01)
- Transform coding of DCT coefficients in video encoder
  by: WANG Zhong-yuan, et al. Published: (2008-01-01)