An Experimental Study on Improved Sequence-to-Sequence Model in Machine Translation

Bibliographic Details
Main Authors: Yuan-shuai Lan, Chuan Li, Xueqin Meng, Tao Zheng, Mincong Tang
Format: Article
Language: English
Published: Faculty of Mechanical Engineering in Slavonski Brod, Faculty of Electrical Engineering in Osijek, Faculty of Civil Engineering in Osijek 2025-01-01
Series: Tehnički Vjesnik
Online Access: https://hrcak.srce.hr/file/478020
Description
Summary: This paper presents the N-Seq2Seq model for enhancing machine translation quality and efficiency. Its core innovations include streamlined attention mechanisms that focus on crucial details, word-level tokenization to preserve meaning, text candidate frames to accelerate prediction, and relative positional encoding to reinforce word associations. Comparative analyses on English-Chinese datasets show improvements of approximately 4 BLEU over the baseline Seq2Seq model and 2 BLEU over Transformer models. Moreover, the N-Seq2Seq model reduces average inference time by 60% and 43%, respectively, relative to those baselines. These techniques improve contextual modeling, filter out non-essential information, and accelerate inference. Importantly, the model achieves higher accuracy with low overhead, making it feasible to deploy in mobile applications, and although the design is Chinese-centric, it can be quickly adapted to other languages.
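The BLEU gains cited in the summary are measured with the standard n-gram overlap metric. As a hedged illustration only (not the authors' evaluation code, which the record does not include), a minimal single-reference sentence-level BLEU can be sketched as:

```python
from collections import Counter
import math

def bleu(candidate, reference, max_n=4):
    """Minimal single-reference BLEU sketch (illustrative, not the paper's setup)."""
    precisions = []
    for n in range(1, max_n + 1):
        # Count n-grams in candidate and reference
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        # Clipped overlap: each candidate n-gram counts at most as often as in the reference
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty discourages translations shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    # Geometric mean of the modified n-gram precisions
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

# A perfect match scores 1.0 (i.e. 100 BLEU points)
print(bleu("the cat sat on the mat".split(), "the cat sat on the mat".split()))
```

Published comparisons typically use a corpus-level, multi-reference variant with smoothing (e.g. sacrebleu), so reported scores would differ from this toy version.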
ISSN: 1330-3651
1848-6339