Improving the Performance of Answers Ranking in Q&A Communities: A Long-Text Matching Technique and Pre-Trained Model

This paper introduces TR-BERT, a novel method for answer ranking in Question & Answer (Q&A) communities, designed to tackle two widespread challenges: the prominence of irrelevant popular answers and the neglect of new questions. TR-BERT integrates a long-text matching technique with a pre-trained language model; the resulting ranking method filters noise from, and extracts textual features of, questions and answers in Q&A communities. Experimental results on a Zhihu Q&A community dataset and the SemEval-2017 dataset demonstrate the effectiveness and superiority of TR-BERT.


Bibliographic Details
Main Authors: Siyu Sun, Yiming Wang, Jiale Cheng, Zhiying Xiao, Daqing Zheng, Xiaoling Hao
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10813354/
_version_ 1841550850977169408
author Siyu Sun
Yiming Wang
Jiale Cheng
Zhiying Xiao
Daqing Zheng
Xiaoling Hao
author_facet Siyu Sun
Yiming Wang
Jiale Cheng
Zhiying Xiao
Daqing Zheng
Xiaoling Hao
author_sort Siyu Sun
collection DOAJ
description This paper introduces TR-BERT, a novel method for answer ranking in Question & Answer (Q&A) communities, designed to tackle two widespread challenges: the prominence of irrelevant popular answers and the neglect of new questions. TR-BERT integrates a long-text matching technique with a pre-trained language model; the resulting ranking method filters noise from, and extracts textual features of, questions and answers in Q&A communities. Experimental results on a Zhihu Q&A community dataset and the SemEval-2017 dataset demonstrate the effectiveness and superiority of TR-BERT. The contributions are twofold: a new framework that processes long-text data by filtering noise, and the TR-BERT model, which improves answer ranking in Q&A communities. The experiments also show that TR-BERT is faster and requires fewer computational resources than comparable models, which makes it valuable for practical applications. TR-BERT also offers a broader insight: by removing noise from the input text to shorten the input sequence, we can reduce the time and computational resources required for model training and inference, opening the way to smaller models, faster speeds, lower computational demands, and improved efficiency.
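The noise-filtering insight in the description above can be illustrated with a minimal sketch. This is not the paper's TR-BERT implementation: the function name `filter_noise`, the TF-IDF relevance heuristic, and the example texts are all illustrative assumptions, standing in for whatever filtering component the authors actually use.

```python
# Illustrative sketch only (not TR-BERT itself): drop answer sentences
# with low lexical relevance to the question, so a downstream
# pre-trained ranker sees a shorter, denoised input sequence.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_noise(question: str, answer: str, keep: int = 2) -> str:
    """Keep the `keep` sentences of `answer` most similar to `question`."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    if len(sentences) <= keep:
        return answer
    vectorizer = TfidfVectorizer().fit([question] + sentences)
    scores = cosine_similarity(
        vectorizer.transform([question]),
        vectorizer.transform(sentences),
    )[0]
    # Select the top-scoring sentences but keep their original order.
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:keep])
    return ". ".join(sentences[i] for i in top) + "."

question = "How can I speed up BERT inference and reduce latency?"
answer = (
    "Thanks for the interesting question. "
    "BERT inference can be accelerated by distillation. "
    "My cat enjoys long walks in the park. "
    "Quantization also reduces latency on commodity hardware. "
    "Upvote if this helped."
)
print(filter_noise(question, answer))  # keeps only the two relevant sentences
```

A shorter input sequence reduces a transformer's quadratic attention cost, which is the mechanism behind the faster-speed and lower-resource claims in the description.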
format Article
id doaj-art-716e846e0c53417c98eb6defbf60a93e
institution Kabale University
issn 2169-3536
language English
publishDate 2025-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj-art-716e846e0c53417c98eb6defbf60a93e2025-01-10T00:00:50ZengIEEEIEEE Access2169-35362025-01-01134188420010.1109/ACCESS.2024.352199910813354Improving the Performance of Answers Ranking in Q&A Communities: A Long-Text Matching Technique and Pre-Trained ModelSiyu Sun0Yiming Wang1https://orcid.org/0000-0002-6385-0198Jiale Cheng2Zhiying Xiao3https://orcid.org/0009-0001-9278-0563Daqing Zheng4https://orcid.org/0000-0002-4202-8074Xiaoling Hao5Shanghai University of Finance and Economics Zhejiang College, Jinhua, Zhejiang, ChinaSchool of Information Management Engineering, Shanghai University of Finance and Economics, Shanghai, ChinaSchool of Information Management Engineering, Shanghai University of Finance and Economics, Shanghai, ChinaShanghai University of Finance and Economics Zhejiang College, Jinhua, Zhejiang, ChinaSchool of Information Management Engineering, Shanghai University of Finance and Economics, Shanghai, ChinaSchool of Information Management Engineering, Shanghai University of Finance and Economics, Shanghai, ChinaThis paper introduces TR-BERT, a novel method to answer ranking in Question & Answer (Q&A) communities, designed to tackle the widespread challenges of irrelevant popular answers and the neglect of new questions. TR-BERT integrates a long-text matching technique with a pre-trained language model. This ranking method effectively filters the noise and extracts textual features of questions and answers in QA communities. The experimental results on the Zhihu Q&A community dataset and the SemEval-2017 dataset showed the effectiveness and superiority of the TR-BERT. The contributions are as follows: Designing a new framework to process long-text data by filtering the noise and developing the TR-BERT to optimize the issue of answer ranking in the Q&A community. 
The experiment also showed that the TR-BERT model has the advantages of faster speed and requires less computational resources, which makes the TR-BERT valuable for practical applications. Meanwhile, TR-BERT offers an insight: By removing noise from the input text to shorten the length of the input sequence, we can decrease the time and computational resources required for model training and computation. This leads to the potential for smaller models, faster speeds, reduced computational resource demands, and improved efficiency.https://ieeexplore.ieee.org/document/10813354/Q&A communityanswer rankingTR-BERTdeep neural network
spellingShingle Siyu Sun
Yiming Wang
Jiale Cheng
Zhiying Xiao
Daqing Zheng
Xiaoling Hao
Improving the Performance of Answers Ranking in Q&A Communities: A Long-Text Matching Technique and Pre-Trained Model
IEEE Access
Q&A community
answer ranking
TR-BERT
deep neural network
title Improving the Performance of Answers Ranking in Q&A Communities: A Long-Text Matching Technique and Pre-Trained Model
title_full Improving the Performance of Answers Ranking in Q&A Communities: A Long-Text Matching Technique and Pre-Trained Model
title_fullStr Improving the Performance of Answers Ranking in Q&A Communities: A Long-Text Matching Technique and Pre-Trained Model
title_full_unstemmed Improving the Performance of Answers Ranking in Q&A Communities: A Long-Text Matching Technique and Pre-Trained Model
title_short Improving the Performance of Answers Ranking in Q&A Communities: A Long-Text Matching Technique and Pre-Trained Model
title_sort improving the performance of answers ranking in q x0026 a communities a long text matching technique and pre trained model
topic Q&A community
answer ranking
TR-BERT
deep neural network
url https://ieeexplore.ieee.org/document/10813354/
work_keys_str_mv AT siyusun improvingtheperformanceofanswersrankinginqx0026acommunitiesalongtextmatchingtechniqueandpretrainedmodel
AT yimingwang improvingtheperformanceofanswersrankinginqx0026acommunitiesalongtextmatchingtechniqueandpretrainedmodel
AT jialecheng improvingtheperformanceofanswersrankinginqx0026acommunitiesalongtextmatchingtechniqueandpretrainedmodel
AT zhiyingxiao improvingtheperformanceofanswersrankinginqx0026acommunitiesalongtextmatchingtechniqueandpretrainedmodel
AT daqingzheng improvingtheperformanceofanswersrankinginqx0026acommunitiesalongtextmatchingtechniqueandpretrainedmodel
AT xiaolinghao improvingtheperformanceofanswersrankinginqx0026acommunitiesalongtextmatchingtechniqueandpretrainedmodel