LNLF-BERT: Transformer for Long Document Classification With Multiple Attention Levels
Transformer-based models, such as Bidirectional Encoder Representations from Transformers (BERT), cannot process long sequences because their self-attention operation scales quadratically with the sequence length. To remedy this, we introduce the Look Near and Look Far BERT (LNLF-BERT) with a two-le...
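
The quadratic-cost limitation mentioned in the abstract comes from the full n × n score matrix of standard self-attention. The sketch below is a minimal, generic scaled dot-product attention in PyTorch, included only to make that n² term concrete; it is not the paper's LNLF-BERT two-level attention, and all sizes and variable names are illustrative assumptions.

```python
# A minimal sketch, NOT code from the paper: generic scaled dot-product
# self-attention, shown only to make the quadratic-cost claim concrete.
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """x: (n, d) token embeddings; w_q, w_k, w_v: (d, d) projection weights."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # The score matrix is (n, n): every token attends to every other token,
    # so compute and memory grow as O(n^2) in the sequence length n.
    scores = (q @ k.T) / math.sqrt(q.shape[-1])
    return torch.softmax(scores, dim=-1) @ v

# Illustrative sizes only: a "long document" of 4096 tokens with 64-dim heads
# already requires a 4096 x 4096 score matrix per attention head.
n, d = 4096, 64
x = torch.randn(n, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
```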
Saved in:
| Main Authors: | Linh Manh Pham, Hoang Cao The |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10744540/ |
Similar Items
- Classification Performance Comparison of BERT and IndoBERT on Self-Report of COVID-19 Status on Social Media
  by: Irwan Budiman, et al.
  Published: (2024-03-01)
- Improving Text Recognition Accuracy for Serbian Legal Documents Using BERT
  by: Miloš Bogdanović, et al.
  Published: (2025-01-01)
- Enhancing Abstractive Multi-Document Summarization with Bert2Bert Model for Indonesian Language
  by: Aldi Fahluzi Muharam, et al.
  Published: (2025-01-01)
- A BERT-Based Classification Model: The Case of Russian Fairy Tales
  by: Валерий Дмитриевич Соловьев, et al.
  Published: (2024-12-01)
- Exploring Named Entity Recognition via MacBERT-BiGRU and Global Pointer with Self-Attention
  by: Chengzhe Yuan, et al.
  Published: (2024-12-01)