Task-Specific Transformer-Based Language Models in Health Care: Scoping Review
BackgroundTransformer-based language models have shown great potential to revolutionize health care by advancing clinical decision support, patient interaction, and disease prediction. However, despite their rapid development, the implementation of transformer-based language...
| Main Authors: | Ha Na Cho, Tae Joon Jun, Young-Hak Kim, Heejun Kang, Imjin Ahn, Hansle Gwon, Yunha Kim, Jiahn Seo, Heejung Choi, Minkyoung Kim, Jiye Han, Gaeun Kee, Seohyun Park, Soyoung Ko |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | JMIR Publications, 2024-11-01 |
| Series: | JMIR Medical Informatics |
| Online Access: | https://medinform.jmir.org/2024/1/e49724 |
| _version_ | 1846163954252906496 |
|---|---|
| author | Ha Na Cho, Tae Joon Jun, Young-Hak Kim, Heejun Kang, Imjin Ahn, Hansle Gwon, Yunha Kim, Jiahn Seo, Heejung Choi, Minkyoung Kim, Jiye Han, Gaeun Kee, Seohyun Park, Soyoung Ko |
| author_facet | Ha Na Cho, Tae Joon Jun, Young-Hak Kim, Heejun Kang, Imjin Ahn, Hansle Gwon, Yunha Kim, Jiahn Seo, Heejung Choi, Minkyoung Kim, Jiye Han, Gaeun Kee, Seohyun Park, Soyoung Ko |
| author_sort | Ha Na Cho |
| collection | DOAJ |
| description |
Background: Transformer-based language models have shown great potential to revolutionize health care by advancing clinical decision support, patient interaction, and disease prediction. Despite their rapid development, however, their implementation in health care settings remains limited, partly because no comprehensive review has consolidated their applications and limitations. Without clear guidelines and consolidated information, both researchers and physicians struggle to use these models effectively, resulting in inefficient research efforts and slow integration into clinical workflows.
Objective: This scoping review addresses this gap by examining studies on medical transformer-based language models and categorizing them into 6 tasks: dialogue generation, question answering, summarization, text classification, sentiment analysis, and named entity recognition.
Methods: We conducted a scoping review following the Cochrane scoping review protocol. A comprehensive literature search was performed across databases, including Google Scholar and PubMed, covering publications from January 2017 to September 2024. Studies involving transformer-derived models in medical tasks were included, and the extracted data were categorized into the 6 key tasks.
Results: Our findings revealed both advances and critical challenges in applying transformer-based models to health care tasks. For example, dialogue-generation models such as MedPIR show promise but face privacy and ethical concerns; question-answering models such as BioBERT improve accuracy but struggle with the complexity of medical terminology; and the BioBERTSum summarization model helps clinicians condense medical texts but needs better handling of long sequences.
Conclusions: This review provides a consolidated understanding of the role of transformer-based language models in health care and guidance for future research directions. Addressing the identified challenges and implementing the proposed solutions could enable these models to meaningfully improve health care delivery and patient outcomes, laying the groundwork for transformative advances in medical informatics. |
| format | Article |
| id | doaj-art-02a024cf23a74934b5889575b88b43a7 |
| institution | Kabale University |
| issn | 2291-9694 |
| language | English |
| publishDate | 2024-11-01 |
| publisher | JMIR Publications |
| record_format | Article |
| series | JMIR Medical Informatics |
| spelling | doaj-art-02a024cf23a74934b5889575b88b43a7; 2024-11-18T17:30:42Z; eng; JMIR Publications; JMIR Medical Informatics; 2291-9694; 2024-11-01; vol 12; e49724; doi:10.2196/49724; Task-Specific Transformer-Based Language Models in Health Care: Scoping Review; Ha Na Cho (https://orcid.org/0000-0001-8033-6644), Tae Joon Jun (https://orcid.org/0000-0002-6808-5149), Young-Hak Kim (https://orcid.org/0000-0002-3610-486X), Heejun Kang (https://orcid.org/0000-0002-0396-2112), Imjin Ahn (https://orcid.org/0000-0003-3929-6390), Hansle Gwon (https://orcid.org/0000-0001-6019-4466), Yunha Kim (https://orcid.org/0000-0001-6713-1900), Jiahn Seo (https://orcid.org/0000-0002-3589-1347), Heejung Choi (https://orcid.org/0000-0003-2265-1819), Minkyoung Kim (https://orcid.org/0000-0003-4923-5321), Jiye Han (https://orcid.org/0000-0002-1366-8275), Gaeun Kee (https://orcid.org/0000-0002-2377-3503), Seohyun Park (https://orcid.org/0000-0003-2658-8757), Soyoung Ko (https://orcid.org/0009-0003-6268-1534); https://medinform.jmir.org/2024/1/e49724 |
| spellingShingle | Ha Na Cho; Tae Joon Jun; Young-Hak Kim; Heejun Kang; Imjin Ahn; Hansle Gwon; Yunha Kim; Jiahn Seo; Heejung Choi; Minkyoung Kim; Jiye Han; Gaeun Kee; Seohyun Park; Soyoung Ko; Task-Specific Transformer-Based Language Models in Health Care: Scoping Review; JMIR Medical Informatics |
| title | Task-Specific Transformer-Based Language Models in Health Care: Scoping Review |
| title_full | Task-Specific Transformer-Based Language Models in Health Care: Scoping Review |
| title_fullStr | Task-Specific Transformer-Based Language Models in Health Care: Scoping Review |
| title_full_unstemmed | Task-Specific Transformer-Based Language Models in Health Care: Scoping Review |
| title_short | Task-Specific Transformer-Based Language Models in Health Care: Scoping Review |
| title_sort | task specific transformer based language models in health care scoping review |
| url | https://medinform.jmir.org/2024/1/e49724 |
| work_keys_str_mv | AT hanacho taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT taejoonjun taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT younghakkim taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT heejunkang taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT imjinahn taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT hanslegwon taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT yunhakim taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT jiahnseo taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT heejungchoi taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT minkyoungkim taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT jiyehan taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT gaeunkee taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT seohyunpark taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview AT soyoungko taskspecifictransformerbasedlanguagemodelsinhealthcarescopingreview |
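The models named throughout this record (BioBERT, MedPIR, BioBERTSum, and other transformer derivatives) all build on the same core operation, scaled dot-product self-attention. As an illustrative aside only (not code from any reviewed study), a minimal, dependency-free Python sketch of that operation on toy matrices:

```python
# Scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
# Pure-Python sketch with toy 2x2 matrices; real models use large learned
# projections and many attention heads in parallel.
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def matmul(a, b):
    # (n x k) @ (k x m) -> (n x m), lists of lists.
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def attention(q, k, v):
    """Each output row is a weighted mix of the value rows, with weights
    given by the softmaxed, scaled query-key similarity scores."""
    d_k = len(k[0])
    scores = matmul(q, transpose(k))                       # QK^T
    scaled = [[s / math.sqrt(d_k) for s in row] for row in scores]
    weights = [softmax(row) for row in scaled]             # rows sum to 1
    return matmul(weights, v)

# Toy example: 2 tokens, embedding dimension d_k = 2.
q = [[1.0, 0.0], [0.0, 1.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
out = attention(q, k, v)
```

In the actual models, Q, K, and V are learned linear projections of token embeddings, and domain-specific variants like BioBERT differ mainly in their pretraining corpora and fine-tuning tasks, not in this attention mechanism.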