Showing 61 - 80 results of 2,540 for search 'berht~', query time: 3.69s
  2. 62

    Sentiment Analysis of the 2024 Presidential Election Candidates Using the BERT Language Model by LUIS RICARDO PANDIANGAN, IGN LANANG WIJAYAKUSUMA

    Published 2024-11-01
    “…This research also generated keywords for each sentiment towards candidates from BERT’s validation data predictions. This study contributes insights into the narratives and sentiments surrounding the 2024 presidential election.…”
    Get full text
    Article
  3. 63

    Research on entity recognition and alignment of APT attack based on Bert and BiLSTM-CRF by Xiuzhang YANG, Guojun PENG, Zichuan LI, Yangqi LYU, Side LIU, Chenguang LI

    Published 2022-06-01
    “…Moreover, the proposed model has the best prediction effect on the "attack method" entity category, whose F1-score is 0.9275. …”
    Get full text
    Article
  4. 64

    ABioNER: A BERT-Based Model for Arabic Biomedical Named-Entity Recognition by Nada Boudjellal, Huaping Zhang, Asif Khan, Arshad Ahmad, Rashid Naseem, Jianyun Shang, Lin Dai

    Published 2021-01-01
    “…The model performance was compared with two state-of-the-art models (namely, AraBERT and multilingual BERT cased), and it outperformed both models with 85% F1-score.…”
    Get full text
    Article
  8. 68

    The Relationship Between Maternal Serum LDL and HDL Levels in Term Pregnancy and Infant Birth Weight by Oktalia Sabrida, Hariadi, Eny Yantri

    Published 2014-09-01
    “…The mean serum HDL level in term pregnant women was 53.32±17.39 mg/dl, with 13 samples (41.90%) having an HDL level <48 mg/dl. The mean infant birth weight was 3150.00±489.89 grams, with 2 samples (6.50%) having an infant weighing <2500 grams. …”
    Get full text
    Article
  9. 69

    The Effect of Grit on Self-Determination in Athletes Who Decide to Return After Severe Injury by Azzah Ambarani Hidayat, Afif Kurniawan

    Published 2021-08-01
    “…The participants in this study were athletes who decided to return after a severe injury. The method used in this study was explanatory quantitative research. …”
    Get full text
    Article
  13. 73

    Faith in the modern Reformed church: Calvin and Barth

    Published 2022-12-01
    “… Calvin and Barth are arguably the main exponents of two notable soteriological camps in the Reformed world nowadays and their soteriology has wide and sometimes unarticulated impacts on Reformed doctrine and praxis. …”
    Get full text
    Article
  17. 77

    Tackling misinformation in mobile social networks: a BERT-LSTM approach for enhancing digital literacy by Jun Wang, Xiulai Wang, Airong Yu

    Published 2025-01-01
    “…Extensive evaluations revealed that the BERT-LSTM model achieved an accuracy of 93.51%, a recall of 91.96%, and an F1 score of 92.73% in identifying misinformation. …”
    Get full text
    Article
  18. 78

    Advancing Computational Humor: LLaMa-3 Based Generation with DistilBert Evaluation Framework by He Jinliang, Mei Aohan

    Published 2025-01-01
    “…The study developed a comprehensive dataset sourced from diverse online platforms, supplemented by non-humorous content from scientific literature and press conferences to enhance the model's discriminative capabilities. Utilizing DistilBERT for efficient evaluation, the fine-tuned LLaMA-3 achieved an impressive accuracy of 95.6% and an F1-score of 97.75%, surpassing larger models such as GPT-4o and Gemini. …”
    Get full text
    Article
  19. 79

    A BERT-BiGRU-CRF Model for Entity Recognition of Chinese Electronic Medical Records by Qiuli Qin, Shuang Zhao, Chunmei Liu

    Published 2021-01-01
    “…The BERT layer first converts the electronic medical record text into a low-dimensional vector, then uses this vector as the input to the BiGRU layer to capture contextual features, and finally uses conditional random fields (CRFs) to capture the dependency between adjacent tags. …”
    Get full text
    Article
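    The abstract above outlines a common sequence-tagging pipeline: BERT produces contextual token vectors, a bidirectional GRU re-encodes them, and a CRF layer scores dependencies between adjacent tags. Purely as an illustrative sketch of that pipeline (not the authors' implementation), the following assumes the Hugging Face "transformers" and "pytorch-crf" packages; the class name, checkpoint, and hidden size are placeholder choices.

    # Minimal sketch of a BERT -> BiGRU -> CRF tagger (assumed dependencies:
    # transformers, pytorch-crf). Not the paper's code; names are illustrative.
    import torch.nn as nn
    from transformers import BertModel
    from torchcrf import CRF

    class BertBiGRUCRF(nn.Module):
        def __init__(self, num_tags, hidden_size=256, bert_name="bert-base-chinese"):
            super().__init__()
            self.bert = BertModel.from_pretrained(bert_name)           # contextual token vectors
            self.bigru = nn.GRU(self.bert.config.hidden_size, hidden_size,
                                batch_first=True, bidirectional=True)  # re-encode context
            self.classifier = nn.Linear(2 * hidden_size, num_tags)     # per-token emission scores
            self.crf = CRF(num_tags, batch_first=True)                 # adjacent-tag dependencies

        def forward(self, input_ids, attention_mask, tags=None):
            embeddings = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
            features, _ = self.bigru(embeddings)
            emissions = self.classifier(features)
            mask = attention_mask.bool()
            if tags is not None:
                # Training: negative log-likelihood of the gold tag sequence under the CRF.
                return -self.crf(emissions, tags, mask=mask, reduction="mean")
            # Inference: Viterbi decoding returns the most likely tag sequence per sentence.
            return self.crf.decode(emissions, mask=mask)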
  20. 80

    Comparative Analysis of BERT and XLNet Models for Classifying Bullying Tweets on Twitter by Teuku Radillah, Okta Veza, Sarjon Defit

    Published 2024-12-01
    “…This study aims to compare the performance of two recent natural language processing models, BERT (Bidirectional Encoder Representations from Transformers) and XLNet, in classifying tweets that contain bullying. …”
    Get full text
    Article