Showing 45,161 - 45,180 results of 54,500 for search '((((studenter* OR students*) OR studentser*) OR studentser*) OR student*)', query time: 4.93s
  1. 45161

    ZZ-YOLOv11: A Lightweight Vehicle Detection Model Based on Improved YOLOv11 by Zhe Zhang, Zhongyang Zhang, Gang Li, Chenxi Xia

    Published 2025-05-01
“…Finally, to ensure that the detection accuracy of the pruned model does not drop too far, a model-distillation step was used, in which YOLOv11x + LDCD served as the teacher model and the pruned model was distilled as the student model. Experimental data on the optimized KITTI and BDD100K datasets show that the ZZ-YOLO algorithm reaches a detection accuracy of 70.9% and an mAP (mean Average Precision)@0.5 of 58%, at a computational cost of 14.1 GFLOPs; compared with the original algorithm, detection accuracy increases by 5.7% and average precision by 2.3%. …”
    Get full text
    Article
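The teacher-student setup described in this abstract follows the standard knowledge-distillation pattern of matching softened teacher outputs. The sketch below is an illustrative Hinton-style distillation loss in plain Python, not the paper's implementation; the function names, temperature, and weighting are assumptions:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature that softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.5):
    """Combine a hard cross-entropy term on the true label with a
    soft KL term that pushes the student toward the teacher."""
    # Hard loss: cross-entropy against the ground-truth class.
    student_probs = softmax(student_logits)
    hard = -math.log(student_probs[true_label])
    # Soft loss: KL(teacher || student) at high temperature,
    # scaled by T^2 to keep gradient magnitudes comparable.
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    soft = sum(ti * math.log(ti / si) for ti, si in zip(t, s))
    return alpha * hard + (1 - alpha) * temperature ** 2 * soft

loss = distillation_loss([2.0, 0.5, 0.1], [3.0, 1.0, 0.2], true_label=0)
```

When the student's logits match the teacher's exactly, the soft term vanishes and only the hard cross-entropy remains.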
  4. 45164

Validation of the 20-meter shuttle-run aerobic test for male schoolchildren aged 13 and 14 years [Validação do teste aeróbio de corrida de vai-e-vem de 20 metros para escolares do gênero masculino de 13 e 14 anos] by Sandro Márcio Ströher, Edilson Hobold, Lírio Levandoski Junior, Jardel Schlickmann

    Published 2009-03-01
“…Associations between the cardiorespiratory fitness variables were assessed using Pearson's linear correlation (r), and means were compared with Student's t-test for independent samples. The results suggest a significant high correlation (r=0.78; p<0.01) for the sample, with high, significant correlations at ages 13 (r=0.72; p<0.05) and 14 (r=0.83; p<0.05). …”
    Get full text
    Article
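The two statistics named in this abstract, Pearson's linear correlation and Student's t-test for independent samples, can be computed directly. The sketch below uses the pooled-variance (equal-variances) form of the t statistic and is illustrative only, not the authors' analysis code:

```python
import math

def pearson_r(x, y):
    """Pearson linear correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def t_independent(a, b):
    """Student's t statistic for two independent samples
    (pooled-variance form, equal variances assumed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
```

Perfectly linearly related samples give r = 1, and two samples with equal means give t = 0.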
  5. 45165

    Larger models yield better results? Streamlined severity classification of ADHD-related concerns using BERT-based knowledge distillation. by Ahmed Akib Jawad Karim, Kazi Hafiz Md Asad, Md Golam Rabiul Alam

    Published 2025-01-01
“…Referring to LastBERT, a customized student BERT model, we significantly lowered the parameter count from the 110 million of BERT base to 29 million, resulting in a model approximately 73.64% smaller. …”
    Get full text
    Article
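As a quick arithmetic check, the 73.64% figure quoted in this abstract follows directly from the stated parameter counts:

```python
# Parameter counts as stated in the abstract above.
teacher_params = 110e6   # BERT base
student_params = 29e6    # LastBERT

# Relative size reduction of the student versus the teacher.
reduction = 1 - student_params / teacher_params
print(f"{reduction:.2%}")  # → 73.64%
```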
  6. 45166

    M3AE-Distill: An Efficient Distilled Model for Medical Vision–Language Downstream Tasks by Xudong Liang, Jiang Xie, Mengfei Zhang, Zhuo Bi

    Published 2025-07-01
    “…During pre-training, two key strategies are developed: (1) both hidden state and attention map distillation are employed to guide the student model, and (2) an attention-guided masking strategy is designed to enhance fine-grained image–text alignment. …”
    Get full text
    Article
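The hidden-state and attention-map distillation mentioned in this abstract is commonly implemented as per-layer MSE terms between student and teacher features (as in TinyBERT-style approaches). The sketch below is a hedged illustration of that pattern, not the M3AE-Distill code; the loss choice and weights are assumptions:

```python
def mse(a, b):
    """Mean squared error between two flattened feature vectors."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def feature_distill_loss(student_hidden, teacher_hidden,
                         student_attn, teacher_attn,
                         w_hidden=1.0, w_attn=1.0):
    """Weighted sum of a hidden-state MSE term and an
    attention-map MSE term guiding the student model."""
    return (w_hidden * mse(student_hidden, teacher_hidden)
            + w_attn * mse(student_attn, teacher_attn))
```

Each term is zero when the student's features already match the teacher's, so the loss only penalizes mismatches.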
  12. 45172

    Exploring temporal variations in anxiety in multilingual English teachers: an idiodynamic approach by Ramazan Yetkin, Zekiye Özer-Altınkaya

    Published 2025-04-01
    “…Factors such as problems with instruction, student distractions, and classroom management contributed to anxiety fluctuations, whereas experience, emotion regulation strategies, and active student participation helped stabilize anxiety. …”
    Get full text
    Article
  20. 45180

    The Role of Indian English Drama in Learning Second Language Teaching (English) by Jamirul Islam

    Published 2021-06-01
“… The aim of this study is to examine the role of drama performance at university, especially in teaching literature and non-literature students of the B.A. course at Maulana Azad National Urdu University, Hyderabad. …”
    Get full text
    Article