Optimizing mRNA Vaccine Degradation Prediction via Penalized Dropout Approaches

Bibliographic Details
Main Authors: Hwai Ing Soon, Azian Azamimi Abdullah, Hiromitsu Nishizaki, Latifah Munirah Kamarudin
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11107401/
Description
Summary: Predicting mRNA vaccine degradation rates with precision is essential for ensuring stability, efficacy, and optimal deployment strategies, particularly given the unique challenges posed by their rapid degradation. This study introduces a comprehensive approach that integrates bioinformatic insights with advanced computational methodologies to address these challenges. A novel tetramer-label encoding approach (4-mer-lbA) is proposed, integrating biological relevance with data-driven analysis to enhance predictive accuracy. To further optimize model performance, two advanced hyperparameter optimization (HPO) techniques, the Dropout-Enhanced Technique (DEet) and the Hyperparameter Optimization Algorithm Penalizer (HOPeR), are proposed to mitigate overfitting, address inefficiencies in conventional HPO algorithms (HPOAs), and accelerate model convergence. The methodologies were validated on mRNA degradation datasets using deep neural network (DNN) architectures, with particular attention to the comparative performance of sequential and wrapped architectural designs. Results demonstrate that sequential architectures outperform wrapped models by reducing overfitting and computational demands. The integration of DEet and HOPeR further optimized hyperparameter exploration, with DEet enhancing model robustness through dropout regularization and HOPeR introducing adaptive penalties to systematically eliminate suboptimal configurations. The experimental outcomes highlight significant advancements in convergence rates and error reduction, particularly in complex models such as the 3-layer-wrapped Bidirectional Long Short-Term Memory (3wBiLSTM). By the 100th epoch, training and validation losses reached 0.0023 and 0.0029, respectively, indicating a substantial improvement over baseline models. These methodologies extend beyond mRNA vaccine research, demonstrating versatility across diverse machine learning domains. By addressing critical challenges in HPO and predictive modeling, the study offers scalable and robust solutions for advancing biotechnology and interdisciplinary research.
ISSN: 2169-3536
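
Illustrative sketch (not from the article): the record names the Dropout-Enhanced Technique (DEet) and the Hyperparameter Optimization Algorithm Penalizer (HOPeR) but does not specify their algorithms, so the short Python example below only illustrates the general idea the summary describes: a hyperparameter search in which the dropout rate is itself a tuned hyperparameter and repeatedly poor configurations accumulate an adaptive penalty that steers the search away from them. The search space, the validation_loss stand-in, and the penalty rule (penalized_search, penalty_step) are assumptions made for demonstration, not the authors' method.

    # Sketch of a penalized hyperparameter search, assuming the general idea
    # described in the abstract; DEet/HOPeR internals are not published here.
    import random

    # Hypothetical search space: dropout rate is treated as a tuned
    # hyperparameter alongside learning rate and layer width.
    SEARCH_SPACE = {
        "dropout":       [0.1, 0.2, 0.3, 0.4, 0.5],
        "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
        "units":         [32, 64, 128],
    }

    def sample_config():
        """Draw one candidate configuration uniformly from the space."""
        return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

    def validation_loss(config):
        """Stand-in for training a DNN and returning its validation loss.
        In practice this would train and evaluate the model instead."""
        base = 0.01 / (config["units"] ** 0.5)
        drop_term = abs(config["dropout"] - 0.3) * 0.005   # toy optimum near 0.3
        lr_term = abs(config["learning_rate"] - 1e-3) * 2.0
        return base + drop_term + lr_term + random.gauss(0, 1e-4)

    def penalized_search(n_trials=30, penalty_step=0.002):
        """Random search with an adaptive penalty on hyperparameter values
        that keep producing above-median losses (an assumed penalty rule)."""
        penalties = {}          # (name, value) -> accumulated penalty
        history, best = [], None
        for _ in range(n_trials):
            cfg = sample_config()
            # Penalized score biases selection away from discredited values.
            score = validation_loss(cfg) + sum(
                penalties.get((k, v), 0.0) for k, v in cfg.items())
            history.append(score)
            if best is None or score < best[0]:
                best = (score, cfg)
            # Penalize every value used in a worse-than-median trial.
            if score > sorted(history)[len(history) // 2]:
                for k, v in cfg.items():
                    penalties[(k, v)] = penalties.get((k, v), 0.0) + penalty_step
        return best

    if __name__ == "__main__":
        random.seed(0)
        score, cfg = penalized_search()
        print(f"best penalized score: {score:.5f}, config: {cfg}")

In a real setting, validation_loss would train and evaluate the DNN described in the summary (for example, the 3wBiLSTM model) on an mRNA degradation dataset; the penalty mechanism shown here is only one plausible way to "systematically eliminate suboptimal configurations" as the abstract puts it.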