Novel dual gland GAN architecture improves human protein localization classification using salivary and pituitary gland inspired loss functions

Bibliographic Details
Main Authors: Hanaa Salem Marie, Moatasem M. Draz, Waleed Abd Elkhalik, Mostafa Elbaz
Format: Article
Language: English
Published: Nature Portfolio 2025-08-01
Series: Scientific Reports
Subjects:
Online Access: https://doi.org/10.1038/s41598-025-11254-w
Description
Summary: Cellular classification is essential for understanding biological processes and disease mechanisms. This paper introduces a novel approach that employs two complementary loss functions within a Generative Adversarial Network (GAN) framework for processing images from the Human Protein Atlas dataset. Our method introduces the “Salivary Gland” loss function (SG-Loss), which addresses missing-pixel imputation through a computational mechanism that models the graded secretion patterns of acinar cells, incorporating multi-scale contextual information to reconstruct incomplete cellular features. This is paired with the “Pituitary Gland” loss function (PG-Loss), which preserves structural integrity through a homeostatic regularization approach that, unlike conventional smoothing techniques, adaptively weights pixel relationships based on subcellular compartment boundaries. SG-Loss specifically targets discontinuities in protein expression patterns, while PG-Loss maintains biological plausibility by enforcing organelle-specific constraints learned from annotated training data. The proposed Dual-Gland GAN demonstrates superior performance with an Inception Score of 9.83 (± 0.31) and an MS-SSIM diversity of 0.187 (± 0.021). The model achieves precision and recall of 0.872 and 0.835, respectively, for an F1-score of 0.853. Training stability is reflected in low generator and discriminator loss variance (0.028 and 0.032), with convergence reached in 78 epochs. Comprehensive evaluation shows high quality and diversity scores (0.912 and 0.894), yielding a combined score of 0.903 and demonstrating the effectiveness of this biologically inspired approach for cellular image generation and classification. The results further confirm the architecture's effectiveness in improving classification performance. (A hedged code sketch of the dual-loss idea follows the record below.)
ISSN: 2045-2322
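
The abstract describes SG-Loss and PG-Loss only conceptually (masked-pixel imputation with contextual information, and boundary-aware regularization) and does not give their closed forms. The PyTorch sketch below is a hypothetical rendering of those two roles inside a standard non-saturating GAN generator objective; the function names, the `missing_mask` and `boundary_weight` inputs, and the lambda weights are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def sg_style_loss(fake, real, missing_mask):
    """Hypothetical stand-in for SG-Loss: L1 reconstruction restricted to
    the missing-pixel region indicated by missing_mask (1 = missing)."""
    return F.l1_loss(fake * missing_mask, real * missing_mask)


def pg_style_loss(fake, boundary_weight):
    """Hypothetical stand-in for PG-Loss: total-variation-style smoothing
    whose per-pixel weight is reduced near annotated compartment boundaries,
    so organelle edges are not blurred away."""
    dh = (fake[:, :, 1:, :] - fake[:, :, :-1, :]).abs()
    dw = (fake[:, :, :, 1:] - fake[:, :, :, :-1]).abs()
    return (dh * boundary_weight[:, :, 1:, :]).mean() + \
           (dw * boundary_weight[:, :, :, 1:]).mean()


def generator_loss(G, D, z, real, missing_mask, boundary_weight,
                   lambda_sg=10.0, lambda_pg=1.0):
    """Non-saturating adversarial loss plus the two auxiliary terms.
    The weights lambda_sg and lambda_pg are illustrative, not from the paper."""
    fake = G(z)
    logits = D(fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lambda_sg * sg_style_loss(fake, real, missing_mask) \
               + lambda_pg * pg_style_loss(fake, boundary_weight)
```

In this reading, `boundary_weight` would be near zero on annotated organelle boundaries (so edges are preserved) and closer to one inside compartments; how the paper actually derives those weights, and the exact formulations of SG-Loss and PG-Loss, are specified in the full text rather than the abstract.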