UltraLightSqueezeNet: A Deep Learning Architecture for Malaria Classification With Up to 54× Fewer Trainable Parameters for Resource Constrained Devices
| Main Authors: | Suresh Babu Nettur, Shanthi Karpurapu, Unnati Nettur, Likhit Sagar Gajja, Sravanthy Myneni, Akhil Dusi, Lalithya Posham |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | Malaria NIH dataset; lightweight; compact; SqueezeNet; fire module; AlexNet |
| Online Access: | https://ieeexplore.ieee.org/document/11007680/ |
| _version_ | 1849325378729934848 |
|---|---|
| author | Suresh Babu Nettur, Shanthi Karpurapu, Unnati Nettur, Likhit Sagar Gajja, Sravanthy Myneni, Akhil Dusi, Lalithya Posham |
| collection | DOAJ |
| description | Lightweight deep learning approaches for malaria detection have gained attention for their potential to enhance diagnostics in resource-constrained environments. For our study, we selected SqueezeNet1.1, as it is one of the most popular lightweight architectures. SqueezeNet1.1 is a later version of SqueezeNet1.0 and is 2.4 times more computationally efficient than the original model. We proposed and implemented three ultra-lightweight variants of the SqueezeNet1.1 architecture, namely Variant 1 (one fire module), Variant 2 (two fire modules), and Variant 3 (four fire modules), all more compact than SqueezeNet1.1 (eight fire modules). These models were implemented to identify the best-performing variant, one that achieves superior computational efficiency without sacrificing accuracy in malaria blood cell classification. The models were trained and evaluated on the NIH Malaria dataset, and each model’s performance was assessed using accuracy, recall, precision, F1-score, and Area Under the Curve (AUC). The results show that the SqueezeNet1.1 model achieves the highest performance across all metrics, with a classification accuracy of 97.12%. Variant 3 (four fire modules) offers a competitive alternative, delivering nearly identical results (96.55% accuracy) with a 6× reduction in computational overhead compared to SqueezeNet1.1. Variant 2 and Variant 1 perform slightly below Variant 3, with Variant 2 (two fire modules) reducing computational overhead by 28× and Variant 1 (one fire module) achieving a 54× reduction in trainable parameters compared to SqueezeNet1.1. These findings demonstrate that our SqueezeNet1.1 architecture variants provide a flexible approach to malaria detection, enabling the selection of a variant that balances resource constraints and performance. |
| format | Article |
| id | doaj-art-edd3ea0058d44f2c98adb56e56a6817b |
| institution | Kabale University |
| issn | 2169-3536 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| spelling | IEEE Access, vol. 13, pp. 89428–89440, 2025-01-01; DOI: 10.1109/ACCESS.2025.3571696; IEEE document 11007680; ISSN 2169-3536; record indexed 2025-08-20T03:48:26Z. Authors and affiliations (in order): Suresh Babu Nettur (https://orcid.org/0009-0007-7453-7870), Independent Researcher, Virginia Beach, VA, USA; Shanthi Karpurapu (https://orcid.org/0009-0004-0957-2010), Independent Researcher, Virginia Beach, VA, USA; Unnati Nettur, Department of Computer Science, Virginia Tech, Blacksburg, VA, USA; Likhit Sagar Gajja, Department of Computer Science, BML Munjal University, Kapriwas, Haryana, India; Sravanthy Myneni, Independent Researcher, Virginia Beach, VA, USA; Akhil Dusi, Department of Information Systems, Indiana Tech, Fort Wayne, IN, USA; Lalithya Posham, School of International Education, Nanjing Medical University, Nanjing, Jiangsu, China. |
| title | UltraLightSqueezeNet: A Deep Learning Architecture for Malaria Classification With Up to 54× Fewer Trainable Parameters for Resource Constrained Devices |
| topic | Malaria NIH dataset; lightweight; compact; SqueezeNet; fire module; AlexNet |
| url | https://ieeexplore.ieee.org/document/11007680/ |
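The abstract above describes ultra-lightweight variants built from SqueezeNet-style fire modules (a 1×1 "squeeze" convolution feeding parallel 1×1 and 3×3 "expand" convolutions), using one, two, or four modules instead of the eight in SqueezeNet1.1. The sketch below is a minimal, hypothetical PyTorch illustration of that idea only: the `Fire` and `TinySqueezeVariant` classes, their channel widths, the stem, and the classifier head are illustrative assumptions and are not taken from the paper.

```python
# Minimal, hypothetical sketch of a SqueezeNet-style fire module and a reduced
# variant with a configurable number of fire modules (the paper's variants use
# 1, 2, and 4 modules vs. 8 in SqueezeNet1.1). Layer widths and the classifier
# head are illustrative assumptions, not the authors' published configuration.
import torch
import torch.nn as nn


class Fire(nn.Module):
    """Fire module: a 1x1 squeeze conv feeding parallel 1x1 and 3x3 expand convs."""

    def __init__(self, in_ch: int, squeeze_ch: int, expand_ch: int) -> None:
        super().__init__()
        self.squeeze = nn.Sequential(nn.Conv2d(in_ch, squeeze_ch, kernel_size=1), nn.ReLU(inplace=True))
        self.expand1x1 = nn.Sequential(nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1), nn.ReLU(inplace=True))
        self.expand3x3 = nn.Sequential(nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.squeeze(x)
        # Concatenate the two expand branches along the channel dimension.
        return torch.cat([self.expand1x1(x), self.expand3x3(x)], dim=1)


class TinySqueezeVariant(nn.Module):
    """SqueezeNet-like binary classifier with a configurable number of fire modules."""

    def __init__(self, num_fire: int = 4, num_classes: int = 2) -> None:
        super().__init__()
        layers = [nn.Conv2d(3, 64, kernel_size=3, stride=2), nn.ReLU(inplace=True),
                  nn.MaxPool2d(kernel_size=3, stride=2)]
        in_ch = 64
        for i in range(num_fire):
            squeeze_ch = 16 * (i // 2 + 1)   # assumed width schedule, not the paper's
            expand_ch = 64 * (i // 2 + 1)
            layers.append(Fire(in_ch, squeeze_ch, expand_ch))
            in_ch = 2 * expand_ch            # the two expand branches are concatenated
        self.features = nn.Sequential(*layers)
        # SqueezeNet-style head: dropout, 1x1 conv to class scores, global average pooling.
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Conv2d(in_ch, num_classes, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.flatten(self.classifier(self.features(x)), 1)


if __name__ == "__main__":
    # Compare trainable-parameter counts for the three variant sizes named in the abstract.
    for n in (1, 2, 4):
        model = TinySqueezeVariant(num_fire=n)
        params = sum(p.numel() for p in model.parameters() if p.requires_grad)
        print(f"{n} fire module(s): {params:,} trainable parameters")
        # Sanity check on a dummy 224x224 RGB blood-smear-sized input.
        assert model(torch.randn(1, 3, 224, 224)).shape == (1, 2)
```

Counting trainable parameters as in the final loop is the natural way to verify reductions of the kind reported (6×, 28×, 54×), although the exact ratios depend on the authors' actual layer configuration rather than the assumed widths used here.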