Deep learning for multi-modal data fusion in IoT applications
With rapid advances in technology, the Internet of Things (IoT) has emerged with many diverse applications. A massive amount of data is generated and processed every day by IoT sensors in these applications. This sensor data is categorized as either structured or unstructured. Structured data is simpler to process, whereas unstructured data is complex to process due to its diverse modalities. IoT applications such as autonomous navigation, environmental monitoring and smart surveillance require semantic segmentation, which relies on detailed scene understanding. Single-modal data such as RGB, thermal or depth images cannot provide this detailed information on its own. This research proposes a robust solution that fuses multimodal data in a deep learning-based hybrid architecture combining a generative model with a deep convolutional network. The unified model fuses RGB, thermal and depth images for semantic segmentation to improve accuracy and reliability. The results validate the effectiveness of the proposed technique.
Main Authors: | Anila Saghir, Anum Akbar, Asma Zafar, Asif Hassan |
---|---|
Format: | Article |
Language: | English |
Published: | Mehran University of Engineering and Technology, 2025-01-01 |
Series: | Mehran University Research Journal of Engineering and Technology |
Online Access: | https://publications.muet.edu.pk/index.php/muetrj/article/view/3171 |
author | Anila Saghir; Anum Akbar; Asma Zafar; Asif Hassan |
collection | DOAJ |
description | With rapid advances in technology, the Internet of Things (IoT) has emerged with many diverse applications. A massive amount of data is generated and processed every day by IoT sensors in these applications. This sensor data is categorized as either structured or unstructured. Structured data is simpler to process, whereas unstructured data is complex to process due to its diverse modalities. IoT applications such as autonomous navigation, environmental monitoring and smart surveillance require semantic segmentation, which relies on detailed scene understanding. Single-modal data such as RGB, thermal or depth images cannot provide this detailed information on its own. This research proposes a robust solution that fuses multimodal data in a deep learning-based hybrid architecture combining a generative model with a deep convolutional network. The unified model fuses RGB, thermal and depth images for semantic segmentation to improve accuracy and reliability. The results validate the effectiveness of the proposed technique. |
format | Article |
id | doaj-art-73562e909d2d4c2b8fc35662e14a7f22 |
institution | Kabale University |
issn | 0254-7821; 2413-7219 |
language | English |
publishDate | 2025-01-01 |
publisher | Mehran University of Engineering and Technology |
record_format | Article |
series | Mehran University Research Journal of Engineering and Technology |
spelling | Published in Mehran University Research Journal of Engineering and Technology (Mehran University of Engineering and Technology), vol. 44, no. 1, 2025-01-01, pp. 75-81. DOI: 10.22581/muet1982.3171. Affiliations: Anila Saghir, Department of Telecommunication Engineering, Sir Syed University of Engineering Technology, Karachi; Anum Akbar, Department of Computer Science, Sir Syed University of Engineering Technology, Karachi; Asma Zafar, Department of Mathematics, Sir Syed University of Engineering Technology, Karachi; Asif Hassan, Department of Telecommunication Engineering, Sir Syed University of Engineering Technology, Karachi |
title | Deep learning for multi-modal data fusion in IoT applications |
url | https://publications.muet.edu.pk/index.php/muetrj/article/view/3171 |