Optimisation of sparse deep autoencoders for dynamic network embedding
Abstract: Network embedding (NE) aims to learn the latent properties of complex networks represented in a low-dimensional feature space. However, existing deep learning-based NE methods are time-consuming, as they must train a dense deep neural network architecture with a large number of unknown weight parameters. A sparse deep autoencoder for dynamic NE (called SPDNE) is proposed, which aims to learn network structures while preserving node evolution at a low computational cost. SPDNE replaces the fully connected architecture of the deep autoencoder with an optimal sparse architecture while maintaining model performance in dynamic NE. An adaptive simulated annealing algorithm is then proposed to find the optimal sparse architecture for the deep autoencoder. The performance of SPDNE with three dynamic NE models (the sparse architecture-based deep autoencoder method, DynGEM, and ElvDNE) is evaluated on three well-known benchmark networks and five real-world networks. The experimental results demonstrate that SPDNE reduces about 70% of the weight parameters of the deep autoencoder architecture during training while preserving the performance of these dynamic NE models. The results also show that SPDNE achieves the highest accuracy on 72 out of 96 edge prediction and network reconstruction tasks compared with state-of-the-art dynamic NE algorithms.

Main Authors: Huimei Tang; Yutao Zhang; Lijia Ma; Qiuzhen Lin; Liping Huang; Jianqiang Li; Maoguo Gong
Format: Article
Language: English
Published: Wiley, 2024-12-01
Series: CAAI Transactions on Intelligence Technology
Subjects: deep autoencoder; dynamic networks; low-dimensional feature space; network embedding; sparse structure
Online Access: https://doi.org/10.1049/cit2.12367
author: Huimei Tang; Yutao Zhang; Lijia Ma; Qiuzhen Lin; Liping Huang; Jianqiang Li; Maoguo Gong
collection: DOAJ
id: doaj-art-f943469b92564a6dbb34831ccf9d8f76
institution: Kabale University
issn: 2468-2322
Citation: CAAI Transactions on Intelligence Technology, vol. 9, no. 6, pp. 1361–1376, December 2024. DOI: 10.1049/cit2.12367
Author affiliations:
- Huimei Tang, Yutao Zhang, Lijia Ma, Qiuzhen Lin, Jianqiang Li: College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
- Liping Huang: Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore
- Maoguo Gong: Key Laboratory of Collaborative Intelligence Systems, Ministry of Education, Xidian University, Xi'an, China
topic: deep autoencoder; dynamic networks; low-dimensional feature space; network embedding; sparse structure
url: https://doi.org/10.1049/cit2.12367
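The abstract centres on one idea: replacing the dense weight matrices of a deep autoencoder with sparse ones, so that most parameters are never stored or trained. The sketch below is a minimal, hypothetical NumPy illustration of that idea — a fixed random binary mask stands in for the optimised sparse architecture that the paper searches for with its adaptive annealing procedure. It is not the authors' SPDNE implementation; all names here (`sparse_layer`, `forward`, `density`) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_layer(n_in, n_out, density=0.3):
    """A weight matrix plus a binary mask keeping only `density`
    of the connections (i.e. ~70% of weights removed)."""
    w = rng.standard_normal((n_in, n_out)) * 0.1
    mask = rng.random((n_in, n_out)) < density
    return w, mask

def forward(x, layers):
    """Autoencoder forward pass; each masked matrix acts as a sparse layer."""
    h = x
    for w, mask in layers:
        h = np.tanh(h @ (w * mask))
    return h

# Toy network snapshot: each row of the adjacency matrix is a node's features.
adj = (rng.random((16, 16)) < 0.2).astype(float)

# Encoder 16 -> 4 and decoder 4 -> 16, both sparsified by their masks.
layers = [sparse_layer(16, 4), sparse_layer(4, 16)]
recon = forward(adj, layers)

kept = sum(int(m.sum()) for _, m in layers)
total = sum(m.size for _, m in layers)
print(f"active weights: {kept}/{total}")
print("reconstruction shape:", recon.shape)
```

In a real training loop only the unmasked weights would receive gradient updates, and — per the abstract — the mask itself would not be fixed at random but rewired by the adaptive simulated annealing search.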