Learning Automata Based Incremental Learning Method for Deep Neural Networks

Deep learning methods have achieved impressive performance on many large-scale datasets for machine learning tasks such as visual recognition and natural language processing. Most recent progress in deep learning has relied on supervised learning, for which the whole dataset with respect to a...

Bibliographic Details
Main Authors: Haonan Guo, Shilin Wang, Jianxun Fan, Shenghong Li
Format: Article
Language:English
Published: IEEE 2019-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/8674746/
author Haonan Guo
Shilin Wang
Jianxun Fan
Shenghong Li
author_facet Haonan Guo
Shilin Wang
Jianxun Fan
Shenghong Li
author_sort Haonan Guo
collection DOAJ
description Deep learning methods have achieved impressive performance on many large-scale datasets for machine learning tasks such as visual recognition and natural language processing. Most recent progress in deep learning has relied on supervised learning, for which the whole dataset for a specific task must be prepared before training. In real-world scenarios, however, the labeled data for the target classes are typically gathered incrementally over time, since collecting and annotating training data manually is cumbersome. This suggests sequentially training on a series of datasets in which training samples of new classes are gradually added, a setting called incremental learning. In this paper, we propose an effective incremental training method for deep neural networks based on learning automata. The main idea is to train a deep model with dynamic connections that can be either “activated” or “deactivated” on the different datasets of the incremental training stages. The proposed method mitigates the destruction of old features while new features are learned for the newly added training samples, leading to better performance in the incremental learning stages. Experiments on MNIST and CIFAR-100 demonstrate that our method can be applied to deep neural models over a long sequence of incremental training stages and achieves superior performance to both training from scratch and fine-tuning.
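The abstract's core mechanism is a network whose individual connections are "activated" or "deactivated" per incremental stage by learning automata, so that connections old stages rely on are not destroyed when new classes arrive. A minimal sketch of that idea follows; the class name, the linear reward-style update, and the mask-freezing rule are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

class GatedConnections:
    """Per-connection binary gates for one dense layer, driven by
    learning-automaton-style action probabilities (a sketch, not the
    paper's method). Connections a finished stage settled on are frozen
    so later stages cannot overwrite the old features."""

    def __init__(self, n_in, n_out):
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))  # layer weights
        self.p = np.full((n_in, n_out), 0.5)     # P(connection is active)
        self.frozen = np.zeros((n_in, n_out), dtype=bool)  # owned by old stages

    def sample_mask(self):
        """Sample an activate/deactivate action per connection;
        frozen connections always stay active."""
        return (rng.random(self.p.shape) < self.p) | self.frozen

    def reinforce(self, mask, beta, lr=0.1):
        """Push each free connection's probability toward the action just
        sampled, scaled by a reward signal beta in [0, 1] (e.g. the masked
        network's validation accuracy on the current stage's data)."""
        free = ~self.frozen
        on = mask & free
        off = ~mask & free
        self.p[on] += lr * beta * (1.0 - self.p[on])
        self.p[off] -= lr * beta * self.p[off]

    def end_stage(self, keep=0.9):
        """Freeze the connections this stage converged on."""
        self.frozen |= self.p > keep

    def forward(self, x, mask):
        """Masked forward pass: deactivated connections contribute nothing."""
        return x @ (self.W * mask)
```

In training, each incremental stage would loop: sample a mask, train and evaluate the masked network on the stage's dataset, feed the resulting reward back via `reinforce`, and call `end_stage` before the next dataset arrives.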
format Article
id doaj-art-383f7b2b7c93462d9727d0de7eb16f42
institution Kabale University
issn 2169-3536
language English
publishDate 2019-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj-art-383f7b2b7c93462d9727d0de7eb16f42 (updated 2025-01-09T00:00:40Z)
Language: English; Publisher: IEEE; Series: IEEE Access, ISSN 2169-3536; Published: 2019-01-01, vol. 7, pp. 41164-41171; DOI: 10.1109/ACCESS.2019.2907645; IEEE document 8674746
Title: Learning Automata Based Incremental Learning Method for Deep Neural Networks
Haonan Guo (https://orcid.org/0000-0003-4450-0683), Department of Electronic Engineering, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
Shilin Wang, School of Cyber Security, Shanghai Jiao Tong University, Shanghai, China
Jianxun Fan, Electronic Science and Technology Department, Beijing University of Posts and Telecommunications, Beijing, China
Shenghong Li, School of Cyber Security, Shanghai Jiao Tong University, Shanghai, China
Online Access: https://ieeexplore.ieee.org/document/8674746/
Subjects: Supervised learning; incremental learning; learning automata
spellingShingle Haonan Guo
Shilin Wang
Jianxun Fan
Shenghong Li
Learning Automata Based Incremental Learning Method for Deep Neural Networks
IEEE Access
Supervised learning
incremental learning
learning automata
title Learning Automata Based Incremental Learning Method for Deep Neural Networks
title_full Learning Automata Based Incremental Learning Method for Deep Neural Networks
title_fullStr Learning Automata Based Incremental Learning Method for Deep Neural Networks
title_full_unstemmed Learning Automata Based Incremental Learning Method for Deep Neural Networks
title_short Learning Automata Based Incremental Learning Method for Deep Neural Networks
title_sort learning automata based incremental learning method for deep neural networks
topic Supervised learning
incremental learning
learning automata
url https://ieeexplore.ieee.org/document/8674746/
work_keys_str_mv AT haonanguo learningautomatabasedincrementallearningmethodfordeepneuralnetworks
AT shilinwang learningautomatabasedincrementallearningmethodfordeepneuralnetworks
AT jianxunfan learningautomatabasedincrementallearningmethodfordeepneuralnetworks
AT shenghongli learningautomatabasedincrementallearningmethodfordeepneuralnetworks