Gradient Amplification: An Efficient Way to Train Deep Neural Networks
Improving the performance of deep learning models and reducing their training times are ongoing challenges in deep neural networks. Several approaches have been proposed to address these challenges, one of which is to increase the depth of the network. Such deeper networks not only increase trai...
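The record above does not spell out the method, but gradient amplification can be read generically as scaling the gradients of selected layers by a factor greater than one during the parameter update, so that those layers learn faster. The sketch below is only an illustration of that reading; the function name, layer names, amplification factor, and layer-selection rule are assumptions, not details taken from the article.

```python
import numpy as np

def amplified_gradient_step(params, grads, lr, amplify_layers, factor=2.0):
    """One gradient-descent update that amplifies the gradients of
    selected layers by a constant factor.

    Generic illustration only: which layers to amplify and by how much
    are hypothetical choices, not the article's actual scheme.
    """
    updated = {}
    for name, p in params.items():
        g = grads[name]
        if name in amplify_layers:
            g = factor * g  # amplify this layer's gradient
        updated[name] = p - lr * g
    return updated

# Toy usage: two "layers", each with a scalar parameter and equal gradients.
params = {"layer1": np.array(1.0), "layer2": np.array(1.0)}
grads = {"layer1": np.array(0.1), "layer2": np.array(0.1)}
new_params = amplified_gradient_step(
    params, grads, lr=0.5, amplify_layers={"layer1"}, factor=2.0
)
print(new_params["layer1"], new_params["layer2"])  # 0.9 0.95
```

With the factor set to 2.0, the amplified layer moves twice as far per step as the unamplified one, which is the basic effect such a scheme exploits to speed up training.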
| Main Authors: | Sunitha Basodi, Chunyan Ji, Haiping Zhang, Yi Pan |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Tsinghua University Press, 2020-09-01 |
| Series: | Big Data Mining and Analytics |
| Online Access: | https://www.sciopen.com/article/10.26599/BDMA.2020.9020004 |
Similar Items
- Enhancing classification efficiency in capsule networks through windowed routing: tackling gradient vanishing, dynamic routing, and computational complexity challenges
  by: Gangqi Chen, et al.
  Published: (2024-11-01)
- Comparative Analysis of Gradient Descent Learning Algorithms in Artificial Neural Networks for Forecasting Indonesian Rice Prices
  by: Rica Ramadana, et al.
  Published: (2024-08-01)
- On Quantum Natural Policy Gradients
  by: Andre Sequeira, et al.
  Published: (2024-01-01)
- Function approximation method based on weights gradient descent in reinforcement learning
  by: Xiaoyan QIN, et al.
  Published: (2023-08-01)
- Actor-critic algorithm with incremental dual natural policy gradient
  by: Peng ZHANG, et al.
  Published: (2017-04-01)