Stochastic Gradient Descent-like relaxation is equivalent to Metropolis dynamics in discrete optimization and inference problems
Abstract: Is Stochastic Gradient Descent (SGD) substantially different from Metropolis Monte Carlo dynamics? This is a fundamental question for understanding the most widely used training algorithm in Machine Learning, yet it has received no answer until now. Here we show that in discret...
| Main Authors: | Maria Chiara Angelini, Angelo Giorgio Cavaliere, Raffaele Marino, Federico Ricci-Tersenghi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2024-05-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-024-62625-8 |
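The abstract's central claim, an equivalence between SGD-like relaxation and Metropolis Monte Carlo dynamics on discrete problems, can be pictured with a small experiment. The sketch below is a hypothetical toy, not the paper's construction: it assumes an Ising-like cost with Gaussian couplings, a single-spin-flip Metropolis rule, and a mini-batch "SGD-like" rule that accepts flips against a noisy estimate of the local field. All model and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy problem (an assumption, not the paper's model):
# minimize E(s) = -0.5 * s^T J s over discrete spins s in {-1, +1}^N.
N = 100
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
J = (J + J.T) / 2          # symmetric couplings
np.fill_diagonal(J, 0.0)   # no self-interaction

def energy(s):
    return -0.5 * s @ J @ s

def metropolis_step(s, beta):
    """Single-spin-flip Metropolis move at inverse temperature beta."""
    i = rng.integers(N)
    dE = 2.0 * s[i] * (J[i] @ s)  # exact energy change from flipping spin i
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        s[i] = -s[i]
    return s

def sgd_like_step(s, beta, batch):
    """SGD-like relaxation: estimate the local field from a random
    mini-batch of interactions, then decide the flip from that noisy
    estimate. This mimics, but does not reproduce, the dynamics the
    paper analyses."""
    i = rng.integers(N)
    idx = rng.choice(N, size=batch, replace=False)
    noisy_field = (N / batch) * (J[i, idx] @ s[idx])  # unbiased mini-batch estimate of J[i] @ s
    dE = 2.0 * s[i] * noisy_field
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        s[i] = -s[i]
    return s

s_mc = rng.choice([-1, 1], size=N).astype(float)
s_sgd = s_mc.copy()
for t in range(20000):
    s_mc = metropolis_step(s_mc, beta=2.0)
    s_sgd = sgd_like_step(s_sgd, beta=2.0, batch=20)

print(f"Metropolis energy per spin: {energy(s_mc) / N:.3f}")
print(f"SGD-like   energy per spin: {energy(s_sgd) / N:.3f}")
```

In this toy, shrinking the mini-batch makes the accepted moves noisier, which acts much like raising the temperature of the Metropolis chain; a noise-temperature correspondence of this kind is in the spirit of the equivalence the paper establishes.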
Similar Items
- Forest fire risk assessment model optimized by stochastic average gradient descent
  by: Zexin Fu, et al. | Published: (2025-01-01)
- Stochastic gradient descent algorithm preserving differential privacy in MapReduce framework
  by: Yihan YU, et al. | Published: (2018-01-01)
- Smoothing gradient descent algorithm for the composite sparse optimization
  by: Wei Yang, et al. | Published: (2024-11-01)
- Several Guaranteed Descent Conjugate Gradient Methods for Unconstrained Optimization
  by: San-Yang Liu, et al. | Published: (2014-01-01)
- Function approximation method based on weights gradient descent in reinforcement learning
  by: Xiaoyan QIN, et al. | Published: (2023-08-01)