Stochastic Gradient Descent-like relaxation is equivalent to Metropolis dynamics in discrete optimization and inference problems

Abstract: Is Stochastic Gradient Descent (SGD) substantially different from Metropolis Monte Carlo dynamics? This is a fundamental question for understanding the most widely used training algorithm in Machine Learning, yet it has remained unanswered until now. Here we show that, in discrete optimization and inference problems, the dynamics of an SGD-like algorithm closely resemble those of Metropolis Monte Carlo at a properly chosen temperature, which depends on the mini-batch size. This quantitative matching holds both at equilibrium and in the out-of-equilibrium regime, despite fundamental differences between the two algorithms (e.g. SGD does not satisfy detailed balance). The equivalence allows us to use known results on the performance and limits of Monte Carlo algorithms to optimize the mini-batch size in the SGD-like algorithm and to make it efficient at recovering the signal in hard inference problems.
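To make the claimed correspondence concrete, here is a minimal, self-contained sketch (Python; not code from the paper) of the two dynamics on a toy Ising-like cost function with random couplings: a standard Metropolis single-spin-flip step, and an "SGD-like" step that flips a spin greedily whenever the energy difference, estimated on a random mini-batch of interaction terms, is negative. The model, system size N, and batch size are placeholder assumptions chosen for illustration; the abstract's point is that the mini-batch sampling noise acts like an effective Metropolis temperature that grows as the batch shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Ising-like cost with symmetric Gaussian couplings (placeholder model,
# not the paper's benchmark): E(s) = -1/2 * s^T J s, with s_i in {-1, +1}.
N = 200
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)

def energy(s):
    return -0.5 * s @ J @ s

def delta_E(s, i, batch=None):
    # Energy change from flipping spin i. If `batch` (a subset of indices)
    # is given, estimate the local field from those terms only, rescaled.
    if batch is None:
        field = J[i] @ s
    else:
        field = (N / len(batch)) * (J[i, batch] @ s[batch])
    return 2.0 * s[i] * field

def metropolis_step(s, T):
    # Standard Metropolis: accept a single-spin flip with
    # probability min(1, exp(-dE / T)).
    i = rng.integers(N)
    dE = delta_E(s, i)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]

def sgd_like_step(s, batch_size):
    # SGD-like move: flip greedily whenever the *mini-batch estimate*
    # of dE is negative. The sampling noise plays the role of an
    # effective temperature that increases as the batch shrinks.
    i = rng.integers(N)
    batch = rng.choice(N, size=batch_size, replace=False)
    if delta_E(s, i, batch) < 0:
        s[i] = -s[i]

s = rng.choice(np.array([-1, 1]), N)
for _ in range(50_000):
    sgd_like_step(s, batch_size=20)  # smaller batch -> noisier -> "hotter"
print("energy per spin:", energy(s) / N)
```

Matching the two dynamics in the spirit of the paper would then amount to tuning T in metropolis_step until its relaxation curves track those of sgd_like_step at a given batch_size.
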

Bibliographic Details
Main Authors: Maria Chiara Angelini, Angelo Giorgio Cavaliere, Raffaele Marino, Federico Ricci-Tersenghi
Format: Article
Language: English
Published: Nature Portfolio, 2024-05-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-024-62625-8
ISSN: 2045-2322
Affiliations: Dipartimento di Fisica, Sapienza Università di Roma (Angelini, Ricci-Tersenghi); Cybermedia Center, Osaka University (Cavaliere); Dipartimento di Fisica e Astronomia, Università degli studi di Firenze (Marino)