Gradient Method with Step Adaptation

The paper addresses the problem of constructing step-adjustment algorithms for a gradient method based on the steepest-descent principle. Extending, formalizing, and parameterizing this step-adjustment principle leads to gradient-type methods with incomplete relaxation or over-relaxation; such methods require only the gradient of the function to be computed at each iteration. Optimizing the parameters of the step-adaptation algorithms yields methods that significantly exceed the steepest-descent method in convergence rate. In this paper, we present a universal step-adjustment algorithm that does not require the selection of optimal parameters. The algorithm is based on the orthogonality of successive gradients and on replacing complete relaxation with some degree of incomplete relaxation or over-relaxation. Its convergence rate matches that of algorithms with optimized step-adaptation parameters. In our experiments, the proposed algorithm requires, on average, 2.7 times fewer iterations than the steepest-descent method. A further advantage of the proposed methods is their operability under interference; the paper presents examples of solving test problems in which the interference values are vectors uniformly distributed in a ball whose radius is 8 times the gradient norm.
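The abstract describes two mechanisms that are easy to illustrate: a multiplicative step-adaptation rule driven by the orthogonality of successive gradients, and an interference model in which the gradient is corrupted by a vector uniformly distributed in a ball whose radius is a multiple of the gradient norm. The Python sketch below shows the general idea only; it is not the authors' published algorithm, and the function names, the expansion/contraction factors `up`/`down`, and the demo problem are illustrative assumptions.

```python
import numpy as np

def noisy_gradient(grad, x, noise_ratio=8.0, rng=None):
    """Gradient corrupted by a vector uniformly distributed in a ball
    whose radius is noise_ratio times the gradient norm -- the
    interference model described in the abstract (ratio 8 there)."""
    rng = rng or np.random.default_rng()
    g = grad(x)
    d = rng.standard_normal(g.shape)
    d /= np.linalg.norm(d)               # random direction on the unit sphere
    r = rng.uniform() ** (1.0 / g.size)  # radius giving a uniform ball sample
    return g + noise_ratio * np.linalg.norm(g) * r * d

def adaptive_gradient_descent(grad, x, h=1e-2, up=2.0, down=0.5, iters=500):
    """Gradient descent with multiplicative step adaptation (a sketch).

    At the exact steepest-descent step, successive gradients are
    orthogonal. A positive inner product of successive gradients
    therefore suggests the step was too short (incomplete relaxation),
    so the step is enlarged; a negative one suggests overshoot
    (over-relaxation), so the step is shrunk.
    """
    g_prev = grad(x)
    for _ in range(iters):
        x = x - h * g_prev
        g = grad(x)
        h *= up if np.dot(g, g_prev) > 0.0 else down
        g_prev = g
    return x

# Usage: an ill-conditioned quadratic f(x) = 0.5 * x^T A x,
# without and with interference (a mild ratio here; the paper's
# experiments use ratio 8 with the authors' more robust scheme).
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
x0 = np.array([10.0, 1.0])
print(adaptive_gradient_descent(grad, x0))
print(adaptive_gradient_descent(lambda x: noisy_gradient(grad, x, noise_ratio=1.0), x0))
```

With `up`/`down` fixed at 2.0 and 0.5 this rule is only a heuristic; the paper's contribution is precisely a parameter-free step-adjustment scheme whose convergence rate matches that of the parameter-optimized variants and which remains operable under the ratio-8 interference.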

Bibliographic Details
Main Authors: Vladimir Krutikov, Elena Tovbis, Svetlana Gutova, Ivan Rozhnov, Lev Kazakovtsev
Format: Article
Language: English
Published: MDPI AG, 2024-12-01
Series: Mathematics
ISSN: 2227-7390
DOI: 10.3390/math13010061
Subjects: minimization method; relaxation; gradient method; step adaptation; convergence rate
Online Access:https://www.mdpi.com/2227-7390/13/1/61
Citation: Mathematics, 2024, vol. 13, no. 1, article 61.
Author Affiliations:
Vladimir Krutikov, Elena Tovbis, Ivan Rozhnov, Lev Kazakovtsev: Institute of Informatics and Telecommunications, Reshetnev Siberian State University of Science and Technology, 31 Krasnoyarskii Rabochii Prospekt, 660037 Krasnoyarsk, Russia
Svetlana Gutova: Department of Applied Mathematics, Kemerovo State University, 6 Krasnaya Street, 650043 Kemerovo, Russia