A gradient-descent-like learning-based framework in surrogate-assisted evolutionary algorithms for expensive many-objective optimization
Abstract Surrogate-assisted evolutionary algorithms (SAEAs) commonly depend on traditional offspring generation methods such as simulated binary crossover and polynomial mutation, which often lead to suboptimal search efficiencies. This paper introduces the gradient-descent-like learning-based SAEA...
| Main Authors: | Chaoyi Sun, Bo Zhang, Hai Sun, Rui Feng |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-06-01 |
| Series: | Complex & Intelligent Systems |
| Subjects: | Expensive multiobjective optimization; Gradient-descent-like learning; Offspring generation; Surrogate-assisted evolutionary algorithms; Neural networks |
| Online Access: | https://doi.org/10.1007/s40747-025-01955-0 |
| _version_ | 1849331395245113344 |
|---|---|
| author | Chaoyi Sun; Bo Zhang; Hai Sun; Rui Feng |
| author_facet | Chaoyi Sun; Bo Zhang; Hai Sun; Rui Feng |
| author_sort | Chaoyi Sun |
| collection | DOAJ |
| description | Abstract Surrogate-assisted evolutionary algorithms (SAEAs) commonly depend on traditional offspring generation methods such as simulated binary crossover and polynomial mutation, which often lead to suboptimal search efficiencies. This paper introduces the gradient-descent-like learning-based SAEA (GDL-SAEA) designed for expensive multiobjective optimization problems. Our method aims to determine the learned possibly fastest convergence (i.e., gradient-descent-like) direction for each solution via a trained neural network, leveraging prior knowledge from the relationships within the current population and surrogate model. The population is segmented into three subgroups, generating triple training data through cosine similarity and the Mahalanobis distance. Notably, each elite solution within these groups serves as a label for the corresponding poor solution with a midpoint acting as an anchor, thereby enhancing the supervised learning of the convergence process. To implement the proposed framework, three representative SAEAs are embedded into the GDL-SAEA. Experimental evaluations on multiobjective benchmarks and real-world problems with up to 10 objectives reveal that GDL-SAEA outperforms seven state-of-the-art and classic algorithms. |
| format | Article |
| id | doaj-art-7741cd0ac1f44aa0aed1d01b0e85ef4f |
| institution | Kabale University |
| issn | 2199-4536; 2198-6053 |
| language | English |
| publishDate | 2025-06-01 |
| publisher | Springer |
| record_format | Article |
| series | Complex & Intelligent Systems |
| spelling | doaj-art-7741cd0ac1f44aa0aed1d01b0e85ef4f; 2025-08-20T03:46:37Z; eng; Springer; Complex & Intelligent Systems; 2199-4536; 2198-6053; 2025-06-01; 11; 8; 1; 28; 10.1007/s40747-025-01955-0; A gradient-descent-like learning-based framework in surrogate-assisted evolutionary algorithms for expensive many-objective optimization; Chaoyi Sun (College of Computer Science and Artificial Intelligence, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University); Bo Zhang (Shanghai Publishing and Printing College); Hai Sun (School of Management, Fudan University); Rui Feng (College of Computer Science and Artificial Intelligence, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University); Abstract: Surrogate-assisted evolutionary algorithms (SAEAs) commonly depend on traditional offspring generation methods such as simulated binary crossover and polynomial mutation, which often lead to suboptimal search efficiencies. This paper introduces the gradient-descent-like learning-based SAEA (GDL-SAEA) designed for expensive multiobjective optimization problems. Our method aims to determine the learned possibly fastest convergence (i.e., gradient-descent-like) direction for each solution via a trained neural network, leveraging prior knowledge from the relationships within the current population and surrogate model. The population is segmented into three subgroups, generating triple training data through cosine similarity and the Mahalanobis distance. Notably, each elite solution within these groups serves as a label for the corresponding poor solution with a midpoint acting as an anchor, thereby enhancing the supervised learning of the convergence process. To implement the proposed framework, three representative SAEAs are embedded into the GDL-SAEA. Experimental evaluations on multiobjective benchmarks and real-world problems with up to 10 objectives reveal that GDL-SAEA outperforms seven state-of-the-art and classic algorithms. https://doi.org/10.1007/s40747-025-01955-0; Expensive multiobjective optimization; Gradient-descent-like learning; Offspring generation; Surrogate-assisted evolutionary algorithms; Neural networks |
| spellingShingle | Chaoyi Sun; Bo Zhang; Hai Sun; Rui Feng; A gradient-descent-like learning-based framework in surrogate-assisted evolutionary algorithms for expensive many-objective optimization; Complex & Intelligent Systems; Expensive multiobjective optimization; Gradient-descent-like learning; Offspring generation; Surrogate-assisted evolutionary algorithms; Neural networks |
| title | A gradient-descent-like learning-based framework in surrogate-assisted evolutionary algorithms for expensive many-objective optimization |
| title_full | A gradient-descent-like learning-based framework in surrogate-assisted evolutionary algorithms for expensive many-objective optimization |
| title_fullStr | A gradient-descent-like learning-based framework in surrogate-assisted evolutionary algorithms for expensive many-objective optimization |
| title_full_unstemmed | A gradient-descent-like learning-based framework in surrogate-assisted evolutionary algorithms for expensive many-objective optimization |
| title_short | A gradient-descent-like learning-based framework in surrogate-assisted evolutionary algorithms for expensive many-objective optimization |
| title_sort | gradient descent like learning based framework in surrogate assisted evolutionary algorithms for expensive many objective optimization |
| topic | Expensive multiobjective optimization; Gradient-descent-like learning; Offspring generation; Surrogate-assisted evolutionary algorithms; Neural networks |
| url | https://doi.org/10.1007/s40747-025-01955-0 |
| work_keys_str_mv | AT chaoyisun agradientdescentlikelearningbasedframeworkinsurrogateassistedevolutionaryalgorithmsforexpensivemanyobjectiveoptimization AT bozhang agradientdescentlikelearningbasedframeworkinsurrogateassistedevolutionaryalgorithmsforexpensivemanyobjectiveoptimization AT haisun agradientdescentlikelearningbasedframeworkinsurrogateassistedevolutionaryalgorithmsforexpensivemanyobjectiveoptimization AT ruifeng agradientdescentlikelearningbasedframeworkinsurrogateassistedevolutionaryalgorithmsforexpensivemanyobjectiveoptimization AT chaoyisun gradientdescentlikelearningbasedframeworkinsurrogateassistedevolutionaryalgorithmsforexpensivemanyobjectiveoptimization AT bozhang gradientdescentlikelearningbasedframeworkinsurrogateassistedevolutionaryalgorithmsforexpensivemanyobjectiveoptimization AT haisun gradientdescentlikelearningbasedframeworkinsurrogateassistedevolutionaryalgorithmsforexpensivemanyobjectiveoptimization AT ruifeng gradientdescentlikelearningbasedframeworkinsurrogateassistedevolutionaryalgorithmsforexpensivemanyobjectiveoptimization |
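The abstract's training-data construction (pairing each poor solution with an elite solution via the Mahalanobis distance and cosine similarity, with their midpoint as an anchor) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names (`make_triples`, `mahalanobis`, `cosine_similarity`) and the exact rule for combining the two measures are assumptions, since the abstract does not specify them.

```python
import numpy as np

def mahalanobis(x, y, cov_inv):
    # Mahalanobis distance between x and y given the inverse covariance matrix.
    d = x - y
    return float(np.sqrt(d @ cov_inv @ d))

def cosine_similarity(x, y):
    # Cosine of the angle between two decision vectors.
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def make_triples(elite, poor):
    """Build (poor, anchor, elite) training triples.

    For each poor solution, shortlist the two nearest elite solutions
    by Mahalanobis distance, break the tie by cosine similarity, and
    take the midpoint of the chosen pair as the anchor (an assumed
    combination rule, for illustration only).
    """
    pop = np.vstack([elite, poor])
    # Pseudo-inverse guards against a singular sample covariance.
    cov_inv = np.linalg.pinv(np.cov(pop, rowvar=False))
    triples = []
    for p in poor:
        dists = [mahalanobis(p, e, cov_inv) for e in elite]
        shortlist = np.argsort(dists)[:2]
        e = max((elite[i] for i in shortlist),
                key=lambda c: cosine_similarity(p, c))
        anchor = (p + e) / 2.0          # midpoint acts as the anchor
        triples.append((p, anchor, e))  # elite serves as the label
    return triples
```

Each triple then supplies a supervised signal for the network: the poor solution is the input, the elite solution its label, and the anchor stabilizes the learned gradient-descent-like direction.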