A prediction model for the mechanical properties of SUS316 stainless steel ultrathin strip driven by multimodal data mixing


Bibliographic Details
Main Authors: Zhenhua Wang, Pengzhan Wang, Yunfei Liu, Yuanming Liu, Tao Wang
Format: Article
Language: English
Published: Elsevier 2024-12-01
Series: Materials & Design
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S0264127524008797
Description
Summary: Constructing a mapping relationship among material preparation process, microstructure, and mechanical properties is a challenge in material research and development. In this work, a deep learning framework for multimodal data fusion is constructed that couples a multi-layer perceptron (MLP) and a residual neural network (ResNet) to predict the mechanical properties of SUS316 stainless steel ultrathin strips. Specifically, the MLP branch extracts features from the rolling process data, and the ResNet, augmented with a convolutional block attention module (CBAM), extracts microstructure features. Six models are constructed for comparison, accounting for factors such as unimodal versus multimodal networks and the input form of the image samples. The results show that the multimodal model that fuses the CBAM-enhanced ResNet with the MLP, taking both rolling process data and four types of microstructure image data as inputs, yields the most accurate predictions: R2, MAPE, RMSE, and MAE of 0.998, 0.727, 4.440, and 3.359, respectively. In addition, the proposed model is used to predict yield strength and elongation, and the R2 values of both models on the test set exceed 0.980, confirming that the multimodal data model has high prediction accuracy and good generalizability.
ISSN: 0264-1275
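The four evaluation metrics quoted in the summary (R2, MAPE, RMSE, MAE) follow standard definitions for regression models. A minimal sketch of how they might be computed is below; the function name and the percentage convention for MAPE are assumptions for illustration, not details taken from the article.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (R2, MAPE, RMSE, MAE) for a regression model's predictions.

    MAPE is expressed as a percentage and assumes no target value is zero.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residual = y_true - y_pred

    ss_res = np.sum(residual ** 2)                      # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)      # total sum of squares
    r2 = 1.0 - ss_res / ss_tot

    mape = 100.0 * np.mean(np.abs(residual / y_true))   # mean absolute percentage error
    rmse = np.sqrt(np.mean(residual ** 2))              # root mean squared error
    mae = np.mean(np.abs(residual))                     # mean absolute error
    return r2, mape, rmse, mae

# Example: tensile-strength-like values in MPa (illustrative numbers only)
r2, mape, rmse, mae = regression_metrics([100.0, 200.0, 300.0],
                                         [110.0, 190.0, 310.0])
```

A higher R2 (closer to 1) and lower MAPE, RMSE, and MAE indicate a better fit, which is how the abstract's comparison of the six candidate models should be read.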