Leveraging Segment Anything Model (SAM) for Weld Defect Detection in Industrial Ultrasonic B-Scan Images
Automated ultrasonic testing (AUT) is a critical tool for infrastructure evaluation in industries such as oil and gas, and, while skilled operators manually analyze complex AUT data, artificial intelligence (AI)-based methods show promise for automating interpretation. However, improving the reliability and effectiveness of these methods remains a significant challenge. This study employs the Segment Anything Model (SAM), a vision foundation model, to design an AI-assisted tool for weld defect detection in real-world ultrasonic B-scan images. It utilizes a proprietary dataset of B-scan images generated from AUT data collected during automated girth weld inspections of oil and gas pipelines, detecting a specific defect type: lack of fusion (LOF). The implementation includes integrating knowledge from the B-scan image context into the natural image-based SAM 1 and SAM 2 through a fully automated, promptable process. As part of designing a practical AI-assistant tool, the experiments involve applying both vanilla and low-rank adaptation (LoRA) fine-tuning techniques to the image encoder and mask decoder of different variants of both models, while keeping the prompt encoder unchanged. The results demonstrate that the utilized method achieves improved performance compared to a previous study on the same dataset.
Main Authors: | Amir-M. Naddaf-Sh, Vinay S. Baburao, Hassan Zargarzadeh |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Sensors |
Subjects: | automated ultrasonic testing; segment anything model; vision foundation model; defect detection; nondestructive testing |
Online Access: | https://www.mdpi.com/1424-8220/25/1/277 |
author | Amir-M. Naddaf-Sh; Vinay S. Baburao; Hassan Zargarzadeh |
collection | DOAJ |
description | Automated ultrasonic testing (AUT) is a critical tool for infrastructure evaluation in industries such as oil and gas, and, while skilled operators manually analyze complex AUT data, artificial intelligence (AI)-based methods show promise for automating interpretation. However, improving the reliability and effectiveness of these methods remains a significant challenge. This study employs the Segment Anything Model (SAM), a vision foundation model, to design an AI-assisted tool for weld defect detection in real-world ultrasonic B-scan images. It utilizes a proprietary dataset of B-scan images generated from AUT data collected during automated girth weld inspections of oil and gas pipelines, detecting a specific defect type: lack of fusion (LOF). The implementation includes integrating knowledge from the B-scan image context into the natural image-based SAM 1 and SAM 2 through a fully automated, promptable process. As part of designing a practical AI-assistant tool, the experiments involve applying both vanilla and low-rank adaptation (LoRA) fine-tuning techniques to the image encoder and mask decoder of different variants of both models, while keeping the prompt encoder unchanged. The results demonstrate that the utilized method achieves improved performance compared to a previous study on the same dataset. |
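The description outlines the study's fine-tuning strategy: low-rank adaptation (LoRA) applied to SAM's image encoder and mask decoder while the prompt encoder stays frozen. The following is a minimal NumPy sketch of the LoRA idea itself, not the authors' implementation; the class name `LoRALinear` and the `rank`/`alpha` values are illustrative assumptions:

```python
import numpy as np

class LoRALinear:
    """A frozen dense layer augmented with a trainable low-rank update,
    illustrating the LoRA technique the study applies to SAM's encoder
    and decoder layers. Sketch only; not the paper's actual code."""

    def __init__(self, weight: np.ndarray, rank: int = 4, alpha: float = 8.0):
        self.weight = weight  # frozen pretrained weight, shape (out_dim, in_dim)
        out_dim, in_dim = weight.shape
        rng = np.random.default_rng(0)
        # LoRA factors: A starts small-random, B starts at zero, so the
        # adapted layer initially reproduces the pretrained layer exactly.
        self.A = rng.normal(scale=0.01, size=(rank, in_dim))
        self.B = np.zeros((out_dim, rank))
        self.scale = alpha / rank

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # y = x W^T + scale * (x A^T) B^T ; only A and B would be trained,
        # which is what keeps LoRA fine-tuning parameter-efficient.
        return x @ self.weight.T + self.scale * (x @ self.A.T) @ self.B.T

# With B initialized to zero, the adapted layer matches the frozen one.
layer = LoRALinear(np.eye(3))
x = np.ones((1, 3))
print(layer(x))  # identical to the frozen layer's output at initialization
```

Because only the small `A` and `B` matrices receive gradients, this kind of adapter lets a large pretrained model such as SAM absorb B-scan-specific knowledge with a fraction of the trainable parameters of vanilla fine-tuning.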
format | Article |
id | doaj-art-443d1e9371f940569cdd83bab3a65bd2 |
institution | Kabale University |
issn | 1424-8220 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
doi | 10.3390/s25010277 |
citation | Sensors, vol. 25, no. 1, art. 277, 2025-01-01 |
affiliations | Amir-M. Naddaf-Sh: Phillip M. Drayer Electrical Engineering Department, Lamar University, Beaumont, TX 77705, USA; Vinay S. Baburao: CRC-Evans, Houston, TX 77066, USA; Hassan Zargarzadeh: Phillip M. Drayer Electrical Engineering Department, Lamar University, Beaumont, TX 77705, USA |
title | Leveraging Segment Anything Model (SAM) for Weld Defect Detection in Industrial Ultrasonic B-Scan Images |
topic | automated ultrasonic testing; segment anything model; vision foundation model; defect detection; nondestructive testing |
url | https://www.mdpi.com/1424-8220/25/1/277 |