Grasp Area Detection for 3D Object using Enhanced Dynamic Graph Convolutional Neural Network

Robots have become integral to modern society, taking over both complex and routine human tasks. Recent advancements in depth camera technology have propelled computer vision-based robotics into a prominent field of research. Many robotic tasks, such as picking up, carrying, and utilizing tools or objects, begin with an initial grasping step. Vision-based grasping requires the precise identification of grasp locations on objects, making the segmentation of objects into meaningful components a crucial stage in robotic grasping. In this paper, we present a system designed to detect the graspable parts of objects for a specific task. Recognizing that everyday household items are typically grasped at certain sections for carrying, we created a database of these objects and their corresponding graspable parts. Building on the success of the Dynamic Graph CNN (DGCNN) network in segmenting object components, we enhanced this network to detect the graspable areas of objects. The enhanced network was trained on the compiled database, and the visual results, along with the obtained Intersection over Union (IoU) metrics, demonstrate its success in detecting graspable regions. It achieved a grand mean IoU (gmIoU) of 92.57% across all classes, outperforming established networks such as PointNet++ in part segmentation on this dataset. Furthermore, statistical analysis using analysis of variance (ANOVA) and t-tests validates the superiority of our method.
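For readers unfamiliar with DGCNN, its core building block is the EdgeConv operation: each point gathers its k nearest neighbors in feature space (so the graph is rebuilt dynamically at every layer), forms an edge feature from each center/neighbor pair, passes it through a shared MLP, and max-pools over the neighborhood. The sketch below is a minimal NumPy illustration of one EdgeConv layer, not the paper's implementation; the layer width, k, and the random weights are placeholders.

```python
import numpy as np

def knn_indices(feats, k):
    """Indices of the k nearest neighbors of every point (self excluded)."""
    sq = np.sum(feats ** 2, axis=1)
    d2 = sq[:, None] - 2.0 * feats @ feats.T + sq[None, :]   # (N, N) squared distances
    return np.argsort(d2, axis=1)[:, 1:k + 1]                # drop self at index 0

def edge_conv(feats, k, weight):
    """One EdgeConv layer: edge features [x_i, x_j - x_i] -> shared MLP -> max-pool."""
    idx = knn_indices(feats, k)                  # (N, k) graph rebuilt in feature space
    neigh = feats[idx]                           # (N, k, F) neighbor features
    center = np.broadcast_to(feats[:, None, :], neigh.shape)
    edge = np.concatenate([center, neigh - center], axis=-1)  # (N, k, 2F)
    h = np.maximum(edge @ weight, 0.0)           # shared linear layer + ReLU
    return h.max(axis=1)                         # aggregate over the neighborhood

# Toy usage: 1024 points, xyz coordinates as the input features
rng = np.random.default_rng(0)
points = rng.normal(size=(1024, 3))
w = 0.1 * rng.normal(size=(6, 64))               # placeholder weights, 2F=6 -> 64
out = edge_conv(points, k=20, weight=w)          # (1024, 64) per-point features
```

Stacking several such layers and appending a per-point classification head yields a part-segmentation network of the kind the paper enhances.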

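The gmIoU figure quoted in the abstract is an average of per-shape part IoUs. A common convention (used, for example, in ShapeNet-part evaluation) is sketched below; the paper's exact averaging may differ, so `grand_mean_iou` is an assumption-laden illustration rather than the authors' evaluation code.

```python
import numpy as np

def shape_part_iou(pred, gt, num_parts):
    """Mean IoU over the part labels of a single point cloud."""
    ious = []
    for c in range(num_parts):
        inter = np.count_nonzero((pred == c) & (gt == c))
        union = np.count_nonzero((pred == c) | (gt == c))
        # A part absent from both prediction and ground truth is counted
        # as perfectly segmented, following the usual ShapeNet-part rule.
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

def grand_mean_iou(preds, gts, num_parts):
    """Average per-shape IoU over every sample, ignoring object category."""
    return float(np.mean([shape_part_iou(p, g, num_parts)
                          for p, g in zip(preds, gts)]))
```
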
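Likewise, the ANOVA and t-test validation can be reproduced in spirit by treating each test shape's IoU as one observation per method. The SciPy sketch below is purely illustrative: the variable names and the IoU values are hypothetical, not the paper's data.

```python
from scipy import stats

# Hypothetical per-shape IoU scores for three methods on the same test set
iou_ours = [0.93, 0.91, 0.95, 0.90, 0.94]
iou_pointnet2 = [0.88, 0.86, 0.90, 0.85, 0.89]
iou_dgcnn = [0.90, 0.88, 0.92, 0.87, 0.91]

# One-way ANOVA across the three methods
f_stat, p_anova = stats.f_oneway(iou_ours, iou_pointnet2, iou_dgcnn)

# Paired t-test: the same shapes are evaluated by both methods
t_stat, p_ttest = stats.ttest_rel(iou_ours, iou_pointnet2)

print(f"ANOVA p={p_anova:.4f}, paired t-test p={p_ttest:.4f}")
```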

Bibliographic Details
Main Authors: Haniye Merrikhi, Hossein Ebrahimnezhad (Computer Vision Res. Lab., Electrical Engineering Faculty, Sahand University of Technology, Tabriz, Iran)
Format: Article
Language: English
Published: Iran University of Science and Technology, 2024-11-01 (Vol. 20, Issue 4, pp. 134-146)
Series: Iranian Journal of Electrical and Electronic Engineering
ISSN: 1735-2827; 2383-3890
Subjects: robotic grasp; grasp area; point cloud; part segmentation; dynamic graph CNN
Online Access: http://ijeee.iust.ac.ir/article-1-3472-en.pdf