Multi-View 3D Modeling of Large Objects for Robotic Piercing on Unknown Free-Form Surfaces
Abstract
This paper presents a method for 3D modeling of unknown large objects using a depth camera and for executing hole-piercing operations at specified target locations on the modeled surface. To address the limitations of single-viewpoint perception, a mobile manipulator equipped with a depth camera adopts a multi-view point cloud acquisition strategy. The proposed method fuses multi-view point cloud data using Iterative Closest Point (ICP) registration, combined with outlier removal and smoothing, to generate an accurate and complete 3D model. To further improve segmentation and object isolation, DBSCAN clustering is applied. The experimental platform includes a 3D LiDAR mounted on the mobile base for mapping the environment, while point clouds from the depth camera are aligned to a global coordinate system. Experimental results show that the root mean square error (RMSE) of the 3D model of a box-shaped object is 7.84 mm. Based on the reconstructed model, automated piercing operations on two large objects have been demonstrated using the mobile manipulator. This multi-view 3D reconstruction framework enables vision-based automated reconstruction and machining of large, unknown surfaces.
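The core fusion step described in the abstract, point-to-point ICP registration between two point clouds, can be sketched as follows. This is a minimal NumPy illustration with brute-force nearest-neighbor correspondences, not the authors' implementation; the function names `best_fit_transform` and `icp` are illustrative, and a practical multi-view pipeline would use an optimized library and add the outlier-removal and smoothing stages mentioned above.

```python
import numpy as np

def best_fit_transform(src, dst):
    """SVD-based (Kabsch) rigid transform aligning corresponding points src -> dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=30):
    """Point-to-point ICP: alternate nearest-neighbor matching and rigid refit.

    Returns the accumulated rotation R, translation t, and the final RMSE
    (the same error metric the paper reports for its reconstructed model).
    """
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Brute-force nearest neighbors (O(n^2); fine for a small sketch)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        nn = dst[d2.argmin(axis=1)]
        R, t = best_fit_transform(cur, nn)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    rmse = np.sqrt(d2.min(axis=1).mean())
    return R_total, t_total, rmse
```

In a multi-view setting, each new depth-camera scan would be registered to the accumulated model this way before merging, with statistical outlier removal and DBSCAN-based segmentation applied to the fused cloud.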
Keywords: 3D modeling, Point cloud, Data fusion, Robot control

This work is licensed under a Creative Commons Attribution 4.0 International License.