Journal title: Multimedia Tools and Applications
The advent of 3D scene reconstruction frameworks using RGB-D cameras has enabled users to easily reconstruct their indoor environments in a virtual space and to experience immersive augmented reality with the reconstructed 3D scenes. Technically, early 3D scene reconstruction frameworks using RGB-D cameras are based on frame-to-model registration, which iteratively estimates the camera transformation parameters between the incoming depth frame and the previously reconstructed 3D model. However, by the nature of frame-to-model registration, the conventional framework suffers from an inherent drift problem caused by accumulated alignment error. In this paper, we propose a new 3D scene reconstruction framework with improved camera tracking to reduce drift. Two types of constraints are used in this work: a colorimetric constraint and a geometric constraint. For the colorimetric constraint, we assign higher weights to reliable feature correspondences obtained from the color image frames. For the geometric constraint, we compute consistent surface normal vectors for the noisy point cloud data. Experimental results show that the proposed framework reduces the absolute trajectory error, which quantifies the amount of drift, and yields a more consistent camera trajectory than the conventional framework.
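The two ingredients mentioned above, consistent surface normals for noisy points and weighted correspondences in registration, can be illustrated with a minimal sketch. The code below is not the paper's implementation; the function names, the brute-force neighbor search, the PCA-based normal estimation, and the linearized weighted point-to-plane step are illustrative assumptions chosen because they are the standard building blocks of frame-to-model registration.

```python
import numpy as np

def estimate_normals(points, k=8, viewpoint=np.zeros(3)):
    """Estimate a surface normal per point via PCA over k nearest
    neighbors, then orient each normal toward the viewpoint (e.g. the
    camera origin) so noisy data gets a consistent sign.
    NOTE: illustrative sketch; real systems use a spatial index,
    not this O(n^2) brute-force neighbor search."""
    normals = np.empty_like(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(len(points)):
        nbrs = points[np.argsort(d2[i])[:k]]
        cov = np.cov(nbrs.T)
        # eigenvector of the smallest eigenvalue approximates the normal
        _, v = np.linalg.eigh(cov)
        normal = v[:, 0]
        # flip so the normal points toward the camera (consistent orientation)
        if np.dot(normal, viewpoint - points[i]) < 0:
            normal = -normal
        normals[i] = normal
    return normals

def weighted_point_to_plane_step(src, dst, dst_normals, weights):
    """One linearized weighted point-to-plane registration step:
    solve for small rotation (rx, ry, rz) and translation (tx, ty, tz)
    minimizing sum_i w_i * (n_i . (R src_i + t - dst_i))^2.
    Higher weights let reliable correspondences dominate the fit."""
    A = np.empty((len(src), 6))
    b = np.empty(len(src))
    for i, (p, q, n) in enumerate(zip(src, dst, dst_normals)):
        A[i, :3] = np.cross(p, n)   # rotation part of the Jacobian
        A[i, 3:] = n                # translation part of the Jacobian
        b[i] = np.dot(n, q - p)
    w = np.sqrt(weights)            # weighted least squares via row scaling
    x, *_ = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)
    return x  # (rx, ry, rz, tx, ty, tz)

# Usage: recover a small vertical offset between a planar model and a frame.
xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
dst = np.stack([xs.ravel(), ys.ravel(), np.zeros(25)], axis=1)
normals = estimate_normals(dst, viewpoint=np.array([0.0, 0.0, 1.0]))
src = dst + np.array([0.0, 0.0, 0.05])
motion = weighted_point_to_plane_step(src, dst, normals, np.ones(25))
```

In one such step the recovered translation moves the incoming frame back onto the model (here, roughly tz = -0.05); in a full pipeline the step is iterated inside ICP, with the color-based weights re-estimated per iteration.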