A directional visual descriptor for large-scale coverage problems
Farinelli, Alessandro; Murino, Vittorio
2014-01-01
Abstract
Visual coverage of large-scale environments is a challenging problem with many practical applications, such as large-scale 3D reconstruction, search and rescue, and active video surveillance. In this paper, we consider a setting where mobile robots must acquire visual information using standard cameras while minimizing the associated movement costs. The main source of complexity in such a scenario is the lack of a priori knowledge of the 3D structure of the surrounding environment. To address this problem, we propose a novel descriptor for visual coverage that measures the orientation-dependent visual information of an area, based on a regular discretization of the 3D environment into voxels. We then use the proposed descriptor to define an autonomous cooperative exploration approach, which controls the robots' movements so as to maximize information accuracy while minimizing movement costs. We empirically evaluate our approach in a simulation scenario based on real data from large-scale 3D environments and on widely used robotic tools (such as ROS and Stage). Experimental results show that the proposed method significantly outperforms both a baseline random approach and an uncoordinated one, making it a valid approach for visual coverage in large-scale outdoor scenarios.
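Since the abstract only sketches the method, the following is a minimal, illustrative Python sketch of the two ideas it names: a per-voxel, orientation-dependent information descriptor, and a movement-cost-aware utility for choosing the next viewpoint. All names and parameters here (VoxelDescriptor, candidate_utility, VOXEL_SIZE, N_ORIENT_BINS, LAMBDA, the diminishing-returns gain, and the precomputed visibility set) are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Hypothetical parameters, chosen for illustration only; the paper's
# actual voxel size, orientation binning, and trade-off weight differ.
VOXEL_SIZE = 1.0        # metres per voxel edge
N_ORIENT_BINS = 8       # discretized viewing directions per voxel
LAMBDA = 0.5            # weight of movement cost vs. information gain


def _orient_bin(view_dir_xy):
    """Map a 2D viewing direction to one of N_ORIENT_BINS angular bins."""
    angle = np.arctan2(view_dir_xy[1], view_dir_xy[0]) % (2 * np.pi)
    return int(angle / (2 * np.pi / N_ORIENT_BINS)) % N_ORIENT_BINS


class VoxelDescriptor:
    """Per-voxel, orientation-dependent visual-information descriptor.

    Each voxel keeps one accumulator per viewing-direction bin, so the
    grid records not only whether a voxel was observed but from where.
    """
    def __init__(self):
        self.bins = np.zeros(N_ORIENT_BINS)

    def observe(self, view_dir_xy):
        """Accumulate an observation taken along the given direction."""
        self.bins[_orient_bin(view_dir_xy)] += 1.0

    def expected_gain(self, view_dir_xy):
        """Diminishing-returns gain for re-viewing the voxel from this side."""
        return 1.0 / (1.0 + self.bins[_orient_bin(view_dir_xy)])


def candidate_utility(grid, robot_pos, candidate_pos, visible_voxels):
    """Utility of moving to candidate_pos: info gain minus movement cost.

    `grid` maps integer voxel coordinates to VoxelDescriptor instances;
    `visible_voxels` is the (assumed precomputed) set of voxel keys the
    camera would see from candidate_pos.
    """
    gain = 0.0
    for key in visible_voxels:
        voxel_center = (np.array(key) + 0.5) * VOXEL_SIZE
        view_dir = voxel_center[:2] - candidate_pos[:2]
        gain += grid.setdefault(key, VoxelDescriptor()).expected_gain(view_dir)
    cost = np.linalg.norm(np.asarray(candidate_pos) - np.asarray(robot_pos))
    return gain - LAMBDA * cost


# Example: pick the best of several candidate viewpoints.
grid = {}
candidates = [np.array([5.0, 0.0, 1.5]), np.array([0.0, 5.0, 1.5])]
visible = {(4, 0, 1), (5, 1, 1)}  # assumed output of a visibility check
best = max(candidates,
           key=lambda c: candidate_utility(grid, np.zeros(3), c, visible))
```

The diminishing-returns gain makes already well-observed directions of a voxel less attractive, which is one simple way a planner of this kind can be pushed toward covering areas from viewpoints that have not yet contributed information, while the LAMBDA term discourages long detours.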