
Directional Visual Descriptors and Multirobot Strategies for Large-Scale Coverage Problems

Farinelli, Alessandro; Murino, Vittorio
2016-01-01

Abstract

Visual coverage of large-scale environments is a key issue for several important application scenarios, such as large-scale three-dimensional (3D) reconstruction, search and rescue, and active video surveillance. In this paper, we consider a setting where a group of robots, equipped with standard cameras, must fully cover a surrounding area while minimizing the associated movement costs. To address this problem, we use a descriptor for visual coverage that encodes a measure of the visual information contained in an area, based on a regular discretization of the environment into voxels. In particular, our measure of coverage is related to how much of each voxel's volume is "penetrated" by the bundle of rays projected from the camera center. Crucially, our descriptor explicitly encodes the viewing direction from which each voxel is observed, which is a key element in measuring the coverage of visual sensors. Next, we propose an autonomous cooperative exploration approach to control robot movements. Specifically, based on our visual coverage descriptor, we define a utility function that encodes the value of future moves for the robots, and we select the next move for a robot by greedily maximizing this utility function. Finally, we empirically evaluate our approach in a simulation scenario based on real visual data and on widely used robotic tools (such as ROS and Stage). Experimental results show that the proposed method outperforms baseline approaches (i.e., random, semirandom, and uncoordinated) and is thus a valid solution for visual coverage in large-scale outdoor scenarios.
Cooperative exploration, Multi-robot systems, Three-dimensional (3-D) reconstruction
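
To make the two technical components summarized in the abstract concrete, the directional voxel coverage descriptor and the greedy utility-maximizing move selection, the following is a minimal single-robot sketch in Python. It is not the authors' implementation: the 2-D grid world, the absence of an occlusion test, the eight viewing-direction bins, the linear distance fall-off for ray penetration, the move-cost weight, and all names (`VoxelCoverage`, `simulate_view`, `utility`, `greedy_next_move`) are illustrative assumptions.

```python
"""Minimal sketch of the abstract's two core ideas under simplifying
assumptions: a 2-D voxel grid, no occlusion test, eight viewing-direction
bins, and linear distance fall-off for ray penetration. All names and
parameters are illustrative, not the authors' implementation."""
import math

N_DIRECTION_BINS = 8  # assumed discretization of viewing directions


def _direction_bin(view_angle):
    """Map an angle in radians to one of N_DIRECTION_BINS bins."""
    frac = (view_angle % (2.0 * math.pi)) / (2.0 * math.pi)
    return min(int(frac * N_DIRECTION_BINS), N_DIRECTION_BINS - 1)


class VoxelCoverage:
    """Per-voxel record: accumulated coverage per viewing-direction bin.

    This mirrors the descriptor's key property: a voxel observed from a
    new direction still yields new information."""

    def __init__(self):
        self.by_direction = [0.0] * N_DIRECTION_BINS

    def add_observation(self, view_angle, penetration):
        """Accumulate coverage (saturating at 1.0) in one direction bin."""
        b = _direction_bin(view_angle)
        self.by_direction[b] = min(1.0, self.by_direction[b] + penetration)

    def gain(self, view_angle, penetration):
        """Coverage a new observation from view_angle would still add."""
        b = _direction_bin(view_angle)
        return min(1.0, self.by_direction[b] + penetration) - self.by_direction[b]


def simulate_view(pose, voxels, max_range=5.0):
    """Stand-in for the ray-bundle projection: yield
    (voxel, view_angle, penetration) for every voxel within max_range,
    with penetration decreasing linearly with distance (no occlusions)."""
    x, y = pose
    for (vx, vy) in voxels:
        d = math.hypot(vx - x, vy - y)
        if 0.0 < d <= max_range:
            yield (vx, vy), math.atan2(vy - y, vx - x), 1.0 - d / max_range


def utility(pose, current_pose, coverage, voxels, move_cost_weight=0.1):
    """Expected new coverage at pose, discounted by the cost of moving there."""
    info_gain = sum(coverage[v].gain(angle, pen)
                    for v, angle, pen in simulate_view(pose, voxels))
    return info_gain - move_cost_weight * math.dist(pose, current_pose)


def greedy_next_move(current_pose, candidates, coverage, voxels):
    """Greedy maximization: choose the candidate pose of highest utility."""
    return max(candidates, key=lambda p: utility(p, current_pose, coverage, voxels))


if __name__ == "__main__":
    # Toy world: a 10x10 grid of voxel centers, one robot at the origin.
    voxels = [(i, j) for i in range(10) for j in range(10)]
    coverage = {v: VoxelCoverage() for v in voxels}
    pose = (0.0, 0.0)
    for _ in range(5):
        # Record what the robot sees from its current pose ...
        for v, angle, pen in simulate_view(pose, voxels):
            coverage[v].add_observation(angle, pen)
        # ... then greedily pick the next pose among eight nearby candidates.
        candidates = [(pose[0] + dx, pose[1] + dy)
                      for dx in (-2.0, 0.0, 2.0) for dy in (-2.0, 0.0, 2.0)
                      if (dx, dy) != (0.0, 0.0)]
        pose = greedy_next_move(pose, candidates, coverage, voxels)
        print("next pose:", pose)
```

Because coverage is binned by viewing direction, re-observing an already-seen voxel from a new viewpoint still counts as information gain, which is exactly what the abstract's directional descriptor is designed to capture; a multirobot version would additionally share the coverage map among the robots.
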
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/933104
Citations
  • Scopus 2