Abstract
In field or indoor environments it is often not possible to provide robots or robotic teams with detailed a priori environment and task models. In such environments, robots must build a dimensionally accurate geometric model by moving around and scanning their surroundings with onboard sensors. Robotic teams face the further need to cooperatively share the acquired data. However, uncertainty in robot locations and sensing limitations/occlusions make this difficult. A novel information-based methodology, built on iterative sensor planning and sensor redundancy, is presented to construct a geometrically consistent dimensional map of the environment and task. The proposed algorithm efficiently repositions the system's sensing agents using an information-theoretic approach and fuses sensory information using physical models to yield a geometrically consistent environment map. This is achieved by utilizing a metric derived from Shannon's information theory to plan the robots' visual exploration strategy, determining optimal sensing poses for the agent(s) mapping a highly unstructured environment. The map is then distributed among the agents (when robotic teams are considered) using an information-based relevant data reduction scheme. This methodology is unique in its application of information theory to enhance the performance of cooperative sensing robot teams. It may be used by multiple distributed and decentralized sensing agents for efficient and accurate environment modeling. The algorithm makes no assumptions about the structure of the environment; hence it is robust to robot failure, since the environment model being built does not depend on any single agent's frame. It accounts for sensing uncertainty, robot motion uncertainty, environment model uncertainty, and other critical parameters, allowing regions of higher interest to receive more attention from the agent(s).
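The Shannon-entropy-based view planning mentioned above can be illustrated with a minimal sketch. This is not the paper's algorithm; the occupancy-grid representation, the function names, and the candidate-pose/visibility interface are all assumptions introduced for illustration. The idea is simply that cells whose occupancy estimate is near 0.5 carry the most uncertainty, so a pose that observes many such cells promises the largest information gain.

```python
import math

def cell_entropy(p):
    """Shannon entropy (bits) of a binary occupancy estimate p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0  # fully decided cells carry no uncertainty
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def expected_info_gain(grid, visible_cells):
    """Total current uncertainty over the cells a candidate pose would observe.

    Cells near p = 0.5 (unknown) contribute the most; cells already
    mapped as free or occupied contribute little.
    """
    return sum(cell_entropy(grid[c]) for c in visible_cells)

def next_best_pose(grid, candidate_poses, visibility):
    """Pick the candidate pose that maximizes expected information gain.

    `visibility(pose)` returns the grid cells that pose would observe;
    how visibility is computed (ray casting, frustum checks, ...) is
    left out of this sketch.
    """
    return max(candidate_poses,
               key=lambda q: expected_info_gain(grid, visibility(q)))
```

For example, given a grid with two unknown cells (p = 0.5) and one nearly decided cell (p = 0.95), a pose viewing the two unknown cells is preferred over one viewing the decided cell.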
The methodology works with mobile robots (or vehicles) equipped with eye-in-hand vision sensors that provide 3-D or 2.5-D information about the environment. The presented methods are particularly well suited to unstructured environments, where sensor uncertainty is significant. Simulation and experimental results demonstrate the effectiveness of the approach. A cooperative multi-agent sensing architecture is presented and applied to the mapping of a cliff surface using the JPL Sample Return Rover (SRR). The information-based methods are shown to significantly improve mapping efficiency over conventional ones, with the potential to reduce the cost of autonomous mobile systems.
Keywords: Visual Mapping, Cooperative Robots, Information Theory, Unstructured Environments, Data Fusion.