Different working principles are often assumed when different visual behaviors are implemented in an agent, essentially because the physical interaction between the behavior and the environment is not studied in depth. This paper shows how apparently different visual behaviors share common theoretical principles in their working mechanisms. In particular, properties of the navigation vector field they compute in the environment provide a basis for explaining visual learning, guidance, topological navigation, subgoal placement, obstacle avoidance, and navigation enhancement. Handling the mathematics of a vector field requires robust tools, and techniques borrowed from the Computer Vision literature provide them. All behaviors described here have been tested on real robots; research on topological navigation and subgoal placement is ongoing.
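The abstract does not reproduce the paper's vector-field formulation, so as a rough illustration of the general idea only, the following sketch uses the classic artificial potential field construction: at every position the agent reads off a navigation vector that is the sum of an attraction toward the goal and a repulsion from nearby obstacles. All names, gains (`k_att`, `k_rep`), and the influence radius `d0` are assumptions for this sketch, not the paper's method.

```python
import numpy as np

def navigation_vector(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Navigation vector at `pos`: goal attraction plus obstacle repulsion.

    A minimal potential-field sketch; the paper's actual field may be
    derived differently (e.g. from visual measurements).
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    v = k_att * (goal - pos)  # attractive component, points at the goal
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:  # repel only inside the influence radius d0
            v += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return v

# A robot following the field simply steps along the local vector:
step = navigation_vector([0.0, 0.0], [10.0, 0.0], [[5.0, 0.5]])
```

With no obstacle inside the influence radius the field reduces to pure goal attraction; near an obstacle the repulsive term dominates and bends trajectories around it, which is one way a single field can subsume both guidance and obstacle avoidance.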