Knowledge propagation in a distributed omnidirectional vision system

Menegatti, Emanuele; Pagello, Enrico
2007

Abstract

In this paper an omnidirectional Distributed Vision System (DVS) is presented. The DVS learns to navigate a mobile robot in its working environment without any prior knowledge of the cameras' calibration parameters or of the robot's control law, an important feature when applying the system to existing camera networks. The DVS consists of several Vision Agents (VAs), each implemented with an omnidirectional camera. The main contribution of the work is the explicit distribution of the acquired knowledge within the DVS. The aim is to develop a fully autonomous system able not only to learn control policies on-line, but also to cope with a changing environment and to improve its performance over its lifetime. Once initial knowledge has been acquired by one Vision Agent, it can be transferred to other Vision Agents so that what has already been learned is exploited. We first investigate how a Vision Agent learns this knowledge, then evaluate its performance and test knowledge propagation across three different VAs. Experiments using both a system simulator and a prototype of the Distributed Vision System in a real environment demonstrate the feasibility of the approach.
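
The abstract does not detail the learning or propagation algorithms. As a purely illustrative sketch (the class names, the tabular policy representation, and the blending rule below are assumptions made for exposition, not the authors' method), one can picture each Vision Agent learning a mapping from its own image-space observations to robot commands, with knowledge propagation realized as merging one agent's learned table into another's:

```python
# Illustrative sketch only: names, the tabular policy, and the merge rule are
# assumptions for exposition; the paper's actual algorithms are not given here.
import random
from collections import defaultdict

class VisionAgent:
    """A camera node that learns a mapping from image-space observations to
    robot commands, without knowing camera calibration or robot kinematics."""

    def __init__(self, name, actions=("forward", "left", "right", "stop")):
        self.name = name
        self.actions = actions
        # value[observation][action] -> estimated usefulness of that action
        self.value = defaultdict(lambda: {a: 0.0 for a in actions})

    def choose_action(self, observation, epsilon=0.1):
        """Epsilon-greedy choice over the learned table (on-line learning)."""
        if random.random() < epsilon or observation not in self.value:
            return random.choice(self.actions)
        return max(self.value[observation], key=self.value[observation].get)

    def update(self, observation, action, reward, lr=0.2):
        """Simple on-line update of the stored value for (observation, action)."""
        old = self.value[observation][action]
        self.value[observation][action] = old + lr * (reward - old)

    def transfer_knowledge_to(self, other, weight=0.5):
        """Propagate what this agent has learned into another agent's table,
        blending with whatever the receiving agent already knows."""
        for obs, action_values in self.value.items():
            for action, v in action_values.items():
                other.value[obs][action] = (
                    (1.0 - weight) * other.value[obs][action] + weight * v
                )

# Usage: one agent acquires initial knowledge, then seeds two others.
va1, va2, va3 = VisionAgent("VA1"), VisionAgent("VA2"), VisionAgent("VA3")
va1.update(observation=("cell_3", "heading_N"), action="forward", reward=1.0)
va1.transfer_knowledge_to(va2)
va1.transfer_knowledge_to(va3)
```

In this toy setup, the weighted blend lets a receiving agent keep any knowledge it has already acquired locally while still benefiting from what the first agent learned; how the real system represents and reconciles transferred knowledge is described in the paper itself.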
Use this identifier to cite or link to this item: https://hdl.handle.net/11577/2446758