Temporal Bag-of-Words - A Generative Model for Visual Place Recognition using Temporal Integration
Abstract
This paper presents an original approach to visual place recognition and categorization. The simple idea behind our model is that, for a mobile robot, using the previous frames, and not only the current one, can ease recognition. We present an algorithm for integrating the responses obtained from these successive images. In this perspective, scenes are encoded by a global signature (the context of a scene) and then classified in an unsupervised way with a Self-Organizing Map. The resulting prototypes form a visual dictionary that roughly describes the environment. A place can then be learnt and represented through the frequencies of these prototypes. This approach is a variant of the Bag-of-Words approaches used in scene classification, with the major difference that the "words" are not taken from the same image but from temporally ordered images. Temporal integration allows us to use Bag-of-Words together with a global characterization of scenes. We evaluate our system on the COLD database with a place recognition task and a place categorization task. Despite its simplicity, thanks to the temporal integration of visual cues, our system achieves state-of-the-art performance.
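The pipeline summarized above (a global signature per frame, a Self-Organizing Map that forms the visual dictionary, and a histogram of prototypes accumulated over recent frames) could look roughly like the following. This is a minimal sketch, not the authors' implementation: the SOM training schedule, the grid size, the window length, and the nearest-histogram matching rule are illustrative assumptions.

```python
# Minimal sketch of the temporal Bag-of-Words idea described in the abstract.
# Assumptions (not from the paper): the global signature is a generic feature
# vector per frame; grid size, window length and learning schedule are placeholders.
import numpy as np

class SimpleSOM:
    """Tiny Self-Organizing Map used to build the visual dictionary."""
    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(rows, cols, dim))
        self.rows, self.cols = rows, cols

    def winner(self, x):
        # Flattened index of the best-matching prototype ("word").
        d = np.linalg.norm(self.weights - x, axis=2)
        return int(np.argmin(d))

    def train(self, data, epochs=10, lr0=0.5, sigma0=2.0):
        grid = np.stack(np.meshgrid(np.arange(self.rows),
                                    np.arange(self.cols), indexing="ij"), axis=-1)
        n, t = epochs * len(data), 0
        for _ in range(epochs):
            for x in data:
                lr = lr0 * np.exp(-t / n)
                sigma = sigma0 * np.exp(-t / n)
                w = self.winner(x)
                wi = np.array([w // self.cols, w % self.cols])
                # Gaussian neighbourhood pulls nearby prototypes toward x.
                h = np.exp(-np.sum((grid - wi) ** 2, axis=2) / (2 * sigma ** 2))
                self.weights += lr * h[..., None] * (x - self.weights)
                t += 1

def temporal_bag_of_words(signatures, som, window=20):
    """Histogram of SOM prototypes over the last `window` frames."""
    words = [som.winner(s) for s in signatures[-window:]]
    hist = np.bincount(words, minlength=som.rows * som.cols).astype(float)
    return hist / max(hist.sum(), 1.0)

# Usage (illustrative): learn one histogram per place, then recognise the
# current place by nearest histogram.
#   place_models[name] = temporal_bag_of_words(training_signatures[name], som)
#   query = temporal_bag_of_words(current_signatures, som)
#   best = min(place_models, key=lambda k: np.linalg.norm(place_models[k] - query))
```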