Title: Dynamic neural fields and manifold learning for audiovisual fusion in psychophysics and robotics
Abstract: For reasons of interactivity and cost-efficiency, both biological and artificial agents (e.g., robots) usually rely on sets of complementary sensors. Each sensor samples information from only a subset of the environment, and both this subset and the precision of the signals vary over time with the agent-environment configuration. Agents must therefore perform multimodal fusion, selecting and filtering relevant information by contrasting the shortcomings and redundancies of the different modalities. To perform this fusion, we propose to use dynamic neural fields (DNF), a training-free bio-inspired model of competition among topologically encoded information. We then combine it with a classical off-the-shelf manifold learning algorithm, and propose a new adaptation of DNF to irregular multimodal topologies. This coupling exhibits interesting properties, promising reliable decisions enhanced by the selection and attentional capabilities of DNF. In particular, applying our method to audiovisual datasets (from either psychophysics or robotics) yields merged percepts that exploit the spatially dependent precision of each modality, and shows robustness to irrelevant features.
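To illustrate the competition mechanism the abstract refers to, here is a minimal sketch of a 1-D dynamic neural field of the Amari type, assuming local excitation with global inhibition, a Heaviside output nonlinearity, and Euler integration; all parameter values are illustrative and not taken from this work. Two Gaussian inputs stand in for, e.g., two modality-specific stimuli, and the field settles on a single bump at the stronger one.

```python
import numpy as np

n = 100                               # number of field positions
x = np.arange(n)
h = -2.0                              # resting level (below threshold)
tau, dt, steps = 10.0, 1.0, 200      # time constant, Euler step, iterations

def kernel(d, a_exc=2.0, s_exc=3.0, g_inh=0.3):
    """Lateral interaction: local Gaussian excitation, global inhibition."""
    return a_exc * np.exp(-d**2 / (2 * s_exc**2)) - g_inh

W = kernel(x[:, None] - x[None, :])   # full interaction matrix

# Two Gaussian stimuli (illustrative); the stronger one at x=30 should win.
inp = 5.0 * np.exp(-(x - 30)**2 / 18.0) + 3.0 * np.exp(-(x - 70)**2 / 18.0)

u = np.full(n, h)                     # field activation
for _ in range(steps):
    f = (u > 0).astype(float)         # Heaviside output nonlinearity
    u += dt / tau * (-u + h + inp + W @ f)

peak = int(np.argmax(u))              # single bump near the stronger stimulus
```

After convergence, the field is above threshold only around the stronger stimulus, while the weaker one is suppressed by the global inhibition: this is the selection behavior that underpins the fusion scheme described above.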