The goal of ALOOF is to equip autonomous systems with the ability to learn the meaning of objects, i.e. their perceptual and semantic properties and functionalities, from externalized knowledge sources accessible through the Web.
To achieve this, we will provide a mechanism for translating between the representations robots use in their real-world experience and those found on the Web. Our translation mechanism is a meta-modal representation (i.e. a representation which contains and structures representations from other modalities), composed of meta-modal entities and relations between them.
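The idea of a meta-modal representation can be sketched as a small graph structure in which each entity bundles modality-specific views (e.g. a visual descriptor from the robot's perception and a textual description from a Web resource) and named relations link entities together. The class and field names below are illustrative stand-ins, not the project's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class MetaModalEntity:
    """An entity that contains and structures representations from other modalities."""
    name: str
    # modality name -> modality-specific representation,
    # e.g. "vision": a feature vector, "text": a Web-derived description
    modalities: dict = field(default_factory=dict)

@dataclass
class MetaModalGraph:
    """Meta-modal entities plus relations between them."""
    entities: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (subject, predicate, object) triples

    def add_entity(self, entity):
        self.entities[entity.name] = entity

    def relate(self, subj, pred, obj):
        self.relations.append((subj, pred, obj))

    def query(self, pred):
        # all (subject, object) pairs linked by the given predicate
        return [(s, o) for s, p, o in self.relations if p == pred]

# Example: one entity grounded in both robot perception and Web text
mug = MetaModalEntity("mug", {"vision": [0.12, 0.88, 0.45],
                              "text": "a cup with a handle"})
graph = MetaModalGraph()
graph.add_entity(mug)
graph.add_entity(MetaModalEntity("kitchen"))
graph.relate("mug", "storedIn", "kitchen")
graph.relate("mug", "usedFor", "drinking")
print(graph.query("storedIn"))  # [('mug', 'kitchen')]
```

Representing relations as subject–predicate–object triples keeps the sketch close to the form in which semantic knowledge is typically published on the Web.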
ALOOF will develop a novel meta-modal representation that supports the perception of objects while encoding their associated semantics in a form that autonomous systems can reason about. This representation will be coupled with technologies that automatically populate it by learning about previously unknown objects from Web resources. The result will be a robot that can autonomously acquire object knowledge in domestic settings, including objects’ names, class properties, appearance and shape, storage locations, and typical usages/functions, considerably advancing the ability of robots in real-life situations to deal with uncertainty and to learn from errors and incomplete sensor data.
Grounding Web-based knowledge in a semantic object map supports autonomy through the unsupervised acquisition of object knowledge when it is needed, providing adaptability to novel situations. Obtaining missing knowledge from the Web will enable robots to operate autonomously in the real world, where the knowledge they require can never be fully known in advance. Object perception will provide the scene understanding, context understanding, and planning capabilities that allow the robot to react and adapt to change by learning continuously as appropriate.
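The on-demand acquisition described above can be sketched as a cache-miss pattern: when the robot's semantic object map lacks a property of an object, it falls back to an external knowledge source and grounds the retrieved fact in the map. The lookup table standing in for a Web knowledge source, and the property names, are hypothetical placeholders for real Web queries:

```python
# Stand-in for an externalized Web knowledge source (hypothetical entries).
WEB_KNOWLEDGE = {
    ("cereal box", "storage_location"): "pantry",
    ("cereal box", "function"): "contains breakfast cereal",
}

class SemanticObjectMap:
    """Object facts the robot has grounded so far, keyed by (object, property)."""

    def __init__(self):
        self.facts = {}

    def get(self, obj, prop):
        key = (obj, prop)
        if key not in self.facts:
            # Missing knowledge: acquire it from the Web source when needed,
            # then ground the retrieved fact in the map for future use.
            value = WEB_KNOWLEDGE.get(key)
            if value is not None:
                self.facts[key] = value
        return self.facts.get(key)

som = SemanticObjectMap()
# First access triggers acquisition; later accesses are answered from the map.
print(som.get("cereal box", "storage_location"))  # pantry
print(som.get("cereal box", "color"))             # None: not acquirable here
```

In a deployed system the dictionary lookup would be replaced by queries against live Web resources, but the control flow, detecting missing knowledge and filling the gap autonomously, is the point of the sketch.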