The study of the cognitive processes involved in navigation tasks, and their implementation on mobile robots, naturally poses the problem of how a robot can learn to move in its environment. Our robot must behave like a rat exploring its environment in order to discover interesting places (those providing a reward signal). We model several structures involved in this task (the hippocampus, the entorhinal cortex, the prefrontal cortex, and the basal ganglia), connected to a simplified model of the visual system and a system for processing idiothetic information. The task takes the following constraints into account: the main information comes from vision; there is no ad hoc marking of the environment (no beacons or traces on the ground); no predefined metric map is given to the system; and, more generally, no a priori information about the environment is available. The resulting models are implemented on different mobile robots. On the one hand, they allow us to replicate experiments carried out on rats and to compare the results with those observed in animals; on the other hand, their performance is evaluated on more complex navigation scenarios, in indoor (multi-room) and outdoor (both on- and off-road, in collaboration with the VEDECOM institute) environments.
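To make the "vision-only, no metric map" constraint concrete, the following is a minimal sketch of place recognition from visual landmark bearings alone, in the spirit of hippocampal place-cell models. The landmark identifiers, the bearing representation, and the similarity rule are all illustrative assumptions, not the lab's actual architecture.

```python
import math

class PlaceCell:
    """Toy place cell: stores the landmark azimuths seen from one
    location (a visual signature) and fires according to how well the
    current view matches it. No metric map or ad hoc marking is used;
    landmark names like "tree" are hypothetical visual-system outputs."""

    def __init__(self, signature):
        # signature: dict landmark_id -> azimuth (radians) at the learned site
        self.signature = dict(signature)

    def activity(self, view):
        # view: dict landmark_id -> currently perceived azimuth
        common = set(self.signature) & set(view)
        if not common:
            return 0.0
        # mean absolute angular mismatch, wrapped to [-pi, pi]
        err = sum(abs(math.remainder(view[k] - self.signature[k], 2 * math.pi))
                  for k in common) / len(common)
        # map mismatch to an activity level in [0, 1]
        return max(0.0, 1.0 - err / math.pi)

# Learn a signature at one place, then compare a nearby and a distant view.
cell = PlaceCell({"tree": 0.1, "door": 1.5, "rock": -2.0})
near = cell.activity({"tree": 0.15, "door": 1.45, "rock": -2.05})
far = cell.activity({"tree": 1.2, "door": -0.5, "rock": 0.9})
```

Here `near` is close to 1 and `far` much lower: the cell fires strongly only in the vicinity of the place where its signature was learned, which is enough to anchor navigation without any predefined map.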
However, reasoning about the cognition of an isolated agent while ignoring the internal state of the system, and the influence of social interactions on it, would be a serious mistake. Indeed, beyond their communication functions, emotions can play a key role at several levels in the operation of a robotic control architecture. In order to increase the autonomy of the robotic systems we develop, our research also focuses on the mechanisms allowing a robot to self-evaluate its behavior, via a neural mechanism that evaluates the prediction of its sensations and/or of the sensorimotor elements used by each navigation strategy. This continuous evaluation of the robot's internal state (novelty, progress, or regression) then leads to the emergence of an emotional state (for example, frustration). Different experiments conducted on real robots have shown how the neural activity characterizing the internal state of the robot (motivational system, output of the self-evaluation system) can simultaneously neuromodulate perception and the selection of navigation strategies, as well as facilitate the learning of robotic behaviors and human-robot interaction.
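The self-evaluation loop described above can be sketched as follows: the robot tracks its sensory prediction error over two timescales, reads progress or regression from their difference, integrates sustained regression into a frustration signal, and lets that signal neuromodulate the gain of the competing navigation strategies. The time constants, the frustration update, and the gating rule are illustrative assumptions, not the published models.

```python
class SelfEvaluation:
    """Toy self-evaluation: short- and long-term running averages of a
    sensory prediction error. Falling error (progress) relaxes the
    frustration signal; rising or stagnant error (regression) builds it
    up. Time constants are illustrative, not taken from the original work."""

    def __init__(self, fast=0.5, slow=0.05):
        self.fast_avg = 0.0   # short-term average of prediction error
        self.slow_avg = 0.0   # long-term average of prediction error
        self.fast, self.slow = fast, slow
        self.frustration = 0.0

    def update(self, prediction_error):
        self.fast_avg += self.fast * (prediction_error - self.fast_avg)
        self.slow_avg += self.slow * (prediction_error - self.slow_avg)
        progress = self.slow_avg - self.fast_avg  # > 0: error is decreasing
        # frustration integrates sustained lack of progress, clipped to [0, 1]
        self.frustration = min(1.0, max(0.0, self.frustration - progress))
        return progress

def gate_strategies(values, frustration):
    """Neuromodulation sketch (hypothetical rule): frustration lowers the
    gain of the currently dominant strategy, favoring a switch."""
    best = max(range(len(values)), key=values.__getitem__)
    return [v * (1.0 - frustration) if i == best else v
            for i, v in enumerate(values)]

# A persistent prediction error drives frustration up...
se = SelfEvaluation()
for _ in range(20):
    se.update(1.0)
high = se.frustration
# ...and a return to accurate predictions lets it decay.
for _ in range(50):
    se.update(0.0)
low = se.frustration
```

With full frustration, `gate_strategies([0.9, 0.4], 1.0)` suppresses the failing dominant strategy so the alternative wins the selection, which is one simple way an emotional signal can bias strategy selection without any explicit arbitration module.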