1) Large non-verbal Language Models (LnvLM) for human-agent interaction and cooperation, at AIR-Lab, Tilburg University (with Giacomo Spigler).
When we think of communication, our first thought is language. Following this intuition, a vast amount of research in robotics, and in human-robot interaction (HRI) in particular, has been dedicated to modeling spoken language. Yet a significant portion of the information exchanged in human-human communication is non-verbal. Despite its importance, non-verbal behavior in current HRI systems is either manually pre-programmed or controlled by a human operator via teleoperation.
The objective of the project is to build on recent advances in language modeling and artificial intelligence to create large language models for non-verbal communication, covering gestures, eye gaze, facial expressions, proxemics, and related channels. The resulting models will be deployed on a range of robot platforms to test the ecological validity of the approach. Part of the validation will involve experiments with Reinforcement Learning from Human Feedback (RLHF), using the trained embeddings of non-verbal communication both to specify tasks and to extract reward signals.
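To make the modeling direction concrete, below is a minimal sketch (in PyTorch) of the two ingredients described above: an autoregressive "language model" over discretized non-verbal behavior tokens, and an embedding-based reward signal of the kind RLHF-style fine-tuning could use. Everything here is an illustrative assumption rather than the project's actual design: the tokenization into a discrete codebook, the names NonVerbalLM and embedding_reward, the model sizes, and the cosine-similarity reward.

```python
# Illustrative sketch only: non-verbal behavior (gesture/gaze/expression) is
# assumed to be quantized into per-frame tokens from a discrete codebook; a
# causal transformer models the token stream, and cosine similarity between
# pooled embeddings of generated and reference behavior serves as a reward.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 512  # assumed size of the discrete non-verbal codebook
MAX_LEN = 128     # assumed maximum sequence length, in frames


class NonVerbalLM(nn.Module):
    """Causal transformer over discretized non-verbal behavior tokens."""

    def __init__(self, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE, d_model)
        self.pos_emb = nn.Embedding(MAX_LEN, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB_SIZE)

    def embed(self, tokens):
        """Contextual embeddings for a (batch, time) tensor of token ids."""
        t = tokens.shape[1]
        x = self.tok_emb(tokens) + self.pos_emb(
            torch.arange(t, device=tokens.device)
        )
        # Additive causal mask: -inf strictly above the diagonal.
        mask = torch.triu(
            torch.full((t, t), float("-inf"), device=tokens.device), diagonal=1
        )
        return self.encoder(x, mask=mask)  # (batch, time, d_model)

    def forward(self, tokens):
        """Next-token logits, as in a standard language model."""
        return self.head(self.embed(tokens))


def embedding_reward(model, generated, reference):
    """Scalar reward per sequence: cosine similarity between mean-pooled
    embeddings of a generated behavior clip and a reference clip."""
    with torch.no_grad():
        g = model.embed(generated).mean(dim=1)  # (batch, d_model)
        r = model.embed(reference).mean(dim=1)
    return F.cosine_similarity(g, r, dim=-1)    # (batch,)


if __name__ == "__main__":
    model = NonVerbalLM()
    seq = torch.randint(0, VOCAB_SIZE, (2, 32))  # two dummy behavior clips
    logits = model(seq)
    # Standard next-token prediction loss for pre-training.
    loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, VOCAB_SIZE), seq[:, 1:].reshape(-1)
    )
    reward = embedding_reward(model, seq, seq)   # ~1.0 for identical inputs
    print(loss.item(), reward.tolist())
```

The design choice mirrored here is that the same pre-trained model serves double duty: its next-token head generates non-verbal behavior, while its internal embeddings ground the reward signal, so human feedback can be expressed as a target behavior rather than a hand-designed reward function.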