This work is a first step towards an integration
of multimodality, with the aim of making efficient
use of both human-like and non-human-like
feedback modalities in order to optimize proactive information
retrieval from task-related Human-Robot
Interaction (HRI) in human environments. The presented
approach combines the human-like modalities of
speech and emotional facial mimicry with non-human-like
modalities. The proposed non-human-like modalities
are a screen that displays the robot's retrieved knowledge
to the human, and a pointer mounted above the
robot's head for indicating directions and referring to objects
in the shared visual space, as an equivalent to arm
and hand gestures. Initially, pre-interaction feedback
is explored in an experiment investigating different approach
behaviors in order to find socially acceptable
trajectories to increase the success of interactions and
thus the efficiency of information retrieval. Secondly, pre-evaluated
human-like modalities are introduced. First
results of a multimodal feedback study are presented
in the context of the IURO project, where a robot asks
for directions to a predefined goal location.