Sharing verbal tasks with robots may facilitate our speech
Can robots make it easier for humans to produce spoken language?
According to a new Science of Intelligence study led by Olga Wudarczyk, Murat Kirtay, Doris Pischedda, Verena Hafner, John-Dylan Haynes, Anna Kuhlen, and Rasha Abdel Rahman, published recently in Scientific Reports, the answer is “yes”.
The study, which focused on verbal communication between humans and robots, assessed the extent to which humans simulate and predict (or co-represent) the verbal actions of a robot. In a picture-naming task, human participants took turns with a robot (named Pepper and produced by Softbank Robotics) naming objects shown on a screen.
Previous findings show that when humans perform the same task with human partners, they engage in internal simulations of their partner's turn. These simulations trigger lexical selection, the search for the best possible term to use, which inhibits and slows down participants' own naming of objects.
When performing the task with the robot, the human participants showed none of these partner-elicited inhibitory effects. On the contrary, they experienced a facilitation effect, naming the objects faster. This suggests that human simulation of robots does not extend down to the level of lexical selection, but occurs only at the initial stage of language production, where the meaning of the verbal message is generated, and that this results in facilitated language production.
The researchers concluded that robots facilitate core conceptualization processes, the stage at which humans transform thoughts into language during speaking, meaning that verbal interaction with robots may be advantageous for spoken language production. This is an important result for the development and use of social robots in many contexts, both clinical (for example, settings where speech delays and lexical retrieval difficulties prevail) and non-clinical.