Venue: 4Q56, UWE Frenchay
In the coming decades, robots will be used more and more in new application areas such as households, assisted living homes, and public spaces.
Furthermore, more collaborative robots will be used professionally in the future, for example by workers on factory floors and by operators in extreme and hazardous environments. This means that there will be a growing user base of people who will interact with robots on a regular basis, who have not been trained to use robots and who do not have the technical background to know how robotics technology works.
Researchers in Embodied Cognition for Human-Robot Interaction are investigating the cognitive skills robots need to interact with humans in a natural and socially appropriate way. Research in this field has two parts. On one hand, there is the technical challenge of designing complex robot architectures that combine software components for multimodal input recognition, decision making, and multimodal output generation with appropriate robot hardware. On the other hand, there is the challenge of studying the human factors of human-robot interaction. What appearance should a robot have to support a given task? Should the robot show social behaviour when interacting with humans? How do humans perceive working together with robots depending on task context and application area?
In this talk, Dr Giuliani will give an overview of past and present research in Embodied Cognition for Human-Robot Interaction, showing its relevance to application areas such as manufacturing, social interaction, and nuclear decommissioning. He will also look to the future of the field and discuss research questions that still need to be addressed.
Speaker: Professor Manuel Giuliani
Manuel Giuliani is Professor in Embedded Cognitive AI for Robotics at the Bristol Robotics Laboratory (BRL), University of the West of England, Bristol. At BRL, he leads the ECHOS group (Embodied Cognition for Human RObot InteractionS). He received a Master of Arts in computational linguistics from Ludwig-Maximilian-University Munich, and a Master of Science and a PhD in computer science from Technical University Munich. Currently, he is Co-Investigator on the EPSRC-funded projects NCNR (National Centre for Nuclear Robotics), RNE (Robotics for Nuclear Environments), and DigiTOP (Digital Toolkit for Optimisation of Operators and Technology in Manufacturing Partnerships). In the past, he worked on the European projects JAST (Joint Action Science and Technology), JAMES (Joint Action for Multimodal Embodied Social Systems), and ReMeDi (Remote Medical Diagnostician), the Cluster of Excellence CoTeSys (Cognition for Technical Systems), and the Austrian Christian-Doppler-Laboratory "Contextual Interfaces". Before moving to Bristol, Manuel worked at the Technical University of Munich, fortiss GmbH in Munich, and the Center for Human-Computer Interaction at the University of Salzburg, where he led the Human-Robot Interaction group.
His research interests include human-robot interaction, social robotics, natural language processing, multimodal fusion, multimodal output generation, augmented and virtual reality interfaces, and embedded cognitive robot architectures.