MIA
Human-Robot Interaction at the Workplace
(Mensch-Roboter Interaktion im Arbeitsleben bewegungseingeschränkter Personen)
Motivation
Human-robot workplaces, where people and robots work together cooperatively, are part of the industry of tomorrow. This industry integrates new services under the notion of a "Workplace as a Service", in which each service can be addressed individually. In a workplace setting, people often already have their hands occupied with different tasks, which leaves room for exploring new communication and interaction technologies that may include a robotic system. Moreover, people with disabilities may benefit from this type of technology, since it can increase their integration into the labor market.
Goals and Approach
In the MIA research project, innovative sensor technologies and interaction designs are being developed to make complex robot control manageable for people who are able to move their head and eyes. We intend to use different technologies such as inertial measurement units (IMUs), eye tracking, and electrooculography (EOG), and to provide feedback through augmented reality. In this context, our research is oriented towards testing and evaluating new concepts for robot control and interaction possibilities for humans.
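To give a flavor of what such hands-free control could look like in practice, the sketch below maps head orientation, as it might be reported by an IMU, to planar velocity commands for a robotic arm. It is a minimal illustration only; the gains, dead zone, and function names are our own assumptions and not part of the MIA system.

```python
import math

# Minimal, illustrative sketch: map head orientation (from an IMU) to
# Cartesian velocity commands for a robot arm. All names, gains, and
# thresholds are assumptions for illustration, not project code.

DEAD_ZONE_DEG = 5.0   # ignore small head movements (assumed value)
MAX_SPEED = 0.05      # cap on end-effector speed in m/s (assumed value)
GAIN = 0.002          # m/s per degree of head deflection (assumed value)

def head_to_velocity(pitch_deg: float, yaw_deg: float) -> tuple:
    """Convert head pitch/yaw (degrees) into a planar velocity command.

    Pitch moves the end effector forward/backward, yaw moves it left/right.
    A dead zone around the neutral pose prevents unintended drift.
    """
    def scale(angle: float) -> float:
        if abs(angle) < DEAD_ZONE_DEG:
            return 0.0
        v = GAIN * (abs(angle) - DEAD_ZONE_DEG) * math.copysign(1.0, angle)
        return max(-MAX_SPEED, min(MAX_SPEED, v))

    return scale(pitch_deg), scale(yaw_deg)

if __name__ == "__main__":
    # Example: head tilted 12 degrees down and 20 degrees to the right.
    vx, vy = head_to_velocity(pitch_deg=-12.0, yaw_deg=20.0)
    print(f"velocity command: vx={vx:.3f} m/s, vy={vy:.3f} m/s")
```

In a real system this mapping would be combined with a confirmation mechanism (for example dwell time or a speech command) before any motion is executed.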
Innovations and Perspectives
The research results will enable the design of a new collaborative human-robot workplace. Since this research is supported by empirical studies, a library and a manufacturing company have been chosen as the scenarios for testing our hypotheses.
Current Work
Our current approaches focus on the following topics:
- Ethnographic analysis of a sheltered workshop (Büngern-Technik) for people with mobility impairments.
- Understanding modality choices under changing environmental conditions as a potential approach for teleoperation.
- Evaluating the use of augmented reality, in the form of augmented visual cues, for robot teleoperation and assisted teleoperation (a simplified sketch follows this list).
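As an illustration of the kind of augmented visual cue explored in this line of work, the following sketch color-codes the distance between the robot gripper and a target object so an operator can judge depth from a fixed viewpoint. The thresholds and names are illustrative assumptions, not the cues used in our studies.

```python
import math

# Illustrative sketch of one kind of augmented visual cue: color-coding the
# distance between the robot gripper and a target object. Thresholds and
# names are assumptions, not the published cue designs.

NEAR_M = 0.03   # within 3 cm: ready to grasp (assumed)
MID_M = 0.10    # within 10 cm: approaching (assumed)

def distance(gripper: tuple, target: tuple) -> float:
    """Euclidean distance between gripper and target positions in metres."""
    return math.dist(gripper, target)

def cue_color(d: float) -> str:
    """Map distance to a cue color rendered next to the gripper in AR."""
    if d <= NEAR_M:
        return "green"
    if d <= MID_M:
        return "yellow"
    return "red"

if __name__ == "__main__":
    d = distance((0.40, 0.10, 0.25), (0.42, 0.08, 0.22))
    print(f"distance: {d:.3f} m -> cue: {cue_color(d)}")
```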



Publications
Arévalo-Arboleda, Stephanie; Ruecker, Franziska; Dierks, Tim; Gerken, Jens: Assisting Manipulation and Grasping in Robot Teleoperation with Augmented Reality Visual Cues. In: CHI Conference on Human Factors in Computing Systems (CHI '21), forthcoming, 2021. DOI: 10.1145/3411764.3445398, ISBN: 978-1-4503-8096-6.
Abstract: Teleoperating industrial manipulators in co-located spaces can be challenging. Facilitating robot teleoperation by providing additional visual information about the environment and the robot affordances using augmented reality (AR) can improve task performance in manipulation and grasping. In this paper, we present two designs of augmented visual cues that aim to enhance the visual space of the robot operator through hints about the position of the robot gripper in the workspace and in relation to the target. These visual cues aim to improve distance perception and thus task performance. We evaluate both designs against a baseline in an experiment where participants teleoperate a robotic arm to perform pick-and-place tasks. Our results show performance improvements at different levels, reflected in objective and subjective measures, with trade-offs in terms of time, accuracy, and participants' views of teleoperation. These findings show the potential of AR not only in teleoperation, but also in understanding the human-robot workspace.
Arévalo-Arboleda, Stephanie; Pascher, Max; Lakhnati, Younes; Gerken, Jens: Understanding Human-Robot Collaboration for People with Mobility Impairments at the Workplace, a Thematic Analysis. In: RO-MAN 2020 - IEEE International Conference on Robot and Human Interactive Communication, 2020. DOI: 10.1109/RO-MAN47096.2020.9223489, ISBN: 978-1-7281-6075-7. PDF: https://hci.w-hs.de/pub_understanding_hrc_ta/
Abstract: Assistive technologies, in particular human-robot collaboration, have the potential to ease the life of people with physical mobility impairments in social and economic activities. Currently, this group of people has lower rates of economic participation due to the lack of adequate environments adapted to their capabilities. We take a closer look at the needs and preferences of people with physical mobility impairments in a human-robot cooperative environment at the workplace. Specifically, we aim to design how to control a robotic arm in manufacturing tasks for people with physical mobility impairments. We present a case study of a sheltered workshop as a prototype for an institution that employs people with disabilities in manufacturing jobs. Here, we collected data from potential end-users with physical mobility impairments, social workers, and supervisors using a Participatory Design technique (Future Workshop). These stakeholders were divided into two groups, end-users and supervising personnel (social workers, supervisors), which were run across two separate sessions. The gathered information was analyzed using thematic analysis to reveal underlying themes across stakeholders. We identified concepts that highlight underlying concerns related to the robot fitting into the social and organizational structure, human-robot synergy, and human-robot problem management. In this paper, we present our findings and discuss the implications of each theme when shaping an inclusive human-robot cooperative workstation for people with physical mobility impairments.
Dierks, Tim: Visual Cues: Integration of object pose recognition with an augmented reality system as means to support visual perception in human-robot control. Master's thesis, Westfälische Hochschule, Gelsenkirchen, 2020. PDF: https://hci.w-hs.de/pub_dierks_tim_masterthesis/
Abstract: Autonomy and self-determination are fundamental aspects of living in our society. Supporting people for whom this freedom is limited due to physical impairments is the fundamental goal of this thesis. Especially for people who are paralyzed, even working at a desk job is often not feasible. Therefore, in this thesis a prototype of a robot assembly workstation was constructed that utilizes a modern augmented reality (AR) head-mounted display (HMD) to control a robotic arm. Through the use of object pose recognition, the objects in the working environment are detected, and this information is used to display different visual cues at the robotic arm or in its vicinity, providing the users with additional depth information and helping them determine object relations, which are often not easily discernible from a fixed perspective. To achieve this, a hands-free AR-based robot-control scheme was developed that uses speech and head movement for interaction. Additionally, multiple advanced visual cues were designed that utilize object pose detection for spatial-visual support. The pose recognition system is adapted from state-of-the-art research in computer vision to allow the detection of arbitrary objects with no regard for texture or shape. Two evaluations were performed. First, a small user study that excluded the object recognition confirmed the general usability of the system and gave an impression of its performance; the participants were able to perform difficult pick-and-place tasks with a high success rate. Second, a technical evaluation of the object recognition system revealed adequate prediction precision, but the system is too unreliable for real-world scenarios, as the prediction quality is highly variable and depends on object orientation and occlusion.
Ruecker, Franziska: Visuelle Helfer: Ein Augmented Reality Prototyp zur Unterstützung der visuellen Wahrnehmung für die Steuerung eines Roboterarms (Visual Helpers: An Augmented Reality Prototype to Support Visual Perception for Controlling a Robotic Arm). Master's thesis, Westfälische Hochschule, Gelsenkirchen, 2020. PDF: https://hci.w-hs.de/pub_rueckermasterarbeit_komprimiert/
Abstract (translated from German): Physical disabilities can restrict a person to the point where an autonomous and self-determined life is no longer possible, despite intact mental and cognitive abilities. For people who are paralyzed from the neck down, so-called tetraplegics, every bit of regained autonomy therefore means a gain in quality of life. This master's thesis develops an augmented reality prototype that allows tetraplegics, or people with similar physical impairments, to carry out assembly tasks at a human-robot workplace and can thus enable their integration into working life. The prototype lets users control a Kuka iiwa robotic arm hands-free using the Microsoft HoloLens. A particular focus is placed on enriching the user's field of view with dedicated virtual visualizations, so-called visual helpers, to compensate for the disadvantages caused by the target group's mobility impairments. These visual helpers are intended to support the control of the robotic arm and improve the operation of the prototype. An evaluation of the prototype showed tendencies that the visual helper concept lets users control the robotic arm more precisely and supports its operation.
Arévalo-Arboleda, Stephanie; Dierks, Tim; Ruecker, Franziska; Gerken, Jens: There's More than Meets the Eye: Enhancing Robot Control through Augmented Visual Cues. In: HRI 2020 - ACM/IEEE International Conference on Human-Robot Interaction, 2020. DOI: 10.1145/3371382.3378240, ISBN: 978-1-4503-7057. PDF: https://hci.w-hs.de/pub_lbr1017_visualcues_arevalo_cameraready/
Abstract: In this paper, we present the design of a visual feedback mechanism using augmented reality, which we call augmented visual cues, to assist pick-and-place tasks during robot control. We propose to augment the robot operator's visual space in order to avoid attention splitting and increase situational awareness (SA). In particular, we aim to improve the SA concepts of perception, comprehension, and projection as well as overall task performance. For that, we built upon the interaction design paradigm proposed by Walker et al. On the one hand, our design augments the robot to support picking tasks; on the other hand, we augment the environment to support placing tasks. We evaluated our design in a first user study, and the results point to specific design aspects that need improvement while showing promise for the overall approach, in particular regarding user satisfaction and certain SA concepts.
Arévalo-Arboleda, Stephanie; Miller, Stanislaw; Janka, Martha; Gerken, Jens: What's behind a choice? Understanding Modality Choices under Changing Environmental Conditions. In: ICMI '19 - 2019 International Conference on Multimodal Interaction, pp. 291-301, 2019. DOI: 10.1145/3340555.3353717, ISBN: 978-1-4503-6860-5. PDF: https://hci.w-hs.de/pub_whatsbehindachoiceunderstandingmodalitychoicesunderchangingenvironmentalconditions/
Abstract: Interacting with the physical and digital environment multimodally enhances user flexibility and adaptability to different scenarios. A body of research has focused on comparing the efficiency and effectiveness of different interaction modalities in digital environments. However, little is known about user behavior in an environment that provides freedom to choose from a range of modalities. That is why we take a closer look at the factors that influence input modality choices. Building on the work by Jameson & Kristensson, our goal is to understand how different factors influence user choices. In this paper, we present a study that aims to explore modality choices in a hands-free interaction environment, wherein participants can freely choose and combine three hands-free modalities (gaze, head movements, speech) to execute point and select actions in a 2D interface. On the one hand, our results show that users avoid switching modalities more often than we expected, particularly under conditions that should prompt modality switching. On the other hand, when users make a modality switch, user characteristics and consequences of the experienced interaction have a higher impact on the choice than changes in environmental conditions. Further, when users switch between modalities, we identified different types of switching behaviors: users who deliberately try to find and choose an optimal modality (single switcher), users who try to find optimal combinations of modalities (multiple switcher), and a switching behavior triggered by error occurrence (error-biased switcher). We believe that these results help to further understand when and how to design for multimodal interaction in real-world systems.
Wöhle, Lukas; Miller, Stanislaw; Gerken, Jens; Gebhard, Marion: A Robust Interface for Head Motion based Control of a Robot Arm using MARG and Visual Sensors. In: 2018 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Rome, Italy, 2018. DOI: 10.1109/MeMeA.2018.8438699. PDF: https://hci.w-hs.de/pub_a_robust_interface_for_head_motion_based_control/
Abstract: Head-controlled human-machine interfaces have gained popularity over the past years, especially in the restoration of the autonomy of severely disabled people, such as tetraplegics. These interfaces need to be reliable and robust with regard to environmental conditions to guarantee the safety of the user and enable direct interaction between a human and a machine. This paper presents a hybrid MARG and visual sensor system for head orientation estimation, which is in this case used to teleoperate a robotic arm. The system contains a Magnetic Angular Rate Gravity (MARG) sensor and a Tobii eye tracker 4C. A MARG sensor consists of a tri-axis accelerometer, gyroscope, and magnetometer, which together enable a complete measurement of orientation relative to the direction of gravity and the magnetic field of the earth. The tri-axis magnetometer is sensitive to external magnetic fields, which result in incorrect orientation estimates from the sensor fusion process. In this work, the Tobii eye tracker 4C is used to improve head orientation estimation because it also features head tracking, even though it is commonly used for eye tracking. This type of visual sensor does not suffer from magnetic drift; however, it computes orientation data only if a user is detectable. This work presents a state machine that fuses the data of the MARG and visual sensors to improve orientation estimation. The fusion of the orientation data from MARG and visual sensors enables a robust interface that is immune to external magnetic fields and therefore increases the safety of the human-machine interaction.
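The following minimal sketch illustrates the core idea of combining the two orientation sources described above: prefer the visual head-pose estimate while the user is detected, and fall back to the MARG estimate otherwise. It is a simplified, assumption-based illustration, not the state machine published in the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple

# Simplified sketch of source selection between two head-orientation
# estimates. All names and values are illustrative assumptions.

class Source(Enum):
    VISUAL = auto()  # head pose from the visual sensor (no magnetic drift)
    MARG = auto()    # head pose from the MARG estimate (always available)

@dataclass
class HeadPose:
    yaw: float
    pitch: float
    roll: float

def select_orientation(visual_pose: Optional[HeadPose],
                       marg_pose: HeadPose) -> Tuple[Source, HeadPose]:
    """Prefer the visual estimate when the user is detected; otherwise
    fall back to the MARG estimate so the control input never drops out."""
    if visual_pose is not None:
        return Source.VISUAL, visual_pose
    return Source.MARG, marg_pose

if __name__ == "__main__":
    marg = HeadPose(yaw=31.0, pitch=-4.0, roll=1.5)   # may drift near magnetic disturbances
    print(select_orientation(None, marg))             # user not detected -> MARG
    print(select_orientation(HeadPose(28.5, -3.8, 1.2), marg))  # user detected -> VISUAL
```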
Arévalo-Arboleda, Stephanie; Pascher, Max; Gerken, Jens: Opportunities and Challenges in Mixed-Reality for an Inclusive Human-Robot Collaboration Environment. In: Proceedings of the 2018 International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interactions (VAM-HRI) at the ACM/IEEE Conference on Human-Robot Interaction, pp. 83-86, Chicago, USA, 2018. PDF: https://hci.w-hs.de/pub_opportunities_and_challenges_in_mixed-reality_for_an_inclusive_human-robot_collaboration_environment/
Abstract: This paper presents an approach to enhance robot control using mixed reality. It highlights the opportunities and challenges in the interaction design to achieve a human-robot collaborative environment. In fact, human-robot collaboration is the perfect space for social inclusion: it enables people who suffer severe physical impairments to interact with the environment by providing them with movement control of an external robotic arm. When discussing robot control, it is important to reduce the visual split that different input and output modalities carry. Therefore, mixed reality is of particular interest when trying to ease communication between humans and robotic systems.
Information about the project (FKZ: 13FH011IX6)
Coordination
Westfälische Hochschule Gelsenkirchen Bocholt Recklinghausen, Gelsenkirchen
Partners
- Arbeitsgruppe "Sensortechnik und Aktorik" der Westfälischen Hochschule
- Institut für Automatisierungstechnik der Universität Bremen
- Arbeitsgruppe Interaktive Systeme der Universität Duisburg-Essen
- Rehavista GmbH
- Büngern-Technik
- pi4 robotics GmbH, Berlin
- IAT Gelsenkirchen
Funding Volume
EUR 0.76 million (100 % funded by the BMBF)
Duration
08/2017 - 06/2021
