MIA
Human-Robot Interaction at the Workplace
(Mensch-Roboter Interaktion im Arbeitsleben bewegungseingeschränkter Personen)
Motivation
Human-robot workplaces, where people and robots work together cooperatively, are part of the industry of tomorrow. This industry integrates new services under the notion of "Workplace as a Service", in which each service can be addressed individually. In a workplace setting, people often already have their hands occupied with different tasks, which leaves room for exploring new communication and interaction technologies, including robotic systems. Moreover, people with disabilities may benefit from this type of technology, since it can increase their integration into the labor market.
Goals and Approach
In the MIA research project, innovative sensor technologies and interaction designs are being developed to make complex robot control manageable for people who are able to move their heads and eyes. We intend to use technologies such as inertial measurement units (IMUs), eye tracking, and electrooculography (EOG), and to provide feedback through augmented reality. In this context, our research is oriented towards testing and evaluating new concepts for robot control and interaction possibilities for humans.
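To make the approach more concrete, the sketch below shows one way head movements could be mapped to robot motion: head yaw and pitch, as they might be estimated by a head-worn IMU, are turned into Cartesian velocity commands for the arm's end effector, with a dead zone and a speed limit for safety. This is a minimal illustrative sketch, not MIA project code: read_head_orientation and send_velocity are hypothetical placeholders for an IMU driver and a robot interface, and all gains and thresholds are made up.

```python
import math
import time
from dataclasses import dataclass


@dataclass
class HeadPose:
    yaw: float    # rad; positive = head turned left
    pitch: float  # rad; positive = head tilted up


def read_head_orientation() -> HeadPose:
    """Hypothetical IMU/MARG driver; returns a fixed example pose here."""
    return HeadPose(yaw=math.radians(8.0), pitch=math.radians(-3.0))


def head_to_velocity(pose: HeadPose,
                     dead_zone: float = math.radians(5.0),
                     gain: float = 0.05,
                     v_max: float = 0.03) -> tuple:
    """Map yaw/pitch offsets from the neutral head pose to x/y end-effector
    velocities (m/s). A dead zone suppresses unintended drift from small head
    movements; the output is clamped so the arm never exceeds a safe speed."""
    def axis(angle: float) -> float:
        if abs(angle) < dead_zone:
            return 0.0
        v = math.copysign(gain * (abs(angle) - dead_zone), angle)
        return max(-v_max, min(v_max, v))

    return axis(pose.yaw), axis(pose.pitch)


def send_velocity(vx: float, vy: float) -> None:
    """Hypothetical robot interface (e.g. a Cartesian velocity command)."""
    print(f"end-effector velocity: vx={vx:+.3f} m/s, vy={vy:+.3f} m/s")


if __name__ == "__main__":
    for _ in range(3):                 # a few control cycles for illustration
        vx, vy = head_to_velocity(read_head_orientation())
        send_velocity(vx, vy)
        time.sleep(0.02)               # ~50 Hz control loop
```

In the actual project context, eye tracking or EOG could provide additional input channels and augmented reality would supply the feedback; the mapping above only illustrates the head-motion part.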
Innovations and Perspectives
The research results will enable the design of a new collaborative human-robot workplace. Because this research is grounded in empirical studies, a library and a manufacturing company have been chosen as the scenarios for testing our hypotheses.
Current Work
Our current approaches focus on the following topics:
- Ethnographic analysis of a sheltered workshop (Büngern-Technik) for people with mobility impairments.
- Understanding modality choices under changing environmental conditions as a potential approach for teleoperation.
- Evaluating the use of augmented reality, in particular augmented visual cues, for robot teleoperation and assisted teleoperation (a minimal sketch of this idea follows this list).
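The third point can be illustrated with a small geometric sketch of one such cue: a circle projected onto the table plane directly beneath the gripper, whose radius grows with the gripper's height, so the operator can read both horizontal alignment with the target and the remaining vertical distance. This is an illustrative sketch under assumed conventions (a shared world frame in metres, a known table height), not project code; render_circle stands in for whatever the AR headset's rendering API provides, and all numbers are placeholders.

```python
from dataclasses import dataclass


@dataclass
class Vec3:
    x: float
    y: float
    z: float


def circle_cue(gripper: Vec3, table_height: float,
               base_radius: float = 0.02, scale: float = 0.05) -> tuple:
    """Project the gripper position straight down onto the table plane and
    grow the circle with the gripper's height above the table, so the cue
    encodes both horizontal alignment and remaining vertical distance."""
    height_above_table = max(0.0, gripper.z - table_height)
    center = Vec3(gripper.x, gripper.y, table_height)
    radius = base_radius + scale * height_above_table
    return center, radius


def render_circle(center: Vec3, radius: float) -> None:
    """Hypothetical AR overlay call (e.g. a hologram on a head-mounted display)."""
    print(f"circle at ({center.x:.2f}, {center.y:.2f}, {center.z:.2f}), r={radius:.3f} m")


if __name__ == "__main__":
    gripper_pose = Vec3(0.42, -0.10, 0.93)   # example gripper position in metres
    center, radius = circle_cue(gripper_pose, table_height=0.75)
    render_circle(center, radius)
```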
Publications
Arévalo-Arboleda, Stephanie: Towards a Human-Robot Interaction Design for People with Motor Disabilities by Enhancing the Visual Space. PhD thesis, 2022. Keywords: assistive robotics, augmented reality.
Abstract: People with motor disabilities experience several physical limitations that affect not only their activities of daily living but also their integration into the labor market. Human-Robot Collaboration presents opportunities to enhance human capabilities and counter physical limitations through different interaction paradigms and technological devices. However, little is known about the needs, expectations, and perspectives of people with motor disabilities within a human-robot collaborative work environment. In this thesis, we aim to shed light on the perspectives of people with motor disabilities when designing a teleoperation concept that could enable them to perform manipulation tasks in a manufacturing environment. First, we provide the concerns of different people with motor disabilities, social workers, and caregivers about including a collaborative robotic arm in assembly lines. Second, we identify specific opportunities and potential challenges in hands-free interaction design for robot control. Third, we present a multimodal hands-free interaction for robot control that uses augmented reality to display the user interface. On top of that, we propose a feedback concept that provides augmented visual cues to aid robot operators in gaining a better perception of the location of the objects in the workspace and improve performance in pick-and-place tasks. We present our contributions through six studies with people with and without disabilities, and the empirical findings are reported in eight publications. Publications I, II, and IV aim to extend the research efforts of designing human-robot collaborative spaces for people with motor disabilities. Publication III sheds light on the reasoning behind hands-free modality choices, and Publication VIII evaluates a hands-free teleoperation concept with an individual with motor disabilities. Publications V-VIII explore augmented reality to present a user interface that facilitates hands-free robot control and uses augmented visual cues to address depth perception issues, thus improving performance in pick-and-place tasks. Our findings can be summarized as follows. We point out concerns grouped into three themes: the robot fitting into the social and organizational structure, human-robot synergy, and human-robot problem management. Additionally, we provide five lessons learned derived from the pragmatic use of participatory design for people with motor disabilities: (1) approach participants through different channels and allow for multidisciplinarity in the research team, (2) consider the relationship between social dependencies in the selection of a participatory design technique, (3) plan for early exposure to robots and other technology, (4) take into account all opinions in design sessions, and (5) acknowledge that ethical implications go beyond consent. Also, we introduce findings about the nature of modality choices in hands-free interaction, which point to the user's own abilities and individual experiences as determining factors in interaction evaluation. Finally, we present and evaluate a possible hands-free multimodal interaction design for robot control using augmented reality and augmented visual cues. We propose that augmented visual cues can improve depth perception and performance in pick-and-place tasks. Thus, we evaluated our designs of visual cues by taking into account depth-related variables (the target's distance and pose) and subjective certainty. Our results highlight that shorter distances and a clear pose lead to higher success, faster grasping time, and higher certainty. In addition, we re-designed our augmented visual cues considering visualization techniques and monocular cues that could be used to enhance the visual space for robot teleoperation. Our results demonstrate that our augmented visual cues can assist robot control and increase accuracy in pick-and-place tasks. In conclusion, our findings on people with motor disabilities in a human-robot collaborative workplace, a hands-free multimodal interaction design, and augmented visual cues can extend the knowledge about using mixed reality in human-robot interaction. Further, these contributions have the potential to promote future research to design inclusive environments for people with disabilities.
Arévalo-Arboleda, Stephanie; Becker, Marvin; Gerken, Jens: Does One Size Fit All? A Case Study to Discuss Findings of an Augmented Hands-Free Robot Teleoperation Concept for People with and without Motor Disabilities. Journal article. In: Technologies, 10 (1), 2022, ISSN: 2227-7080. Keywords: augmented reality, case study, hands-free interaction, learning points, people with motor disabilities, robot teleoperation.
Abstract: Hands-free robot teleoperation and augmented reality have the potential to create an inclusive environment for people with motor disabilities. It may allow them to teleoperate robotic arms to manipulate objects. However, the experiences evoked by the same teleoperation concept and augmented reality can vary significantly for people with motor disabilities compared to those without disabilities. In this paper, we report the experiences of Miss L., a person with multiple sclerosis, when teleoperating a robotic arm in a hands-free multimodal manner using a virtual menu and visual hints presented through the Microsoft HoloLens 2. We discuss our findings and compare her experiences to those of people without disabilities using the same teleoperation concept. Additionally, we present three learning points from comparing these experiences: re-evaluating the metrics used to measure performance, being aware of bias, and considering variability in abilities, which evokes different experiences. We consider that these learning points can be extrapolated to carrying out human-robot interaction evaluations with mixed groups of participants with and without disabilities.
Arévalo-Arboleda, Stephanie; Dierks, Tim; Ruecker, Franziska; Gerken, Jens: Exploring the Visual Space to Improve Depth Perception in Robot Teleoperation using Augmented Reality: The Role of Distance and Target's Pose in Time, Success, and Certainty. Conference paper. In: Carmelo Ardito; Rosa Lanzilotti; Alessio Malizia (Eds.): Human-Computer Interaction – INTERACT 2021, Springer, Cham, 2021. Keywords: augmented reality, depth perception, human-robot interaction, user study.
Abstract: Accurate depth perception in co-located teleoperation has the potential to improve task performance in manipulation and grasping tasks. We thus explore the operator's visual space and design visual cues using augmented reality that aim to facilitate the positioning of the gripper above a target object before attempting to grasp it. The designs we propose include a virtual circle (Circle), virtual extensions (Extensions) from the gripper's fingers, and a color-matching design using a real colormap with matching colored virtual circles (Colors). We conducted an experiment to evaluate these designs and the influence of the distance from the operator to the workspace and of the target object's pose. Here, we report on time, success, and perceived certainty in a grasping task. Our results show that a shorter distance leads to higher success, faster grasping time, and higher certainty. Concerning the target object's pose, a clear pose leads to higher success and certainty but, interestingly, slower task times. Regarding the design of cues, our results reveal that the simplicity of the Circle cue leads to the highest success and outperforms the most complex cue, Colors, also for task time, while the level of certainty seems to depend more on the distance than on the type of cue. We consider that our results can serve as an initial analysis to further explore these factors, both when designing to improve depth perception and within the context of co-located teleoperation.
Arévalo-Arboleda, Stephanie; Pascher, Max; Baumeister, Annalies; Klein, Barbara; Gerken, Jens: Reflecting upon Participatory Design in Human-Robot Collaboration for People with Motor Disabilities: Challenges and Lessons Learned from Three Multiyear Projects. Conference paper. In: The 14th PErvasive Technologies Related to Assistive Environments Conference (PETRA 2021), ACM, 2021, ISBN: 978-1-4503-8792-7/21/06. Keywords: accessibility design, human-robot collaboration, lessons learned, participatory design.
Abstract: Human-robot technology has the potential to positively impact the lives of people with motor disabilities. However, current efforts have mostly been oriented towards technology (sensors, devices, modalities, interaction techniques), thus relegating the user and their valuable input to the wayside. In this paper, we aim to present a holistic perspective on the role of participatory design in Human-Robot Collaboration (HRC) for People with Motor Disabilities (PWMD). We have been involved in several multiyear projects related to HRC for PWMD, where we encountered different challenges related to planning and participation, preferences of stakeholders, using certain participatory design techniques, technology exposure, as well as ethical, legal, and social implications. These challenges helped us derive five lessons learned that could serve as a guideline to researchers when using participatory design with vulnerable groups, in particular young researchers who are starting to explore HRC research for people with disabilities.
Arévalo-Arboleda, Stephanie; Ruecker, Franziska; Dierks, Tim; Gerken, Jens: Assisting Manipulation and Grasping in Robot Teleoperation with Augmented Reality Visual Cues. Conference paper. In: CHI Conference on Human Factors in Computing Systems (CHI '21), ACM, 2021, ISBN: 978-1-4503-8096-6/21/05. Keywords: augmented reality, depth perception, hands-free interaction, human-robot interaction, teleoperation, visual cues.
Abstract: Teleoperating industrial manipulators in co-located spaces can be challenging. Facilitating robot teleoperation by providing additional visual information about the environment and the robot affordances using augmented reality (AR) can improve task performance in manipulation and grasping. In this paper, we present two designs of augmented visual cues that aim to enhance the visual space of the robot operator through hints about the position of the robot gripper in the workspace and in relation to the target. These visual cues aim to improve distance perception and thus task performance. We evaluate both designs against a baseline in an experiment where participants teleoperate a robotic arm to perform pick-and-place tasks. Our results show performance improvements at different levels, reflected in objective and subjective measures, with trade-offs in terms of time, accuracy, and participants' views of teleoperation. These findings show the potential of AR not only in teleoperation, but also in understanding the human-robot workspace.
Arévalo-Arboleda, Stephanie; Pascher, Max; Lakhnati, Younes; Gerken, Jens: Understanding Human-Robot Collaboration for People with Mobility Impairments at the Workplace, a Thematic Analysis. Conference paper. In: RO-MAN 2020 - IEEE International Conference on Robot and Human Interactive Communication, IEEE, 2020, ISBN: 978-1-7281-6075-7. Keywords: assistive robotics, creating human-robot relationships, hri and collaboration in manufacturing environments.
Abstract: Assistive technologies, in particular human-robot collaboration, have the potential to ease the life of people with physical mobility impairments in social and economic activities. Currently, this group of people has lower rates of economic participation due to the lack of adequate environments adapted to their capabilities. We take a closer look at the needs and preferences of people with physical mobility impairments in a human-robot cooperative environment at the workplace. Specifically, we aim to design how to control a robotic arm in manufacturing tasks for people with physical mobility impairments. We present a case study of a sheltered workshop as a prototype for an institution that employs people with disabilities in manufacturing jobs. Here, we collected data from potential end-users with physical mobility impairments, social workers, and supervisors using a participatory design technique (Future Workshop). These stakeholders were divided into two groups, end-users and supervising personnel (social workers, supervisors), which were run across two separate sessions. The gathered information was analyzed using thematic analysis to reveal underlying themes across stakeholders. We identified concepts that highlight underlying concerns related to the robot fitting into the social and organizational structure, human-robot synergy, and human-robot problem management. In this paper, we present our findings and discuss the implications of each theme when shaping an inclusive human-robot cooperative workstation for people with physical mobility impairments.
Dierks, Tim: Visual Cues: Integration of object pose recognition with an augmented reality system as a means to support visual perception in human-robot control. Master's thesis, Westfälische Hochschule, Gelsenkirchen, 2020. Keywords: augmented reality, hands-free interaction, human-robot interaction, pose recognition.
Abstract: Autonomy and self-determination are fundamental aspects of living in our society. Supporting people for whom this freedom is limited due to physical impairments is the fundamental goal of this thesis. Especially for people who are paralyzed, even working at a desk job is often not feasible. Therefore, in this thesis a prototype of a robot assembly workstation was constructed that utilizes a modern Augmented Reality (AR) head-mounted display (HMD) to control a robotic arm. Through the use of object pose recognition, the objects in the working environment are detected, and this information is used to display different visual cues at the robotic arm or in its vicinity, providing the users with additional depth information and helping them determine object relations, which are often not easily discernible from a fixed perspective. To achieve this, a hands-free AR-based robot-control scheme was developed that uses speech and head movement for interaction. Additionally, multiple advanced visual cues were designed that utilize object pose detection for spatial-visual support. The pose recognition system is adapted from state-of-the-art research in computer vision to allow the detection of arbitrary objects regardless of texture or shape. Two evaluations were performed. First, a small user study that excluded the object recognition confirmed the general usability of the system and gave an impression of its performance; the participants were able to perform difficult pick-and-place tasks with a high success rate. Second, a technical evaluation of the object recognition system revealed adequate prediction precision, but the system is too unreliable for real-world scenarios, as the prediction quality is highly variable and depends on object orientation and occlusion.
Ruecker, Franziska: Visuelle Helfer: Ein Augmented Reality Prototyp zur Unterstützung der visuellen Wahrnehmung für die Steuerung eines Roboterarms (Visual Helpers: An Augmented Reality Prototype to Support Visual Perception for Controlling a Robotic Arm). Master's thesis, Westfälische Hochschule, Gelsenkirchen, 2020. Keywords: augmented reality, evaluation, hands-free interaction, human-robot interaction.
Abstract: Physical disabilities can restrict a person to the point that an autonomous and self-determined life is no longer possible, despite intact mental and cognitive abilities. For people who are paralyzed from the neck down, so-called tetraplegics, every bit of regained autonomy therefore means an increase in quality of life. In this Master's thesis, an augmented reality prototype is developed that allows tetraplegics, or people with a similar physical impairment, to perform assembly tasks at a human-robot workplace and can thus enable their integration into working life. The prototype lets the user control a Kuka iiwa robotic arm with the Microsoft HoloLens without using their hands. A particular focus is placed on enriching the user's field of view with dedicated virtual visualizations, so-called visual helpers, in order to compensate for the disadvantages caused by the target group's movement restrictions. These visual helpers are intended to support the control of the robotic arm and improve the operation of the prototype. An evaluation of the prototype showed tendencies that the visual helper concept lets users control the robotic arm more precisely and supports its operation.
Arévalo-Arboleda, Stephanie; Dierks, Tim; Ruecker, Franziska; Gerken, Jens: There's More than Meets the Eye: Enhancing Robot Control through Augmented Visual Cues. Conference paper. In: HRI 2020 - ACM/IEEE International Conference on Human-Robot Interaction, 2020, ISBN: 978-1-4503-7057. Keywords: augmented reality, human-robot interaction, visualization.
Abstract: In this paper, we present the design of a visual feedback mechanism using augmented reality, which we call augmented visual cues, to assist pick-and-place tasks during robot control. We propose to augment the robot operator's visual space in order to avoid attention splitting and increase situational awareness (SA). In particular, we aim to improve on the SA concepts of perception, comprehension, and projection as well as the overall task performance. For that, we built upon the interaction design paradigm proposed by Walker et al. On the one hand, our design augments the robot to support picking tasks; on the other hand, we augment the environment to support placing tasks. We evaluated our design in a first user study, and the results point to specific design aspects that need improvement while showing promise for the overall approach, in particular regarding user satisfaction and certain SA concepts.
Arévalo-Arboleda, Stephanie; Miller, Stanislaw; Janka, Martha; Gerken, Jens: What's behind a Choice? Understanding Modality Choices under Changing Environmental Conditions. Conference paper. In: ICMI '19 - 2019 International Conference on Multimodal Interaction, pp. 291-301, 2019, ISBN: 978-1-4503-6860-5. Keywords: hands-free interaction, modality choices, multimodality, point and select.
Abstract: Interacting with the physical and digital environment multimodally enhances user flexibility and adaptability to different scenarios. A body of research has focused on comparing the efficiency and effectiveness of different interaction modalities in digital environments. However, little is known about user behavior in an environment that provides the freedom to choose from a range of modalities. That is why we take a closer look at the factors that influence input modality choices. Building on the work by Jameson & Kristensson, our goal is to understand how different factors influence user choices. In this paper, we present a study that explores modality choices in a hands-free interaction environment, wherein participants can freely choose and combine three hands-free modalities (gaze, head movements, speech) to execute point-and-select actions in a 2D interface. On the one hand, our results show that users avoid switching modalities more often than we expected, particularly under conditions that should prompt modality switching. On the other hand, when users make a modality switch, user characteristics and the consequences of the experienced interaction have a higher impact on the choice than changes in environmental conditions. Further, when users switch between modalities, we identified different types of switching behaviors: users who deliberately try to find and choose an optimal modality (single switchers), users who try to find optimal combinations of modalities (multiple switchers), and a switching behavior triggered by error occurrence (error-biased switchers). We believe that these results help to further understand when and how to design for multimodal interaction in real-world systems.
Wöhle, Lukas; Miller, Stanislaw; Gerken, Jens; Gebhard, Marion: A Robust Interface for Head Motion based Control of a Robot Arm using MARG and Visual Sensors. Conference paper. In: 2018 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Rome, Italy, 2018. Keywords: hybrid sensor system, kalman filter, magnetic immune, orientation, sensor fusion, state machine.
Abstract: Head-controlled human-machine interfaces have gained popularity over the past years, especially for restoring the autonomy of severely disabled people, such as tetraplegics. These interfaces need to be reliable and robust with regard to environmental conditions to guarantee the safety of the user and enable direct interaction between a human and a machine. This paper presents a hybrid MARG and visual sensor system for head orientation estimation, which is in this case used to teleoperate a robotic arm. The system contains a Magnetic Angular Rate Gravity (MARG) sensor and a Tobii Eye Tracker 4C. A MARG sensor consists of a tri-axis accelerometer, a gyroscope, and a magnetometer, which enable a complete measurement of orientation relative to the direction of gravity and the magnetic field of the earth. The tri-axis magnetometer is sensitive to external magnetic fields, which result in incorrect orientation estimates from the sensor fusion process. In this work, the Tobii Eye Tracker 4C is used to improve head orientation estimation because it also features head tracking, even though it is commonly used for eye tracking. This type of visual sensor does not suffer from magnetic drift; however, it computes orientation data only if a user is detectable. Within this work, a state machine is presented that enables data fusion of the MARG and visual sensors to improve orientation estimation. The fusion of the orientation data of the MARG and visual sensors enables a robust interface that is immune to external magnetic fields and therefore increases the safety of the human-machine interaction.
Arévalo-Arboleda, Stephanie; Pascher, Max; Gerken, Jens: Opportunities and Challenges in Mixed-Reality for an Inclusive Human-Robot Collaboration Environment. Conference paper. In: Proceedings of the 2018 International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interactions (VAM-HRI) as part of the ACM/IEEE Conference on Human-Robot Interaction, pp. 83-86, Chicago, USA, 2018. Keywords: human-robot collaboration, mixed-reality, robot control, severe motor impaired.
Abstract: This paper presents an approach to enhance robot control using Mixed Reality. It highlights the opportunities and challenges in the interaction design to achieve a human-robot collaborative environment. Human-Robot Collaboration is a promising space for social inclusion: it enables people with severe physical impairments to interact with the environment by providing them with movement control of an external robotic arm. When discussing robot control, it is important to reduce the visual split that different input and output modalities introduce. Therefore, Mixed Reality is of particular interest when trying to ease communication between humans and robotic systems.
Information about the project (FKZ: 13FH011IX6)
Coordination
Westfälische Hochschule Gelsenkirchen Bocholt Recklinghausen, Gelsenkirchen
Partners
- Arbeitsgruppe "Sensortechnik und Aktorik" der Westfälischen Hochschule
- Institut für Automatisierungstechnik der Universität Bremen
- Arbeitsgruppe Interaktive Systeme der Universität Duisburg-Essen
- Rehavista GmbH
- Büngern-Technik
- pi4 robotics GmbH, Berlin
- IAT Gelsenkirchen
Funding Volume
EUR 0.76 million (100% funded by the BMBF)
Duration
08/2017 - 06/2021