Abstracts Track 2023


Area 1 - Human Factors for Interactive Systems, Research, and Applications

Nr: 9
Title:

CPR Assistance in Mixed-Reality

Authors:

Krzysztof Pietroszek

Abstract: We design and evaluate a mixed reality real-time communication system for remote assistance during CPR emergencies. Our system allows an expert to remotely guide a first responder on how to give first aid. RGBD cameras capture a volumetric view of the local scene, including the patient, the first responder, and the environment. The volumetric capture is augmented into the remote expert's view so that the expert can spatially guide the first responder using visual and verbal instructions. We evaluate the system in a research study in which participants face a simulated emergency: the first responder moves the patient into the recovery position and performs chest compressions as well as mouth-to-mask ventilation. The RGBD cameras provide three-dimensional visual information that guides the first responder through these steps. We conducted an evaluation with 30 participants divided into two groups. Both groups were given the same tasks: give first aid to a lifeless person, bring the person into the recovery position, and start CPR after the person stops breathing. We compared instruction via the mixed reality (MR) approach (group A) with video-based communication (group B), analyzing objective metrics of CPR quality recorded by the CPR mannequin as well as user data including workload surveys and interviews. Our main contributions are as follows: 1. We introduce an MR communication system designed for remote first aid assistance. 2. We compare MR communication with video-based communication. 3. We measure workload and performance when giving assisted first aid in MR and via videoconferencing. Our study compares mixed reality against videoconferencing-based assistance using CPR performance measures, cognitive workload surveys, and semi-structured interviews. We find that the remote expert uses more visual communication, including gestures and objects, when assisting in mixed reality than in videoconferencing. Moreover, the performance and workload of the first responder during the simulation do not differ significantly between the two technologies.

Area 2 - Interaction Design

Nr: 63
Title:

Development of Feasible GUI Input Elements for Smartphone Cross-Haptic Handling by Pilots While Steering Small Planes

Authors:

Hans Weghorn

Abstract: Over the last decades, smartphone technology has evolved a variety of GUI input and output elements, which are rooted in the very early developments of computing: the idea of transferring switches, push buttons, selectors, and wheels from electromechanical panels onto RGB displays. The standard elements known from GUI libraries such as X11 or Tcl/Tk were also carried over into smartphone programming frameworks. In smartphone handling, human users typically hold the device with one hand and operate input controls on the touch screen with fingers of the other hand. New haptic actions have emerged, such as two-finger stretching and single-finger swiping, the latter even combined with acceleration and slow-down effects. Because of the physically small screen size, which is natural for devices meant to be carried in small pockets, finger touches require a certain precision on the display surface. Especially when a choice between several top-level elements is needed, the individual input elements become even smaller. When haptic precision is not possible, such concepts clearly do not work well. The research described here focuses on smartphones as assistive tools for pilots of small and sporting planes. This environment does not allow pilots to use two hands on a smartphone at all, since one hand is always occupied with holding the main control stick or yoke, and often it is not even practicable to look at the smartphone screen while applying an input. During operation and steering of the plane, continuously varying and unpredictable side and shaking forces prevent reliable precision when touching input elements on any smartphone screen. A fundamental analysis yields two basic input modes that work without visual contact: (a) a single finger, without further guidance, touches the screen anywhere, or (b) one hand embraces the smartphone housing, which allows a more targeted touch-down of its forefinger on the screen area. In both cases (a) and (b), the finger action can be a short, distinct touch, a clear long press, or knocking, either with a certain number of counts or even with a certain tapping rhythm. While the touch position on the 2D screen surface is relatively random in (a), some orientation on the screen is possible in (b), since the embracing hand provides a feeling for the screen limits. According to practical experience, the screen surface can then be split logically into two halves (top/bottom or left/right) or into four quadrants referenced by the corners, and the finger action can be landed in the intended region with some haptic coordination. Finger swipes and circular movements, distinguished by linear or rotational direction, also work reliably in (b). Summing up all these modes, a large number of distinguishable single-finger input events is available even when handling the device blindly. These concepts were derived in recent years from the development and field testing of smartphone tools intended to assist pilots of small planes in navigation and flight logging, especially in phases of high workload. The supplementary video shows, for example, how zooming in and out of a moving-map application can be commanded blindly by a single-finger long press and quick tapping, respectively. Further cross-haptic input elements are used in these pilot apps, which are still being improved.
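
A minimal illustrative sketch, in Kotlin, of how such blind single-finger input could be classified, assuming a quadrant split of the screen and a simple time threshold separating short taps from long presses. All names, thresholds, and example values here are assumptions for illustration and are not taken from the pilot apps described above.

// Illustrative classifier for blind single-finger input: quadrant + press type.
enum class Quadrant { TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT }
enum class PressType { SHORT_TAP, LONG_PRESS, MULTI_TAP }

data class BlindInputEvent(val quadrant: Quadrant, val press: PressType, val tapCount: Int)

// Assumed threshold (milliseconds); a real value would come from field testing.
const val LONG_PRESS_MS = 600L

// Map a touch position (normalized to 0..1, origin at the top-left corner)
// to one of four screen quadrants.
fun quadrantOf(x: Double, y: Double): Quadrant = when {
    x < 0.5 && y < 0.5 -> Quadrant.TOP_LEFT
    x >= 0.5 && y < 0.5 -> Quadrant.TOP_RIGHT
    x < 0.5 -> Quadrant.BOTTOM_LEFT
    else -> Quadrant.BOTTOM_RIGHT
}

// Classify one blind input given as a list of (downTime, upTime) contact intervals
// of a single finger: one long contact becomes a LONG_PRESS, one short contact a
// SHORT_TAP, and several contacts in quick succession a MULTI_TAP with a count.
fun classify(x: Double, y: Double, contacts: List<Pair<Long, Long>>): BlindInputEvent {
    val quadrant = quadrantOf(x, y)
    val (down, up) = contacts.first()
    return when {
        contacts.size > 1 -> BlindInputEvent(quadrant, PressType.MULTI_TAP, contacts.size)
        up - down >= LONG_PRESS_MS -> BlindInputEvent(quadrant, PressType.LONG_PRESS, 1)
        else -> BlindInputEvent(quadrant, PressType.SHORT_TAP, 1)
    }
}

fun main() {
    // Two quick taps in the upper-right quadrant, e.g. commanding "zoom in" blindly.
    val event = classify(0.8, 0.2, listOf(0L to 120L, 300L to 420L))
    println(event)  // BlindInputEvent(quadrant=TOP_RIGHT, press=MULTI_TAP, tapCount=2)
}

Normalized coordinates keep the quadrant mapping independent of the physical screen size; grouping successive contacts into a single event (for example by a maximum inter-tap gap) would happen upstream of this classifier.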