Abstracts Track 2025


Area 1 - Human Factors for Interactive Systems, Research, and Applications

Nr: 83
Title:

Limited Field of View Is Not the Cause of Distance Underestimation, Also in Real Environments

Authors:

Filip Kowalik and Jacek Matulewski

Abstract: Depth perception in real, virtual, and augmented environments, and in particular the role of field of view limitation (FoVL), remains an active research topic because of its importance for augmented and virtual reality (AR and VR) interfaces. We aimed to resolve the discrepancy between the findings of [1] and [2] regarding the impact of FoVL on distance estimation. The first study suggests that FoVL impairs depth perception by limiting visual cues, while the second indicates that FoVL is of little significance. These discrepancies may stem from the different methodologies used in the two experiments: devices with different parameters, different experimental conditions, different scenes (static or dynamic environments, indoors and outdoors), and, above all, the method of depth assessment (blind walking vs. verbal estimation). In a very straightforward experiment, we tested how FoVL affects depth perception and, consequently, the ability to judge distances to objects. To simplify the design, we removed the virtual environment altogether: the experiment was conducted in an 8.3 × 6 m room with physical stimuli placed at various distances, and participants wore goggles without electronics, fitted only with shutters leaving circular apertures of 5.5 cm, 4.1 cm, 2.75 cm, and 1.4 cm in diameter, without imitating the inertia of an AR helmet [1]. Thus, although the research question stems from AR-related issues, the experiment ultimately concerned human perception rather than technology. The results differ fundamentally from those of [1] and are closer to those of [2]. In [1], distances measured by walking were underestimated by up to about 10.4% relative to the actual distances, and by up to 19.3% in the case of triangulated walking. This is considerably more than in the current experiment, in which the average relative error was 5.5% for verbal estimates and 5.2% for walking. In [2], the indicated distance was 94.4% (walking) and 98.8% (verbal) of the actual distance, which is similar to the values obtained in the current study, i.e., 94.8% and 94.5%, respectively. We also compare our results with those obtained in our laboratory using AR goggles [3].

1. Willemsen, P., Colton, M., Creem-Regehr, S., & Thompson, W. (2009). The effects of head-mounted display mechanical properties and field-of-view on distance judgments in virtual environments. ACM Transactions on Applied Perception, 6(2), 1-15.
2. Knapp, J. M., & Loomis, J. M. (2004). Limited field of view of head-mounted displays is not the cause of distance underestimation in virtual environments. Presence: Teleoperators & Virtual Environments, 13(5), 572-577.
3. Łukasik, A., Matulewski, J., Karkowska, K., Grzankowska, I., Joachimiak, M., Pietrykowski, D., & Sztramski, M. (2024). Do we need to squeeze large tele-AR scenes to fit them into a small room? Procedia Computer Science, 246, 3371-3380.
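The abstract above reports results in two forms, as an average relative error and as the indicated distance expressed as a percentage of the actual distance. As a minimal sketch of how these relate, assuming the relative error is the underestimation divided by the actual distance (a definition the abstract does not state explicitly; the symbols epsilon, d_actual, and d_indicated are introduced here only for illustration):

\[
\varepsilon = \frac{d_{\text{actual}} - d_{\text{indicated}}}{d_{\text{actual}}}
\qquad\Longleftrightarrow\qquad
\frac{d_{\text{indicated}}}{d_{\text{actual}}} = 1 - \varepsilon .
\]

Under this assumption, the reported average relative errors of 5.2% (walking) and 5.5% (verbal) correspond to indicated distances of 94.8% and 94.5% of the actual distance, which are the percentages the abstract uses for the comparison with [2].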

Nr: 121
Title:

Dissecting the Uncanny Valley: Perceptual and Social-Cognitive Mechanisms

Authors:

Dawid Ratajczyk

Abstract: As artificial agents increasingly enter social and service contexts, the uncanny valley presents a persistent challenge in human–computer interaction. It refers to a drop in comfort or affinity people experience when an artificial agent appears almost, but not fully, human. Research shows that the uncanny valley is multidimensional, encompassing visual, behavioural, and mental aspects of humanlikeness (Diel et al., 2021). However, conflating these dimensions can blur the distinct mechanisms involved, contributing to inconsistent findings. For example, studies suggest the existence of two distinct dips in affinity, one for moderately and another for highly humanlike robots, indicating that separate processes may be responsible (Kim et al., 2022). I argue that this effect has two semi-independent roots: a perceptual uncanny valley and a social-cognitive uncanny valley. The perceptual component arises from the sensitivity of human visual processing to deviations in facial structure. Subtle violations of canonical facial geometry, such as misaligned eyes, unnatural skin texture, or disproportionate feature spacing, disrupt configural processing and trigger immediate negative affect. Because similar aversive responses are observed even with distorted photographs of real humans, these effects appear to reflect domain-general mechanisms rather than reactions specific to artificial agents. The social-cognitive component is qualitatively different. Here, discomfort is driven by learned social scripts, cultural narratives, and intergroup categorization. Robots and advanced AI systems occupy an ambiguous social status: neither mere tools nor fully accepted social partners. Historical associations, such as the origin of the word robot, from the Czech robota, meaning forced labor, together with popular narratives of machine rebellion contribute to persistent anxieties about dominance, autonomy, and human replacement. In this talk, I will outline a conceptual distinction between the perceptual and social-cognitive components of the uncanny valley. Making this distinction explicit can help resolve theoretical ambiguities and improve empirical designs.

Diel, A., Weigelt, S., & MacDorman, K. F. (2021). A meta-analysis of the uncanny valley's independent and dependent variables. ACM Transactions on Human-Robot Interaction, 11(1), 1-33.
Kim, B., de Visser, E., & Phillips, E. (2022). Two uncanny valleys: Re-evaluating the uncanny valley across the full spectrum of real-world human-like robots. Computers in Human Behavior, 135, 107340.