
Keynote Lectures

Virtual Reality: Taking Performance to the Next Level
Cathy Craig, Ulster University/INCISIV Ltd., United Kingdom

Human Control in Daily Environment Automations
Fabio Paternò, Human Interfaces in Information Systems Laboratory, CNR-ISTI, Italy

Towards a Meaningful Robot-Assisted Neurorehabilitation Experience
Laura Marchal-Crespo, Department of Cognitive Robotics, TU Delft, Netherlands

 

Virtual Reality: Taking Performance to the Next Level

Cathy Craig
Ulster University/INCISIV Ltd.
United Kingdom
https://pure.ulster.ac.uk/en/persons/cathy-craig
 

Brief Bio
Cathy Craig is a professor of Experimental Psychology at Ulster University. Over the last 20 years she has been developing a brand of analytics to unlock the secrets of why we move, how we move and why sometimes we can’t. She was the first in the world to use virtual reality technology to control what the brain sees and measure how the brain responds. She has worked with elite athletes in many different sports (soccer, handball, cricket and rugby), but also with children with autism, older adults and people with Parkinson’s. She has a PhD from the University of Edinburgh, a Habilitation (HDR) from the University of Aix-Marseille, France, and over 100 scientific publications. In the early 2000s, she made history as the first person to use virtual reality to understand why curved free kicks in soccer (like those famously taken by David Beckham and Roberto Carlos, where the ball bends dramatically on its way to the goal) are harder for goalkeepers to stop. Fuelled by a desire to make her research a reality, Cathy founded INCISIV in 2018, a Belfast-based company that uses the power of VR gameplay to improve sports performance. Her team’s flagship VR app, CleanSheet Soccer, launched on the Meta Quest store in October 2023 and is currently used by over 150,000 users across the world. It is the most advanced goalkeeping training tool available, blending cutting-edge research with Meta's revolutionary VR technology. While CleanSheet focuses on intercepting and stopping shots, her team’s next title, DodgeCraft, launching in early December, will take fun fitness to a whole new level. This new VR experience centres on evasive manoeuvres, helping players master the art of dodging in an immersive, high-energy environment.
INCISIV has also developed MOViR, a head injury tool that independently assesses a player’s neural fitness (AQ) before and after a concussion, and works with researchers and physiotherapists to develop exercises in MOViR that accelerate both physical and neural rehabilitation. She has been using this technology with hundreds of athletes over the last three years.


Abstract
This talk will demonstrate how Virtual Reality (VR) can be used as a tool to understand and improve movement performance. The first part will show how rudimentary VR technology was used in the early 2000s to carefully control what the brain sees (perception) and, at the same time, very accurately measure how the brain responds (action). The versatility of VR means it can be used to study human behaviour in many different sports and health applications. Examples from behavioural neuroscience will showcase how VR can help us understand decision-making in elite sport, as well as conditions such as freezing of gait in people with Parkinson’s disease.
The second part of the talk will highlight how the recent evolution of both VR hardware and software has opened exciting new possibilities to take research out of the lab so it can make a difference to people’s lives. Examples will demonstrate how commercially available VR applications can enhance performance through the power of gameplay, whether by training perceptuo-motor skills in the home or by monitoring changes in players’ neural fitness that can occur as a result of injuries (e.g. concussions).
The talk will conclude by sharing some thoughts on the future of VR technology and the Metaverse and highlight opportunities for researchers to take advantage of this technology.



 

 

Human Control in Daily Environment Automations

Fabio Paternò
Human Interfaces in Information Systems Laboratory, CNR-ISTI
Italy
http://hiis.isti.cnr.it/Users/Fabio/index.html
 

Brief Bio
Fabio Paternò is a Research Director at the C.N.R.-ISTI in Pisa, where he coordinates the Laboratory on Human Interfaces in Information Systems (HIIS). His research focuses mainly on the field of human-computer interaction (HCI), in which he was one of the pioneers in Italy, with the aim of introducing computational support to improve usability, accessibility and user experience in various contexts. He has contributed significantly to areas such as model-based design of interactive applications, methods and tools for automatically supporting usability evaluation, accessibility, cross-device and migratory user interfaces, and end-user development. His research has always aimed at deepening and intertwining both theoretical and practical innovative aspects, with attention to possible social implications. Fabio is passionate about research that can have an impact that responds to the needs of society and is conducted in multidisciplinary environments. He has led numerous national and international interdisciplinary projects, always aiming to integrate different perspectives for effective solutions. The results are reported in over three hundred publications in international conferences, books and journals. His current research interests include interactive smart spaces, accessibility, human-robot interaction, and human-centered artificial intelligence. Fabio has held various leadership roles in major international conferences in the human-computer interaction area, and is currently co-chair of the ACM Intelligent User Interfaces 2025 conference. Among various awards, he has been named ACM Distinguished Scientist, IFIP Fellow, and member of the SIGCHI Academy.


Abstract
How people interact with digital technologies is currently caught between the Internet of Things and Artificial Intelligence. These technological trends offer great opportunities and new possibilities, but they also bring risks and new problems. Intelligent services may generate actions that do not match users’ real needs, and people may have difficulty understanding how to personalize the automatically generated automations. Thus, a fundamental challenge is how to provide tools that allow users to control and configure smart environments consisting of hundreds of interconnected devices, objects, and appliances. This means designing tools that allow people to obtain “humanations” (automations that users can understand and modify). This talk will discuss concepts and methods that can be useful to address the core challenges of human control over automations that can involve people, objects, devices, intelligent services, and robots. The goal is to identify innovative approaches that support end users, even those without programming experience, in understanding, creating, or modifying the automations in their daily environments, augmenting the human capacity to manage automations through effective interaction modalities, relevant analytics, understandable explanations, and intelligent recommendations. From this perspective, I will discuss how trigger-action programming can be a useful connection point between a wide variety of technologies and implementation languages on the one hand, and people without programming experience on the other. However, it also presents nuances that may become apparent, and critical, in realistic cases, generating undesired effects. Aspects that should be considered carefully include the temporal aspects of triggers and actions, the configuration of smart environments with multiple active automations, and security and privacy issues.
I will also discuss current practices; some experiences in application domains such as the smart home; the opportunities provided by interaction modalities such as conversational agents and mobile augmented reality; how explanations can make the behaviour of the smart space more transparent; and possible future research directions.



 

 

Towards a Meaningful Robot-Assisted Neurorehabilitation Experience

Laura Marchal-Crespo
Department of Cognitive Robotics, TU Delft
Netherlands
 

Brief Bio
Laura Marchal-Crespo is an Associate Professor at the Department of Cognitive Robotics, Faculty of Mechanical Engineering, Delft University of Technology, the Netherlands. She is also associated with Erasmus Medical Center, Rotterdam, the Netherlands. Her research focuses on the general areas of human-machine interaction and biological learning and, in particular, the use of robotic devices and immersive virtual reality for the assessment and rehabilitation of patients with acquired brain injuries such as stroke.


Abstract
Every year, millions of stroke survivors lose their functional autonomy due to paralysis, posing a tremendous societal and economic challenge. In the absence of a cure for stroke, clinical evidence suggests that patients should engage in personalized, task-specific, high-intensity training to maximize their recovery. In this talk, I put forward a new mindset to overcome many of the fundamental limitations of traditional approaches in stroke neurorehabilitation. I present new trends in rehabilitation robotics and immersive virtual reality that leverage realistic interaction with tangible virtual objects, and discuss how a better understanding of human skill acquisition can improve neurorehabilitation approaches.


