
Keynote Lectures

Interactive Human Centered Artificial Intelligence: A Definition and Research Challenges
Albrecht Schmidt, Computer Science, Ludwig-Maximilians-Universität München (LMU), Germany

Cognitive Augmentation
Pattie Maes, MIT, United States

Alignment in Augmented Reality: Beyond Sight
Anton Nijholt, Faculty EEMCS, University of Twente, Netherlands

 

Interactive Human Centered Artificial Intelligence: A Definition and Research Challenges

Albrecht Schmidt
Computer Science, Ludwig-Maximilians-Universität München (LMU)
Germany
https://www.um.informatik.uni-muenchen.de/personen/professoren/schmidt/index.html
 

Brief Bio
Albrecht Schmidt is a professor of Human-Centered Ubiquitous Media in the computer science department of the Ludwig-Maximilians-Universität München in Germany. He studied computer science in Ulm and Manchester and received his PhD from Lancaster University, UK, in 2003. He held prior academic positions at several universities, including Stuttgart, Cambridge, Duisburg-Essen, and Bonn, and also worked as a researcher at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) and at Microsoft Research in Cambridge. In his research, he investigates the inherent complexity of human-computer interaction in ubiquitous computing environments, particularly in view of increasing computer intelligence and system autonomy. Albrecht has actively contributed to the scientific discourse in human-computer interaction through the development, deployment, and study of functional prototypes of interactive systems and interface technologies in different real-world domains. His early experimental work addressed the use of diverse sensors to recognize situations and interactions, influencing our understanding of context-awareness and situated computing. He proposed the concept of implicit human-computer interaction. Over the years, he has worked on automotive user interfaces, tangible interaction, interactive public display systems, interaction with large high-resolution screens, and physiological interfaces. Most recently, he has focused on how information technology can provide cognitive and perceptual support to amplify the human mind; to investigate this further, he received an ERC grant in 2016. Albrecht has co-chaired several SIGCHI conferences; he is on the editorial board of ACM TOCHI, edits a forum in ACM Interactions and a column on human augmentation in IEEE Pervasive Computing, and formerly edited a column on interaction technologies in IEEE Computer.
He co-founded the ACM conferences on tangible and embedded interaction (2007) and on automotive user interfaces (2010). In 2018, Albrecht was inducted into the ACM SIGCHI Academy, and in 2020 he was elected to the Leopoldina, the German National Academy of Sciences.


Abstract
Artificial Intelligence (AI) has become the buzzword of the last decade. Advances so far have been largely technical, and only recently have we seen a shift towards the human aspects of artificial intelligence. The discussion has centered particularly on making AI interactive and explainable, which is a very narrow view. In this talk, I will suggest a definition of “Interactive Human-Centered Artificial Intelligence” and outline its required properties, in order to start a discussion on the goals of AI research and the properties we should expect of future systems. It is central to be able to state who will benefit from a system or service. Staying in control is essential for humans to feel safe and to have self-determination. I will discuss the key challenge of control and understanding of AI-based systems and show that levels of abstraction and granularity of control are a potential solution. I further argue that AI and machine learning (ML) are very much comparable to raw materials (like stone, iron, or bronze). Historical periods are named after these materials because they changed what humans could build and what tools humans could engineer. Hence, I argue that in the AI age we need to shift the focus from the material (e.g., the AI algorithms, as there will be plenty of material) towards the tools it enables and that are beneficial for humans. It is apparent that AI will allow the automation of mental routine tasks and that it will extend our ability to perceive things and foresee events. For me, the central question is how to create these tools for amplifying the human mind without compromising human values.



 

 

Cognitive Augmentation

Pattie Maes
MIT
United States
 

Brief Bio
Pattie Maes is a professor in MIT's Program in Media Arts and Sciences and until recently served as its academic head. She runs the Media Lab's Fluid Interfaces research group, which aims to radically reinvent the human-machine experience. Coming from a background in artificial intelligence and human-computer interaction, she is particularly interested in the topic of cognitive enhancement, or how immersive and wearable systems can actively assist people with memory, attention, learning, decision making, communication, and wellbeing. Maes is the editor of three books, and is an editorial board member and reviewer for numerous professional journals and conferences. She has received several awards: Fast Company named her one of the 50 most influential designers (2011); Newsweek picked her as one of the "100 Americans to watch for" in the year 2000; TIME Digital selected her as a member of the “Cyber Elite,” the top 50 technological pioneers of the high-tech world; the World Economic Forum honored her with the title "Global Leader for Tomorrow"; Ars Electronica awarded her the 1995 World Wide Web category prize; and in 2000 she was recognized with the "Lifetime Achievement Award" by the Massachusetts Interactive Media Council. She has also received an honorary doctorate from the Vrije Universiteit Brussel in Belgium, and her 2009 TED talk on "the 6th sense device" is among the most-watched TED talks ever.


Abstract
Computers, smartphones, and smart watches are generally considered tools that enhance productivity. But while they have put the world’s knowledge at our fingertips, people need additional skills in order to be successful and realize their goals. Maes' work explores how future personal devices may help us with cognitive skills such as attention, motivation, behavior change, memory, creativity, and emotion regulation. Her interdisciplinary research group draws inspiration from the brain and cognitive sciences literature, makes use of sensor and machine learning technology, and adopts a human-centered design approach to create and study new wearable and immersive systems that can help people strengthen some of these “soft skills.” While doing so, the group aims to be mindful of the ethical and social issues that are critical in designing highly personal enhancement systems.



 

 

Alignment in Augmented Reality: Beyond Sight

Anton Nijholt
Faculty EEMCS, University of Twente
Netherlands
 

Brief Bio
Anton Nijholt received his Ph.D. in computer science from the Vrije Universiteit in Amsterdam. He has held positions at various universities inside and outside the Netherlands. In 1989 he was appointed full professor at the University of Twente in the Netherlands, where he initiated its Human Media Interaction group. For several years he was a scientific advisor to Philips Research Europe, Eindhoven. From 2015 to 2017 he was a global research fellow at the Imagineering Institute in Iskandar, Johor, Malaysia. In 2018 he became a member of Microsoft's Technical Leadership Advisory Board on Brain-Computer Interfaces (BCI). His main research interests are multimodal interaction, with a focus on entertainment computing, affect, humor, and brain-computer interfacing. Together with many of the fifty Ph.D. students he supervised, Nijholt has written numerous journal and conference papers on these topics and has served as program chair and general chair of many large international conferences on entertainment computing, virtual agents, affective computing, faces & gestures, multimodal interaction, computer animation, and brain-computer interfaces. More recently he has explored these topics in augmented reality environments. Nijholt is the chief editor of the Human-Media Interaction specialty section of the journals Frontiers in Psychology and Frontiers in Computer Science. He is also series editor of the Springer Book Series on Gaming Media and Social Effects. Recent edited books include the 2019/2020 volumes “Making Smart Cities More Playable: Exploring Playable Cities” and "Brain Art: Brain-Computer Interfaces for Artistic Expression."


Abstract
In augmented reality, we integrate virtual content with the real world, which requires alignment between the two. Although virtual content can address all of our senses, augmented reality research has focused on sight, both because it is a dominant sense and because many useful augmented reality applications can be designed from a vision-oriented point of view. Virtual content can also address the tactile, gustatory, olfactory, and auditory senses, support multisensorial experiences, and induce crossmodal experiences. Human perception is multisensorial. What does this mean if we introduce virtual humans in augmented reality? In this talk, we consider several types of problems and challenges associated with alignment as we move away from traditional AR applications that address only the sense of sight.


