Abstract: |
Within the so-called “Diversity Computing Spaces project” (DivComp), funded by the FWF (1), we aimed to develop two Diversity Computing Spaces, based on the concept of “Diversity Computing” [3].
As described by principal investigator Christopher Frauenberger, “[...] This project takes a design research approach to investigate how interactive technologies can create smart, physical spaces that scaffold shared, meaningful experiences for diverse groups of people.” (2). Our overall context is a school setting with 10- to 14-year-old students.
While we have already successfully presented the design process at CHI [2, 1], this abstract and the (hopefully!) accompanying demonstration aim to show the two resulting spaces in detail and in action.
The first diversity computing space consists of three areas of activity: A) a “bench” consisting of five cushions which act as buttons (true/false, i.e., a person sits on it vs. no one sits on it); B) a large round glass drawing board (60 cm diameter); and C) two smaller glass drawing boards. All drawing boards are filmed from below with webcams and streamed to three corresponding screens.
To make the space more engaging, the video streams are altered with different effects using Shadertoy (3). Which effects are applied depends on how many students sit on the bench and which cushions they use. The results are presented on one big and two small screens: the big screen shows the video feed of the large drawing board, and the two small screens show the corresponding smaller drawing boards. In addition, the big screen shows the two smaller drawing boards via picture-in-picture. The whole space is held together by a custom-made scaffold, which separates it from its surroundings while keeping it an open and inviting area.
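The coupling between bench and visuals can be sketched roughly as follows. This is a minimal illustration only: the effect names and the selection rule are hypothetical stand-ins, not the deployed logic; in the installation, the chosen effect parameterizes a Shadertoy shader applied to the webcam feeds.

```python
# Illustrative sketch: pick a video effect from the bench's cushion states.
# Effect names and the selection rule are hypothetical stand-ins.

# A hypothetical palette of effects, ordered by visual intensity.
EFFECTS = ["plain", "ripple", "kaleidoscope", "pixelate", "swirl", "glitch"]

def choose_effect(cushions):
    """cushions: list of five booleans (True = someone sits on that cushion).

    One possible rule: the number of occupied cushions selects the effect,
    so the visuals change as more students join the bench. The occupancy
    pattern could further tune effect parameters (not shown here).
    """
    if len(cushions) != 5:
        raise ValueError("the bench has exactly five cushions")
    occupied = sum(cushions)
    return EFFECTS[occupied]

# Example: two students on the bench.
print(choose_effect([True, False, True, False, False]))  # kaleidoscope
```

With an empty bench this rule falls back to the unaltered feed (`"plain"`), so the space remains legible even before anyone sits down.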
The second diversity computing space centers around a custom-made table at which people can kneel on soft carpets or pillows. It consists of three activity areas, each an iPad (10th generation) offering a different input or output modality. One iPad acts as a keyboard and allows textual input (“storytelling”); another allows drawing with one’s fingers (“illustrating”), with freely selectable line thickness and color. The third iPad shows the result (“artwork”): an image composed by a locally running generative AI model that takes the text prompt and the drawing as its basis.
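The flow from the two input iPads to the generative model can be sketched as below. All field names, the stroke format, and the `strength` knob are hypothetical illustrations of the idea of conditioning one generation on both modalities; the abstract does not specify the actual model or its interface.

```python
# Illustrative sketch: combine the "storytelling" text and the "illustrating"
# strokes into one request for a locally running image-generation model.
# Field names, stroke format, and the strength knob are hypothetical.

def build_request(prompt, strokes, strength=0.6):
    """prompt: free text from the keyboard iPad.
    strokes: list of (color, thickness, points) tuples from the drawing iPad.
    strength: hypothetical knob for how closely the output follows the drawing.
    """
    if not prompt and not strokes:
        raise ValueError("need at least a prompt or a drawing")
    return {
        "prompt": prompt.strip(),
        "num_strokes": len(strokes),
        "strength": strength,
    }

req = build_request("a dragon over the school yard",
                    [("red", 3, [(0, 0), (10, 10)])])
print(req["num_strokes"])  # 1
```

Because prompt and drawing are merged into a single request, contradictory inputs (e.g., a text about a dragon and a drawing of a house) simply yield an unexpected blend, which is exactly the “confuse the AI” play described below.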
The two interactions are very different, yet aim towards the same goal: to create a meaningful interaction for all people engaging with(in) those spaces. In both cases, the adolescents can take on different roles and switch them as they please. People can focus on drawing, on figuring out how the seating arrangement influences the effects, or on just watching.
In the second prototype, they can work together or even against each other. For example, they can draw and write towards the “same” goal, i.e., a specific image. However, they can also completely contradict each other and see what happens when they “confuse” the AI.
Regardless of the results and of the role an individual takes on in these interactions, a “side goal” is to expose the students to the concepts of video effects and, more importantly, to the strengths, weaknesses, and pitfalls of generative AI.
(1) Austrian Science Fund
(2) https://divcomp.frauenberger.name/en
(3) https://www.shadertoy.com/
[1] DOI 10.1145/3613904.3642240.
[2] DOI 10.1145/3544548.3581155.
[3] DOI 10.1145/3243461.