My Projects

Construction of SVS: Scale of Virtual Twin's Similarity to Physical Counterpart in Simple Environments

Xuesong Zhang, Adalberto L. Simeone
In Proceedings of the 2024 ACM Symposium on Spatial User Interaction (SUI 2024). Association for Computing Machinery, Article 3, 9 pages.
https://doi.org/10.1145/3677386.3682100

Due to the lack of a universally accepted definition for the term “virtual twin”, there are varying degrees of similarity between physical prototypes and their virtual counterparts across different research papers. This variability complicates the comparison of results from these papers. To bridge this gap, we introduce the Scale of Virtual Twin’s Similarity (SVS), a questionnaire intended to quantify the similarity between a virtual twin and its physical counterpart in simple environments in terms of visual fidelity, physical fidelity, environmental fidelity, and functional fidelity. This paper describes the development process of the SVS questionnaire items and provides an initial evaluation through two between-subjects user studies to validate the items under the categories of visual and functional fidelity. Additionally, we discuss how to apply the SVS in research and development settings.

Focus Agent: LLM-Powered Virtual Focus Group

Taiyu Zhang, Xuesong Zhang, Robbe Cools, and Adalberto L. Simeone
In Proceedings of the ACM International Conference on Intelligent Virtual Agents (IVA 2024). Association for Computing Machinery, 1–10.
https://doi.org/10.1145/3652988.3673918

In the domain of Human-Computer Interaction, focus groups represent a widely utilised yet resource-intensive methodology, often demanding the expertise of skilled moderators and meticulous preparatory efforts. This study introduces the “Focus Agent”, a framework powered by a Large Language Model (LLM) that both simulates a focus group (for data collection) and acts as a moderator in focus group sessions with human participants. To assess the data quality derived from the Focus Agent, we ran five focus group sessions with a total of 23 human participants and also deployed the Focus Agent to simulate these discussions with AI participants. Quantitative analysis indicates that the Focus Agent can generate opinions similar to those of human participants. Furthermore, the research highlights areas for improvement when LLMs act as moderators in focus group discussions that include human participants.
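
The paper's implementation is not reproduced here; purely as a loose illustration of the underlying idea of LLM agents role-playing focus-group participants in turn, the Python sketch below calls the OpenAI chat API. The model name, personas, topic, and prompts are placeholders of my own and are not taken from the Focus Agent.

# Hypothetical sketch: a round-robin focus-group simulation with an LLM.
# Personas, prompts, and the model name are illustrative placeholders,
# not the Focus Agent's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = [
    "a 24-year-old student who uses VR weekly",
    "a 58-year-old teacher new to immersive technology",
    "a UX designer sceptical of AI moderation",
]
TOPIC = "How should virtual meetings handle turn-taking?"

def agent_reply(persona: str, transcript: list[str]) -> str:
    """Ask the LLM to answer as one simulated participant."""
    messages = [
        {"role": "system",
         "content": f"You are {persona} taking part in a focus group. "
                    "Answer briefly and in the first person."},
        {"role": "user",
         "content": f"Topic: {TOPIC}\nDiscussion so far:\n"
                    + "\n".join(transcript) + "\nGive your opinion."},
    ]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return (resp.choices[0].message.content or "").strip()

transcript: list[str] = []
for _ in range(2):                    # two discussion rounds
    for persona in PERSONAS:          # each simulated participant speaks in turn
        reply = agent_reply(persona, transcript)
        transcript.append(f"{persona}: {reply}")

print("\n".join(transcript))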

ARcoustic: A Mobile Augmented Reality System for Seeing Out-of-View Traffic

Xuesong Zhang, Xian Wu, Robbe Cools, Adalberto L. Simeone, Uwe Gruenefeld
In Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2023). Association for Computing Machinery, 178–190.
https://doi.org/10.1145/3580585.3606461

Locating out-of-view vehicles can help pedestrians avoid critical traffic encounters. Some previous approaches focused solely on visualising out-of-view objects, neglecting how these objects are localised and the limitations this entails. Other methods rely on continuous camera-based localisation, raising privacy concerns. Hence, we propose the ARcoustic system, which uses a microphone array to localise nearby moving vehicles and visualises those that are out of view to support pedestrians. First, we present the implementation of our sonic-based localisation and discuss its current technical limitations. Next, we present a user study (n = 18) in which we compared two state-of-the-art visualisation techniques (Radar3D, CompassbAR) to a baseline without any visualisation. Results show that both techniques present too much information, resulting in below-average user experience and longer response times. Therefore, we introduce a novel visualisation technique that aligns with the technical localisation limitations and meets pedestrians’ preferences for effective visualisation, as demonstrated in the second user study (n = 16). Lastly, we conduct a small field study (n = 8) testing our ARcoustic system under realistic conditions. Our work shows that out-of-view object visualisations must align with the underlying localisation technology and fit the concrete application scenario.
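
The abstract does not spell out how the microphone-array localisation works; as a generic, textbook-style illustration of the idea (and not necessarily the approach ARcoustic uses), the sketch below estimates the bearing of a sound source from a two-microphone pair via GCC-PHAT time-difference-of-arrival. The microphone spacing and sample rate are assumed values.

# Hypothetical sketch: bearing estimation from a two-microphone pair using
# GCC-PHAT time-difference-of-arrival. A generic approach, not necessarily
# the localisation method implemented in ARcoustic.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.1        # m, assumed distance between the two microphones
SAMPLE_RATE = 48_000     # Hz, assumed

def gcc_phat(sig: np.ndarray, ref: np.ndarray) -> float:
    """Delay of `sig` relative to `ref`, in samples."""
    n = sig.size + ref.size
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12                 # PHAT weighting
    corr = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    corr = np.concatenate((corr[-max_shift:], corr[:max_shift + 1]))
    return float(np.argmax(np.abs(corr)) - max_shift)

def direction_of_arrival(left: np.ndarray, right: np.ndarray) -> float:
    """Bearing in degrees; 0 means the source is broadside to the mic pair."""
    delay_s = gcc_phat(left, right) / SAMPLE_RATE
    # Clamp to the physically possible range before taking the arcsine.
    ratio = np.clip(delay_s * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(ratio)))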

CReST: Design and Evaluation of the Cross-Reality Study Tool

Robbe Cools, Xuesong Zhang, Adalberto L. Simeone
In Proceedings of the 22nd International Conference on Mobile and Ubiquitous Multimedia (MUM 2023). Association for Computing Machinery, 403–413.
https://doi.org/10.1145/3626705.3627803

In this work, we describe our experience developing and evaluating the Cross-Reality Study Tool (CReST), which allows researchers to conduct and observe Virtual Reality (VR) user studies from an Augmented Reality perspective. So far, most research on conducting VR user studies has centred around tools for asynchronous setup, data collection and analysis, or (immersive) replays. Conversely, CReST is centred around supporting the researcher synchronously. We replicated three VR studies as example cases, applied CReST to them, and conducted an interview with one author of each case. We then performed a case study in which 17 participants took part in a user study where researchers used CReST to observe their interaction with virtual drawer and closet artefacts. We make CReST available to other researchers as a tool that enables direct observation of participants in VR and supports rapid, qualitative evaluations.

Using the Think Aloud Protocol in an Immersive Virtual Reality Evaluation of a Virtual Twin

Xuesong Zhang, Adalberto L. Simeone
In Proceedings of the 2022 ACM Symposium on Spatial User Interaction (SUI 2022). Association for Computing Machinery, Article 13, 8 pages.
https://doi.org/10.1145/3565970.3567706

Employing virtual prototypes and immersive Virtual Reality (VR) in usability evaluation can save time and speed up iteration during the design process. However, it is still unclear whether we can use conventional usability evaluation methods in VR and obtain results comparable to performing the evaluation on a physical prototype. Hence, we conducted a user study with 24 participants, where we compared the results obtained by using the Think Aloud Protocol to inspect an everyday product and its virtual twin. Results show that more than 60% of the reported usability problems were shared by both the physical and the virtual prototype, and the in-depth qualitative analysis further highlights the potential of immersive VR evaluations. We report on the lessons we learned for designing and implementing virtual prototypes in immersive VR evaluations.

Using Heuristic Evaluation in Immersive Virtual Reality Evaluation

Xuesong Zhang, Adalberto L. Simeone
In Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia (MUM 2021). Association for Computing Machinery.
https://doi.org/10.1145/3490632.3497863

Previous work shows that virtual reality itself can be used as a medium in which to stage an experimental evaluation. However, it is still unclear whether conventional usability evaluation methods can be applied directly to virtual reality evaluations and whether they will lead to similar insights when compared to equivalent real-world lab studies. Therefore, we conducted a user study with nine participants, comparing Heuristic Evaluation (HE) of a novel smart artefact in the real world and in VR. We asked participants to evaluate the physical prototype in the real world and its virtual counterpart in the virtual environment. Results show that HE performs similarly when evaluating artefact usability in VR and in the real world, in terms of the usability problems identified. The VR implementation itself has an impact on the results of the immersive VR evaluation.

Shunfeng’er: A portable solution with Huawei Eyewear to enhance your hearing capability

Xuesong Zhang, Xian Wu, Fei Qu, Adalberto L. Simeone
24th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2022).
Most Creative Award in Student Design Competition

Assessing Social Text Placement in Mixed Reality TV

Florian Mathis, Xuesong Zhang, Mark McGill, Adalberto L. Simeone, Mohamed Khamis
In Proceedings of the 2020 ACM International Conference on Interactive Media Experiences (IMX 2020). Association for Computing Machinery.
https://doi.org/10.1145/3391614.3399402
Best Work-in-Progress Award

TV experiences are often social, be it at-a-distance (through text) or in-person (through speech). Mixed Reality (MR) headsets offer new opportunities to enhance social communication during TV viewing by placing social artifacts (e.g. text) anywhere the viewer wishes, rather than being constrained to a smartphone or TV display. In this paper, we use VR as a test-bed to evaluate different text locations for MR TV specifically. We introduce the concepts of wall messages, below-screen messages, and egocentric messages in addition to state-of-the-art on-screen messages (i.e., subtitles) and controller messages (i.e., reading text messages on the mobile device) to convey messages to users during TV viewing experiences. Our results suggest that a) future MR systems that aim to improve viewers’ experience need to consider the integration of a communication channel that does not interfere with viewers’ primary task, that is, watching TV, and b) independent of the location of text messages, users prefer to be in full control of them, especially when reading and responding to them. Our findings pave the way for further investigations towards social at-a-distance communication in Mixed Reality.

Outline Pursuits: Gaze-assisted Selection of Occluded Objects in Virtual Reality

Ludwig Sidenmark, Christopher Clarke, Xuesong Zhang, Jenny Phu, Hans Gellersen
In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI 2020). Association for Computing Machinery.
https://doi.org/10.1145/3313831.3376438

In 3D environments, objects can be difficult to select when they overlap, as this affects available target area and increases selection ambiguity. We introduce Outline Pursuits which extends a primary pointing modality for gaze-assisted selection of occluded objects. Candidate targets within a pointing cone are presented with an outline that is traversed by a moving stimulus. This affords completion of the selection by gaze attention to the intended target's outline motion, detected by matching the user's smooth pursuit eye movement. We demonstrate two techniques that implement this concept, one with a controller as the primary pointer, and one in which Outline Pursuits are combined with head pointing for hands-free selection. Compared with conventional raycasting, the techniques require less movement for selection as users do not need to reposition themselves for a better line of sight, and selection time and accuracy are less affected when targets become highly occluded.
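
As a rough sketch of the pursuit-matching step described above (the window length and correlation threshold below are illustrative values of my own, not the parameters reported in the paper): the recent gaze trajectory is correlated with the trajectory of the moving stimulus on each candidate's outline, and the candidate whose stimulus the gaze follows most closely is selected.

# Hypothetical sketch of smooth-pursuit matching: correlate a window of 2D
# gaze samples with each candidate's outline stimulus and pick the best match.
# The threshold is an assumed value, not the one used in Outline Pursuits.
import numpy as np

CORR_THRESHOLD = 0.8   # assumed minimum mean correlation to accept a selection

def pursuit_score(gaze: np.ndarray, stimulus: np.ndarray) -> float:
    """Mean Pearson correlation of gaze vs. stimulus over x and y.

    Both arrays have shape (window_length, 2), in screen coordinates.
    """
    scores = []
    for axis in range(2):
        g, s = gaze[:, axis], stimulus[:, axis]
        if g.std() < 1e-9 or s.std() < 1e-9:
            return 0.0                   # no motion on this axis: no evidence
        scores.append(np.corrcoef(g, s)[0, 1])
    return float(np.mean(scores))

def select_target(gaze: np.ndarray, candidates: dict):
    """Return the id of the best-matching candidate, or None if none passes."""
    best_id, best_score = None, CORR_THRESHOLD
    for cid, stimulus in candidates.items():
        score = pursuit_score(gaze, stimulus)
        if score > best_score:
            best_id, best_score = cid, score
    return best_id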

Cloud Columba CC2

Designing and implementing the Cloud Columba CC2, 09.2019

The Cloud Columba CC2 is a web application for the design and simulation of microfluidic systems. As a member of the Columba team, I designed and implemented the Cloud Columba CC2.

Intelligent Information Lighting at Subway Stations

Project of Designworkshop II SS18: Escaping Flatland, 07.2018

The subway is a crucial mode of transportation connecting different areas of a city, but aging infrastructure has resulted in an uneven distribution of passengers across the sections of incoming trains. To address this issue, an Intelligent Information Lighting System was implemented at subway stations to help passengers predict train load levels and improve their experience. This project, carried out by an interdisciplinary team including students from Human-Computer Interaction, Landscape Architecture, and Media Informatics, used an iterative design process to tackle a real-world urban challenge.