Abstract

In the rapidly evolving domain of human-computer interaction (HCI), ensuring a high-quality user experience (UX), which encompasses all aspects of a user’s interaction with an interface, is critical to the success and adoption of technology. Central to UX is usability: the degree to which an interface can be used effectively, efficiently, and satisfactorily by its intended audience. Usability testing is a key method for uncovering usability problems through structured evaluations with real users. However, the traditional approach to analyzing usability test recordings, in which UX evaluators observe user behavior and verbalizations while taking notes, is labor-intensive and prone to human error. These challenges are compounded by the well-documented evaluator effect, whereby individual differences among evaluators lead to inconsistent identification and interpretation of usability problems.

This dissertation addresses these limitations by proposing a nuanced human-AI collaborative approach to enhance the effectiveness of usability analysis. To identify optimal configurations of such collaboration, I examined four key factors: 1) Representations of AI, comparing non-interactive visualizations, user-directed (passive) conversational assistants (CAs), and system-directed (proactive) CAs; 2) Interaction modalities, contrasting voice- and text-based interactions when engaging with CAs; 3) Timing of suggestions, analyzing the impact of AI-generated suggestions delivered before, during, or after the occurrence of a usability problem; and 4) Perceived expertise, investigating differences in collaboration with CAs simulating novice versus expert UX evaluators. The study of the fourth factor also explored how UX evaluators’ behaviors and attitudes evolve through long-term collaboration with a CA. This body of work culminates in a comparative evaluation of the usability results produced by human-only, AI-only, and human-AI collaborative analysis. The results demonstrate that well-designed human-AI collaborative configurations yield significant improvements in both the quality of usability results and the analysis experience.

The contributions of this dissertation include: empirical insights into the practices and challenges of usability analysis; evidence on how interaction modality, suggestion timing, and perceived expertise affect evaluators’ analytical behavior and perceptions; AI-powered analysis tools for identifying usability problems; a dataset capturing the types of questions UX evaluators ask of CAs; and a methodology for evaluating the quality of usability results. Collectively, these contributions support a framework for nuanced human-AI collaboration in usability evaluations. The thesis of this dissertation is: drawing on evidence from user testing of AI representations, interaction modalities, timing of suggestions, and perceived expertise, nuanced human-AI collaboration improves the effectiveness of usability evaluations compared to traditional non-AI methods.

Library of Congress Subject Headings

Human-computer interaction; Visual analytics; User interfaces (Computer systems)--Design; User-centered system design

Publication Date

5-2025

Document Type

Dissertation

Student Type

Graduate

Degree Name

Computing and Information Sciences (Ph.D.)

Department, Program, or Center

Department of Computing and Information Sciences (Ph.D.)

College

Golisano College of Computing and Information Sciences

Advisor

Kristen Shinohara

Advisor/Committee Member

Mingming Fan

Advisor/Committee Member

Cecilia O. Alm

Campus

RIT – Main Campus

Plan Codes

COMPIS-PHD
