Abstract

Trust plays a decisive role in the effectiveness of human-AI teams, particularly in tasks that depend on coordinated decision-making under uncertainty. While prior research acknowledges that trust in automation is dynamic, current work provides limited insight into how trust evolves within an interaction, what causes it to become miscalibrated, and how transparency affects these processes. This thesis examines trust calibration in a controlled 2-D grid-world search-and-rescue environment, in which 54 participants collaborated with an AI teammate presented through four communication modes based on the Ability, Benevolence, and Integrity (ABI) framework. The study uses secondary analysis of experimental data to examine: (1) changes in self-reported trust across early and late phases of collaboration; (2) the extent to which different transparency modes affect trust and calibration; (3) behavioral indicators of trust miscalibration, including acceptance of AI suggestions, success rates, giving-up behavior, and lying; and (4) whether calibrated trust, defined as alignment between human reliance and AI performance, predicts team outcomes. Analyses include reliability checks of the trust scales (Cronbach's α = 0.84–0.93), mixed-effects models for within-session trust changes, Welch ANOVA for between-mode differences, and OLS regression models linking calibration to performance. Results indicate that trust increased modestly but not significantly over time (p = .116). Transparency mode effects on trust and calibration were generally non-significant, suggesting that lightweight textual cues alone did not meaningfully shift reliance alignment. However, the Benevolence mode exhibited a significant negative interaction with late-phase trust (β = −0.495, p = .009), indicating that benevolence framing without clear competence signals may depress trust.
The most robust finding was the strong positive relationship between calibration and performance: Phase 2 calibration explained 94.7% of the variance in team performance. The findings highlight the importance of designing AI teammates that communicate competence and limitations clearly. Recommendations for future work include pairing transparency with demonstrated competence, providing real-time calibration feedback, and prioritizing reliance alignment over trust maximization.
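The calibration-performance analysis described above can be illustrated with a minimal OLS sketch. The data below are synthetic, and the `calibration_score` definition (one minus the absolute gap between a participant's acceptance rate and the AI's actual accuracy) is an assumed operationalization for illustration, not the exact measure used in the thesis.

```python
import numpy as np

def calibration_score(acceptance_rate, ai_accuracy):
    """Hypothetical calibration index: 1 minus the absolute gap between
    how often a participant accepted AI suggestions and how often the
    AI was actually correct. Higher = better reliance alignment."""
    return 1.0 - abs(acceptance_rate - ai_accuracy)

def ols_fit(x, y):
    """Fit y = b0 + b1*x by ordinary least squares;
    return (slope, intercept, R^2)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return beta[1], beta[0], 1.0 - ss_res / ss_tot

# Synthetic sample of 54 "participants" in which calibration
# strongly predicts team performance, mirroring the reported pattern.
rng = np.random.default_rng(0)
accept = rng.uniform(0.3, 1.0, size=54)   # acceptance of AI suggestions
ai_acc = rng.uniform(0.5, 0.9, size=54)   # AI accuracy per session
calib = np.array([calibration_score(a, p) for a, p in zip(accept, ai_acc)])
perf = 0.8 * calib + rng.normal(0.0, 0.05, size=54)

slope, intercept, r2 = ols_fit(calib, perf)
print(f"slope={slope:.2f}, R^2={r2:.2f}")
```

With well-aligned synthetic data, the fit recovers a positive slope and a high R², which is the shape of the reported calibration-performance relationship; the actual thesis values come from the real experimental data, not this sketch.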

Publication Date

12-2025

Document Type

Thesis

Student Type

Graduate

Degree Name

Professional Studies (MS)

Department, Program, or Center

Graduate Programs & Research

Advisor

Sanjay Modak

Advisor/Committee Member

Parthasarathi Gopal

Campus

RIT Dubai
