
[PUBLISHED] The complex relationship of AI ethics and trust in human–AI teaming: insights from advanced real-world subject matter experts

Our latest paper has been published:

Lopez, J., *Textor, C., *Lancaster, C., *Schelble, B., Freeman, G., Zhang, R., McNeese, N., & Pak, R. (2023). The complex relationship of AI ethics and trust in human–AI teaming: insights from advanced real-world subject matter experts. AI and Ethics, 1–21.

Download PDF

Abstract: Human–autonomy teams will likely first see use in environments with ethical considerations (e.g., military, healthcare). Therefore, we must consider how best to design an ethical autonomous teammate that can promote trust within teams, an antecedent to team effectiveness. In the current study, we conducted 14 semi-structured interviews with US Air Force pilots on the topics of autonomous teammates, trust, and ethics. A thematic analysis revealed that the pilots see themselves serving a parental role alongside a developing machine teammate. As parents, the pilots would feel responsible for their machine teammate’s behavior, and the teammate’s unethical actions may not lead to a loss of trust. However, once the pilots feel their teammate has matured, its unethical actions would likely lower trust. To repair that trust, the pilots would want to understand their teammate’s processing, yet they are concerned about their ability to understand a machine’s processing. Additionally, the pilots would expect their teammate to indicate that it is improving or plans to improve. The findings from this study highlight the nuanced relationship between trust and ethics, as well as a duality between infantilized teammates that cannot bear moral weight and advanced machines whose decision-making processes may be incomprehensibly complex. Future investigations should further explore this parent–child paradigm and its relation to trust development and maintenance in human–autonomy teams.