PUBLISHED: Detecting Automation Failures in a Simulated Supervisory Control Environment

Foroughi, C. K., Sibley, C., Brown, N. L., Rovira, E., Pak, R., & Coyne, J. T. (2019). Detecting Automation Failures in a Simulated Supervisory Control Environment. Ergonomics, 1–22. https://doi.org/10.1080/00140139.2019.1629639

The goal of this research was to determine how individuals perform and allocate their visual attention when monitoring multiple automated displays that differ in reliability. Ninety-six participants completed a simulated supervisory control task in which each automated display had a different level of reliability (viz., 70%, 85%, and 95%). In addition, participants completed a high and a low workload condition. The performance data revealed that (1) participants failed to detect automation misses approximately 2.5 times more often than automation false alarms, (2) participants' automation failure detection was worse in the high workload condition, and (3) participants' automation failure detection remained mostly static across reliability levels. The eye-tracking data revealed that participants spread their attention relatively equally across all three automated displays for the duration of the experiment. Together, these data support a system-wide trust approach as the default position of an individual monitoring multiple automated displays.

Practitioner Summary: Given the rapid growth of automation throughout the workforce, there is an immediate need to better understand how humans monitor multiple automated displays concurrently. The data in this experiment support a system-wide trust approach as the default position of an individual monitoring multiple automated displays.

Keywords: automation; automation failures; human-automation interaction; supervisory control; attention allocation; system-wide trust; eye-tracking