New research using eye-tracking technology has shown that the roles of pilot flying and pilot monitoring can be accurately distinguished from gaze behavior alone. The findings indicate that visual scanning patterns are reliable indicators of task engagement and team dynamics in the cockpit. The study, which points to potential advances for adaptive automation systems, was published in the journal Aviation Psychology and Applied Human Factors.
Sophie-Marie Stasch, Yannik Hilla, and Wolfgang Mack from the University of the Bundeswehr Munich and the University of Zurich conducted this investigation. The researchers sought to address a significant gap in aviation safety regarding how flight crews manage multitasking. Commercial and military flights often require two pilots to coordinate complex duties, yet most research on pilot workload focuses on single operators.
Flight manuals typically prescribe a structured division of labor in which tasks are handled sequentially. In practice, however, unexpected events or changing weather often force crews into concurrent multitasking. This discrepancy can produce fluctuating levels of cognitive load that are difficult for automated systems to detect.
The research team aimed to determine if eye-tracking metrics could serve as a diagnostic tool for these dynamic states. Their goal was to support the development of adaptive assistance systems. These future systems would need to perceive the current state of the crew before selecting an appropriate automated intervention.
To investigate this, the researchers recruited 28 participants for the experiment. The sample consisted primarily of officer cadets from the University of the Bundeswehr Munich. The average age of the group was approximately 23 years, and 14 of the participants possessed prior flight experience.
The study utilized a desktop simulation software known as openMATB. This platform is designed to mimic the multitasking demands of a flight deck through four distinct concurrent tasks. Participants were required to keep a cursor centered on a target using a joystick, which simulated manual flight control.
Simultaneously, they monitored gauges for system failures and managed fuel levels between tanks. They also had to respond to specific radio communication signals. The experiment required participants to perform these tasks first individually and then as a two-person team.
In the team condition, one participant assumed the role of the pilot flying. This individual was responsible for the manual manipulation of the joystick. The second participant acted as the pilot monitoring and provided verbal support without making physical control inputs.
The researchers recorded the participants’ eye movements using Tobii Glasses 3, wearable devices that capture gaze data 100 times per second. The team focused on metrics such as fixation duration, the length of time the eyes rest on a specific object.
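The article does not spell out how fixations were extracted from the raw gaze samples. A common approach in the eye-tracking literature is dispersion-threshold identification (I-DT); the sketch below illustrates that general idea with assumed thresholds and is not the study’s own pipeline.

```python
# Minimal dispersion-threshold (I-DT) fixation detection sketch.
# The study's actual algorithm and thresholds are not reported here;
# the 1-degree dispersion and 100 ms minimum duration are illustrative.
import numpy as np

def detect_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """Return (start_time, duration) for each detected fixation.

    x, y : numpy arrays of gaze coordinates (e.g., degrees of visual angle)
    t    : numpy array of timestamps in seconds (100 Hz -> 0.01 s apart)
    """
    fixations = []
    start, n = 0, len(t)
    while start < n:
        # Grow a window until its spatial dispersion exceeds the threshold.
        end = start
        while end + 1 < n:
            wx, wy = x[start:end + 2], y[start:end + 2]
            if (wx.max() - wx.min()) + (wy.max() - wy.min()) > max_dispersion:
                break
            end += 1
        duration = t[end] - t[start]
        if duration >= min_duration:
            fixations.append((t[start], duration))
            start = end + 1
        else:
            start += 1
    return fixations
```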
They also analyzed the number of times the gaze switched between different tasks on the screen. Another metric called Coefficient K was calculated to assess the mode of visual processing. This metric determines whether a person is using focal vision to concentrate on details or ambient vision to scan the environment.
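The formula for Coefficient K is not reproduced in this summary. In the eye-tracking literature it is commonly computed by comparing each fixation’s duration with the amplitude of the saccade that follows it, both z-standardized; a minimal sketch under that assumed definition:

```python
# Sketch of Coefficient K as commonly defined in the eye-tracking
# literature: for each fixation, the z-scored fixation duration minus
# the z-scored amplitude of the following saccade, averaged over all
# fixations. K > 0 suggests focal viewing, K < 0 suggests ambient viewing.
import numpy as np

def coefficient_k(fix_durations, saccade_amplitudes):
    """fix_durations[i] is the fixation preceding saccade_amplitudes[i]."""
    d = np.asarray(fix_durations, dtype=float)
    a = np.asarray(saccade_amplitudes, dtype=float)
    z_d = (d - d.mean()) / d.std()
    z_a = (a - a.mean()) / a.std()
    return float(np.mean(z_d - z_a))
```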
The researchers also examined “entropy” in the visual scan path. Transition entropy measures the randomness of moving the eyes between different areas of interest. Stationary entropy indicates how evenly visual attention is distributed across all available tasks.
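Again, the exact formulas are not quoted here. Gaze-entropy studies typically use Shannon entropy over the distribution of fixations across areas of interest (stationary entropy) and over the first-order transition matrix between those areas (transition entropy); the sketch below assumes those standard definitions.

```python
# Sketch of gaze entropy measures under their common definitions:
# stationary entropy describes how evenly fixations spread across areas
# of interest (AOIs); transition entropy describes the randomness of
# moving between AOIs, weighted by how often each source AOI is visited.
import numpy as np
from collections import Counter

def shannon_entropy(p):
    p = p[p > 0]                      # ignore empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def gaze_entropies(aoi_sequence):
    """aoi_sequence: list of AOI labels, one per fixation, in order."""
    aois = sorted(set(aoi_sequence))
    idx = {a: i for i, a in enumerate(aois)}
    n = len(aois)

    # Stationary entropy: distribution of fixations over AOIs.
    counts = Counter(aoi_sequence)
    p = np.array([counts[a] for a in aois], dtype=float)
    p /= p.sum()
    h_stationary = shannon_entropy(p)

    # Transition entropy: entropy of each AOI's outgoing transitions,
    # weighted by that AOI's overall share of fixations.
    trans = np.zeros((n, n))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        trans[idx[a], idx[b]] += 1
    h_transition = 0.0
    for i in range(n):
        if trans[i].sum() > 0:
            h_transition += p[i] * shannon_entropy(trans[i] / trans[i].sum())
    return h_stationary, h_transition
```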
The analysis of the data revealed distinct differences in how the two roles engaged with the visual environment. Participants in the monitoring role tended to fixate longer on the tracking task than those who were manually flying. This finding suggests that the monitoring pilots were heavily invested in observing the active pilot’s performance.
Those in the monitoring role also exhibited a significantly higher number of task switches. This pattern indicates that they had more attentional resources available to scan the various instrument panels. Without the demand of manual control, they could distribute their gaze more broadly.
In contrast, the participants acting as the pilot flying focused more intensely on specific operational tasks. Their visual behavior showed increased fixation frequency on communication and resource management displays. This suggests a prioritization of active system inputs over general monitoring.
The study also found differences in visual processing modes between the two groups. The monitoring pilots displayed negative values for Coefficient K. This value is associated with an ambient processing mode, implying a broader awareness of the overall environment.
The active pilots tended toward positive values, indicative of focal processing. This aligns with the need to concentrate on specific instruments to maintain aircraft control. The data paints a picture of two distinct cognitive states defined by the assigned role.
Beyond statistical differences, the researchers tested whether a computer could automatically identify the pilot’s role. They employed machine learning algorithms to classify the data segments. Specifically, they used a Random Forest classifier to analyze 30-second windows of eye-tracking data.
The classification model achieved a predictive accuracy of 97 percent. It showed equal precision in identifying both the pilot flying and the pilot monitoring. The most important features for this prediction were the duration and number of fixations on the tracking task.
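The details of the published pipeline are not reproduced here, but a minimal sketch of this kind of analysis, using scikit-learn’s RandomForestClassifier with hypothetical feature and file names, looks like the following:

```python
# Minimal sketch of the kind of classification described: each row is one
# 30-second window of eye-tracking features, labelled with the pilot's role.
# Feature names and the data file are illustrative placeholders, and
# scikit-learn's RandomForestClassifier stands in for the classifier used.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

windows = pd.read_csv("gaze_windows.csv")   # hypothetical table of windows
features = [
    "fixation_duration_tracking", "fixation_count_tracking",
    "task_switches", "coefficient_k",
    "stationary_entropy", "transition_entropy",
]
X = windows[features]
y = windows["role"]                          # "pilot_flying" / "pilot_monitoring"

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)    # simple cross-validation
print(f"mean accuracy: {scores.mean():.2f}")

# Feature importances indicate which gaze metrics drive the prediction.
clf.fit(X, y)
for name, imp in sorted(zip(features, clf.feature_importances_),
                        key=lambda t: t[1], reverse=True):
    print(f"{name}: {imp:.3f}")
```

In practice, evaluating such a model usually means holding out whole participants (for example, leave-one-participant-out cross-validation) so that windows from the same person do not appear in both the training and test sets.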
These results provide evidence that eye-tracking is a viable method for real-time user state diagnosis. An adaptive cockpit system could theoretically use this data to understand which pilot is doing what. If a system detects that the pilot flying is becoming overloaded, it could automatically reallocate tasks.
For example, an automated assistant might take over radio communications if it sees the pilot’s gaze becoming too fixated on flight controls. This would close the loop in the “perceive-select-act” cycle of adaptive automation. The system would perceive the workload, select a helpful action, and act to relieve the crew.
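A purely illustrative sketch of such a loop, with every name a hypothetical placeholder rather than anything described in the study:

```python
# Illustrative "perceive-select-act" adaptation loop. All function and
# object names are hypothetical placeholders, not part of the study.
def adaptive_assistance_step(gaze_window, role_classifier, overload_detector, assistant):
    role = role_classifier.predict([gaze_window])[0]   # perceive: who is doing what
    overloaded = overload_detector(gaze_window)        # perceive: is that pilot overloaded?
    action = None
    if role == "pilot_flying" and overloaded:
        action = "handle_radio_communications"         # select: a relieving intervention
    if action:
        assistant.execute(action)                      # act: offload the task
    return role, action
```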
There are several limitations to the current study that warrant consideration. The experiment utilized a low-fidelity desktop simulator rather than a full-motion cockpit. This setting may not fully replicate the high-pressure environment of actual flight operations.
The sample was reduced from the original 40 participants to the 28 analyzed because of technical issues with the recording equipment. A smaller sample can limit the generalizability of the statistical findings. Additionally, the participants were largely students rather than seasoned airline captains.
The use of wearable eye-tracking glasses introduced some noise into the data due to head movements. In a real cockpit, remote sensors integrated into the dashboard might be necessary for consistent tracking. The current analysis also relied on 30-second intervals, which might be too slow for some emergency situations.
Future research should aim to replicate these findings in high-fidelity simulators with expert pilots. It would be beneficial to investigate how these eye-tracking metrics change during specific emergency procedures. The researchers also suggest integrating other physiological measures, such as heart rate monitoring.
Combining multiple data streams could improve the reliability of the user state diagnosis. It is also necessary to develop algorithms capable of processing the data in near real-time. This would ensure that any adaptive assistance is triggered immediately when a pilot needs it.
The study provides a foundational step toward smarter aviation systems. Making the “invisible” cognitive states of pilots visible to the computer could significantly enhance safety. Understanding team dynamics at this level of detail opens new doors for human-machine teaming.
The study, “The Invisible Copilot? Assessing Task Engagement and Team Dynamics in a Virtual Flight Environment Using Eye-Tracking Metrics,” was authored by Sophie-Marie Stasch, Yannik Hilla, and Wolfgang Mack.