Human Factors for Maintainers in a Digital World

The aviation maintenance industry stands at a pivotal juncture where traditional hands-on expertise meets ever-evolving digital technology. As aircraft systems become increasingly sophisticated and maintenance operations integrate artificial intelligence, predictive analytics, and automated monitoring systems, the human element remains both the industry’s most important asset and its ultimate decision maker.

Human factors in aviation maintenance have evolved far beyond the foundational concerns of tool management and procedural compliance. Today’s maintenance technicians must navigate a hybrid cognitive landscape where physical dexterity intersects with data interpretation, where experience-based intuition must coexist with algorithm-generated recommendations, and where fatigue management requires both technological sophistication and fundamental human awareness.

The integration of fatigue risk management systems (FRMS) within safety management systems (SMS) represents a paradigm shift towards a predictive safety culture. Simultaneously, the emergence of AI-powered predictive maintenance systems challenges technicians to become interpreters of probabilistic data while maintaining their critical role as decision makers in complex, high-stakes environments.

This convergence of human capability and technological advancement creates new opportunities for safety and efficiency, yet it also introduces cognitive challenges that can have profound implications for aviation safety. Understanding how cognitive biases influence the adoption of AI systems, how shifting workloads affect mental processing, and how automated tools can either support or undermine human judgment is becoming essential for maintenance organizations.

In this feature, we examine these critical intersections, drawing insights from industry experts to illustrate both the promise and the pitfalls of current technological evolutions.

FRMS, SMS, and Technology-Based Fatigue Monitoring

Automated FRMSs are designed to integrate with an organization’s SMS, according to Michael Parrish, president of Elliott Aviation. “They can provide information that helps make better planning and staffing decisions within a safety framework. They use data and science to help predict and manage fatigue risk, but they do not replace human judgment,” he says. “Fatigue management is ultimately aimed at ensuring the safety and effectiveness of teams. If automated tools can help achieve this goal without adding unnecessary burden, they are worth exploring as part of an overall safety strategy.”

Michael Parrish, Elliott Aviation

Dr. Antonio Cortés of GMR Human Performance affirms that an FRMS seeks to prevent one of the most basic causes of human error: the significant impairment of human performance once a certain level of fatigue has been exceeded. “We know that fatigue directly impacts many of our personal actions, such as by increasing the risk of distraction or of cognitive fixation, sometimes referred to as channelled attention or tunnelling. It is no surprise, then, that FRMS are promoted by EASA, FAA, and ICAO, among other agencies. A good FRMS will address the main causes of fatigue, which are sleep loss, circadian misalignment and workload. By integrating fatigue monitoring as a key component of hazard identification and risk assessment into the SMS’ systematic safety processes, better performance is achieved,” he says. “It is like enhancing our efforts. Furthermore, such anti-fatigue interventions should not be performed only at the technician or inspector level. They require teamwork, close collaboration, and a coordinated approach, much of which should already be present in a good SMS. By integrating this commitment into an SMS, one can also benefit from the established identity protection measures that help foster a Just Culture and encourage voluntary reporting of fatigue-related safety issues.”

Dr. Antonio Cortés, GMR Human Performance

An FRMS integrated into existing SMS hazard reporting and corrective action processes, rather than being treated as a standalone initiative, fosters overall safety improvements by learning from fatigue alerts, according to Dr. Cortés. “A technician once told me about his wife, a software engineer, who was working herself to the bone and found herself with five days of extra work due to a single typo. In her exhaustion, she had typed the letter ‘O’ instead of the number ‘0.’ This is exactly the kind of error an FRMS should help us avoid: small oversights that become major headaches when fatigue is left unchecked,” he points out. “How would an FRMS handle such a situation if the error were self-reported by the individual but the system were managed autonomously, outside of an SMS? Would it find a Just Culture solution?”

More sophisticated FRMSs can leverage machine learning algorithms to monitor work cycles, circadian rhythms, weather impacts, and even data from wearable sensors to estimate fatigue levels, observes Dr. Cortés. “Such a system can even predict a high fatigue window, recommend postponing non-critical inspections, and assign a second technician in such cases. This growing sophistication and use of technology in FRMSs is also occurring in SMS,” he says. “However, low-tech approaches guided by human factors principles should not be overlooked. These approaches can include developing policies for work-rest scheduling practices and promoting healthy habits that become almost automatic, such as listing and verifying steps on a checklist to avoid overlooking an indication or condition, maintaining hydration, double-checking work, or learning to detect mutual fatigue symptoms using a ‘buddy’ system.”
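
To make this concrete, below is a minimal Python sketch of the kind of rule-based fatigue scoring and mitigation tiering such a system might perform. The weights, thresholds, and recommended actions are illustrative assumptions, not a validated biomathematical model, and a production FRMS would draw on far richer inputs such as wearable sensor data.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ShiftState:
    """Snapshot of a technician's state; all fields are illustrative inputs."""
    hours_awake: float        # self-reported or wearable-derived
    hours_on_shift: float
    shift_start: datetime

def fatigue_risk_score(state: ShiftState) -> float:
    """Toy heuristic: weight hours awake, time on task, and the circadian
    low (roughly 02:00-06:00) into a 0-1 risk score. A real FRMS would use
    a validated biomathematical model, not these assumed weights."""
    score = min(state.hours_awake / 24.0, 1.0) * 0.5
    score += min(state.hours_on_shift / 12.0, 1.0) * 0.3
    current_hour = (state.shift_start.hour + int(state.hours_on_shift)) % 24
    if 2 <= current_hour <= 6:   # working through the circadian low
        score += 0.2
    return min(score, 1.0)

def recommended_mitigations(score: float) -> list[str]:
    """Map the score to the kinds of actions Dr. Cortés describes."""
    if score >= 0.7:
        return ["postpone non-critical inspections",
                "assign a second technician for verification"]
    if score >= 0.4:
        return ["schedule a rest break", "increase double-checks"]
    return ["no additional mitigation required"]

if __name__ == "__main__":
    state = ShiftState(hours_awake=18, hours_on_shift=9,
                       shift_start=datetime(2024, 5, 1, 21, 0))
    risk = fatigue_risk_score(state)
    print(f"risk={risk:.2f}:", recommended_mitigations(risk))
```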

Jonathan Huff, TeamViewer

Automated fatigue risk management belongs inside the SMS as an enabler, not as a parallel system, according to Jonathan Huff, senior solutions engineer at TeamViewer. “When paired with TeamViewer Frontline’s augmented reality (AR)-enabled workflows, fatigue-related signals become contextual, actionable inputs that strengthen the SMS’ core pillars: hazard identification, risk assessment, mitigation, assurance and promotion of a just safety culture,” he says. “When fatigue monitoring is implemented through an AR-native platform like TeamViewer Frontline, it becomes a practical safety layer: earlier detection, clearer mitigations, and objective records that support continuous improvement — provided design respects human factors, preserves worker control and avoids creating new cognitive or administrative burdens.”

TeamViewer says it connects people and technology through AR-powered workflows that transform maintenance, training and aviation operations into a more efficient digital workplace. TeamViewer image.

Human factors principles should guide the implementation of technology-based fatigue monitoring to avoid creating additional stressors, including prioritizing situational fit and minimal interruption, affirms Huff. “AR prompts and fatigue alerts should be designed so they appear only when relevant to the task phase, and present concise, actionable guidance rather than lengthy diagnostics. Head-mounted displays and voice control should be used to keep technicians focused on the physical task and reduce the cognitive cost of shifting attention,” he says. “Technicians should be allowed to acknowledge an alert, request a remote expert, or follow a defined mitigative workflow rather than enforcing a one-size-fits-all lockout. Using TeamViewer Frontline’s guided workflows in training, workers experience the fatigue-mitigation workflows in low-risk settings before relying on them in service. Lastly, routine follow-ups and reporting should be automated to avoid increasing administrative workload, and TeamViewer Frontline dashboards should be used to make organizational risk visible without manual collation.”
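
The phase-aware gating Huff describes can be sketched in a few lines. The example below is a hypothetical illustration, not TeamViewer Frontline’s actual API; the task phases, interruptibility policy, and response options are assumptions chosen to mirror his description.

```python
from enum import Enum

class TaskPhase(Enum):
    TORQUE_CRITICAL = "torque-critical step"
    INSPECTION = "inspection"
    DOCUMENTATION = "documentation"

# Assumed policy: alerts may only surface during phases where an
# interruption will not pull attention from safety-critical work.
INTERRUPTIBLE_PHASES = {TaskPhase.INSPECTION, TaskPhase.DOCUMENTATION}

# The response options Huff names: acknowledge, escalate to a remote
# expert, or follow a defined mitigative workflow (no blanket lockout).
RESPONSES = ("acknowledge", "request_remote_expert", "start_mitigation_workflow")

def deliver_alert(phase: TaskPhase, message: str) -> str | None:
    """Gate the fatigue alert on task phase; defer it during
    non-interruptible work rather than interrupting mid-task."""
    if phase not in INTERRUPTIBLE_PHASES:
        return None  # queue the alert for the next natural break
    # Keep the prompt concise and actionable, not a lengthy diagnostic.
    return f"{message} Options: {', '.join(RESPONSES)}"

print(deliver_alert(TaskPhase.TORQUE_CRITICAL, "Fatigue risk elevated."))  # None
print(deliver_alert(TaskPhase.DOCUMENTATION, "Fatigue risk elevated."))
```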

Human factors initiatives not directly related to fatigue prevention also help prevent fatigue-related errors, such as automated toolboxes that alert technicians to missing tools at the end of a shift, according to Dr. Cortés. “The more one invests in raising awareness of how it is not possible to simply ‘tackle fatigue’ with force, and the more one learns about maintenance resource management (MRM) procedures and habits to fatigue-proof tasks, the fewer unwanted maintenance events one will experience,” he says. “The current SMS framework can be leveraged for FRMS purposes by defining policies, identifying and managing fatigue-related risks, measuring these factors as part of safety assurance and promoting awareness that fatigue can be lethal.”
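
The toolbox check Dr. Cortés mentions reduces to a simple reconciliation of tools issued against tools returned at shift end. The sketch below is a minimal illustration; the tool identifiers are invented for the example.

```python
def end_of_shift_tool_check(issued: set[str], returned: set[str]) -> list[str]:
    """Compare tools issued during the shift against tools returned;
    anything outstanding triggers an alert before aircraft release."""
    missing = sorted(issued - returned)
    if missing:
        print(f"ALERT: {len(missing)} tool(s) unaccounted for: {missing}")
    return missing

# A torque wrench left in the work area is flagged automatically,
# catching exactly the kind of fatigue-driven oversight described above.
end_of_shift_tool_check(
    issued={"torque wrench T-104", "mirror M-22", "flashlight F-3"},
    returned={"mirror M-22", "flashlight F-3"},
)
```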

Changing Cognitive Workload and Digital System Interaction

Parrish points out that with the rise of technology in maintenance processes, the cognitive workload for technicians is changing. “Traditional maintenance relied largely on manual skills and experience, while today’s work often involves interacting with digital systems, troubleshooting software and interpreting data, in addition to hands-on tasks. This requires technicians to span different types of thinking. They must move from physical, task-based work to analyzing information and making decisions based on technology reports. To adapt, we focus on training, soft skills, and providing teams with the tools they need to gain confidence in using new systems,” he says. “The goal is to ensure that technology supports our technicians rather than creating unnecessary complexity. By providing clear procedures, ongoing training, and access to resources, we help our teams manage these transitions effectively while maintaining the high standards of safety and quality our customers expect.”

Dr. Phillip Jasper, J.S. Held

According to Dr. Phillip Jasper, principal on the Human Factors and User Research team at J.S. Held, the current transformation in cognitive workload introduces dual task demands, requiring technicians to seamlessly transition from hands-on, mechanical work to interacting with digital interfaces. “This cognitive switching can lead to increased task fragmentation, increased mental workload, and workflow disruptions. That said, not all impacts are cause for concern, as well-designed automated systems can significantly increase efficiency and ease technicians’ workload by taking responsibility for routine tasks, allowing them to focus exclusively on those that truly require their attention. Conversely, poorly designed systems often require constant input or supervision, further increasing technicians’ workload rather than alleviating it,” he says. “Adapting to these changes requires thoughtful interface design; for example, minimizing cognitive load through intuitive layouts, as well as training that reflects real-world task transitions. Simulation-based training or hands-on scenario exercises that include both digital and manual transitions can help crews develop the mental models needed to operate confidently in hybrid workflows.”

AR-assisted maintenance does not eliminate cognitive work, but it redistributes and refines it, according to Huff. “A distinguishing feature of TeamViewer Frontline is that it shifts routine information handling away from fragile human memory while preserving the technician’s role as the critical decision maker. With thoughtful training, interface design, and governance, teams can reduce unnecessary cognitive switching, improve throughput, and raise safety and quality simultaneously,” he says.

AI-Powered Predictive Maintenance

AI-based predictive maintenance has the potential to improve efficiency by predicting and reporting problems before they occur, Dr. Jasper affirms. “However, these systems shift the technician’s role from problem solver to interpreter of probabilistic data, a cognitively different task. One of the primary human factors challenges in this context is understanding uncertainty,” he says. “AI systems often provide probabilities or confidence levels, which technicians must translate into actionable decisions. Another challenge is overconfidence and underconfidence: if AI predictions are accepted without scrutiny, critical issues may go undetected; conversely, if they are met with excessive scepticism, valid warnings may be ignored. Finally, alert fatigue, a well-known challenge that human factors scientists have been discussing for decades, can desensitize users to frequent and ineffective alerts, similar to the problems observed in cockpit warning systems.”
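
One common guard against the alert fatigue Dr. Jasper mentions is to translate raw model outputs into a small number of actionable tiers and to suppress low-confidence results. The sketch below illustrates the idea; the thresholds and tier labels are assumptions that would need operational validation.

```python
def triage_prediction(failure_prob: float, model_confidence: float) -> str:
    """Translate a probabilistic AI output into one of a few discrete,
    actionable tiers instead of streaming every raw score to the floor."""
    if model_confidence < 0.5:
        # Low-confidence outputs are logged for review, not alerted on,
        # which keeps frequent weak signals from desensitizing users.
        return "log-only: confidence too low for an actionable alert"
    if failure_prob >= 0.8:
        return "inspect before next flight"
    if failure_prob >= 0.4:
        return "schedule inspection at next maintenance opportunity"
    return "continue monitoring"

for prob, conf in [(0.9, 0.9), (0.5, 0.8), (0.9, 0.3)]:
    print(f"p={prob}, conf={conf} -> {triage_prediction(prob, conf)}")
```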

AI-powered predictive maintenance systems reduce the mental burden on users by automating data analysis, detecting faults early, simplifying interfaces, and offloading routine decisions, affirms Huff. “However, risks include overreliance on AI, diminished user skills, reduced trust under cognitive overload, and increased mental strain from poorly designed interfaces,” he says.

Dr. Cortés believes that too often, when referring to human factors, there is a tendency to discuss the obvious, complex topics, like distractions and communication breakdowns, without realizing that there are many smaller, subtle effects that impact human performance in unexpected but significant ways. “For example, the thinking biases all humans have influence the adoption and use of AI-based predictive maintenance systems for decision making. Biases are sometimes difficult to understand, but they can influence human thinking in ways that silently have a significant impact, especially when one’s biases begin to reinforce themselves,” he says. “Many people find it flashy to talk about artificial intelligence, the incredible practical insights AI can generate in maintenance, or how it helps anticipate problems while simultaneously improving safety, reliability, and operational efficiency. But AI proponents themselves often shy away from discussing data, as such conversations lack the brilliance of AI.”

If the goal is to have reliable AI, whether for predictive maintenance or other processes, there must first be a discussion of data quality and completeness, affirms Dr. Cortés. “For AI-based systems, data is essentially the fuel of the system. Poor data quality or incomplete data can produce inaccurate AI output. This explains the growing, and rightful, emphasis on the ‘certified data’ that fuels AI systems. Predictive excellence depends on data quality,” he says. “There is a tendency to overlook the quality and completeness of data when talking about AI. Humans are naturally drawn to new and exciting ideas, a tendency some call ‘novelty bias’. There is a predisposition to be captivated by the new and shiny, to the detriment of familiar fundamentals. Furthermore, humans fall prey to the availability heuristic, judging importance based on what comes to mind first. Because AI success stories are everywhere, one can overestimate the true effectiveness of AI and pay less attention to hidden factors. Humans may also fall victim to an overconfidence bias, which leads them to overestimate the reliability of data and AI systems.”
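
Dr. Cortés’ point about data as fuel can be made concrete with simple quality gates applied before any record reaches a model: completeness, plausibility, and validity checks. The sketch below is illustrative; the field names and limits are hypothetical and would come from actual sensor specifications.

```python
import math

def validate_record(record: dict) -> list[str]:
    """Run basic data-quality gates on a single maintenance record and
    return a list of problems; an empty list means the record passes."""
    problems = []
    # Completeness: every expected field must be present.
    for field in ("tail_number", "egt_celsius", "vibration_ips"):
        if record.get(field) is None:
            problems.append(f"missing field: {field}")
    # Plausibility: readings must fall within physically sensible limits.
    egt = record.get("egt_celsius")
    if isinstance(egt, (int, float)) and not (0 <= egt <= 1200):
        problems.append(f"EGT out of plausible range: {egt}")
    # Validity: reject non-numeric artifacts such as NaN.
    vib = record.get("vibration_ips")
    if isinstance(vib, float) and math.isnan(vib):
        problems.append("vibration reading is NaN")
    return problems

print(validate_record({"tail_number": "N123AB", "egt_celsius": 5400,
                       "vibration_ips": float("nan")}))
```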

To address current challenges, AI system design should promote transparency, while maintenance organizations may consider investing in training that teaches not only how to use AI but also how to think in concert with it, according to Dr. Jasper. “This builds appropriate trust and preserves the technician’s critical thinking role. However, caution is needed, as AI is not yet sufficiently advanced to serve as a completely reliable tool for detecting fatigue or other cognitive states,” he says. “More research is needed before we can accurately measure the complex processes of the human brain, let alone rely on AI to interpret that information or generate recommendations based solely on its inputs.”

Cognitive Bias and Over-Reliance Issues

When working with AI-generated recommendations, one of the challenges is the potential risk of cognitive bias, observes Parrish. “Technicians may become overly reliant on the system, assuming its recommendations are always correct, or they may underestimate valuable information if it conflicts with their own experience. Confirmation bias and automation bias are common examples that can influence the decision-making process. Balancing human expertise with system recommendations requires a structured approach,” he says. “Technicians should be trained to critically evaluate data and use AI insights as one input among many. Clear procedures, cross-checks, and open communication help ensure that human judgment remains central, especially when recommendations conflict with practical experience. The key is to view technology as a tool to enhance the decision-making process, not replace it, so that safety and quality remain the top priorities.”

Huff describes other cognitive biases that may be observed when working with AI systems. “The anchoring bias occurs when AI recommendations ‘anchor’ a user’s thinking, making it harder to consider alternative options. The authority bias is when users treat AI systems as authoritative, leading to blind trust in outputs. The framing effect is when the way AI presents information influences decisions, even if the underlying data is the same,” he says. “By fostering collaboration rather than competition between AI and human expertise, organizations can unlock the full potential of predictive maintenance while minimizing errors and maximizing trust. AI excels at pattern recognition and data crunching, but it lacks contextual intelligence, i.e., the ability to understand why a machine might behave differently in a specific environment or under unusual conditions. Human experts bring intuition, experience and adaptability that AI simply cannot replicate.”

According to Dr. Jasper, organizations should design collaboratively. “Interfaces should present AI data in a way that supports human reasoning; for example, highlighting the reason why a particular prediction was made. Organizations may also consider establishing escalation protocols so that, when AI recommendations conflict with expert judgment, there is a clear and structured way to resolve the discrepancies without fear of repercussions,” he says. “Ultimately, the goal is to create human-AI teams, where each supports the other’s strengths, so that AI provides speed and scalability, while humans contribute context, experience and judgment.”
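
An escalation protocol of this kind can be as simple as a structured discrepancy record that routes disagreements to a defined authority while preserving the technician’s rationale. The sketch below is one possible shape; all field names and the review workflow are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Discrepancy:
    """Record created whenever an AI recommendation and a technician's
    judgment disagree; fields are illustrative, not a standard schema."""
    component: str
    ai_recommendation: str
    technician_assessment: str
    rationale: str                       # why the technician disagrees
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    resolution: str | None = None

def escalate(d: Discrepancy) -> Discrepancy:
    """Route the disagreement to a defined authority (e.g., a lead
    inspector) instead of letting either party silently 'win'; the
    recorded rationale preserves the reasoning for later learning."""
    print(f"Escalating {d.component}: AI says '{d.ai_recommendation}', "
          f"technician says '{d.technician_assessment}' ({d.rationale})")
    d.resolution = "pending lead inspector review"
    return d

escalate(Discrepancy(
    component="hydraulic pump S/N 4471",
    ai_recommendation="remove and replace within 10 flight hours",
    technician_assessment="no replacement needed",
    rationale="vibration signature matches a known benign installation effect",
))
```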

Summing Up

The future of aviation maintenance lies not in choosing between human expertise and technological capability, but in their optimal integration. The most significant advances in maintenance safety and efficiency emerge when sophisticated systems enhance rather than replace human judgment, when fatigue management combines scientific rigor with practical awareness, and when cognitive biases are acknowledged and addressed rather than ignored.

The implementation of comprehensive FRMS within existing SMS frameworks demonstrates that effective safety management requires both systematic processes and cultural commitment. Technology can predict fatigue windows and recommend staffing adjustments, but the cultivation of Just Culture principles and voluntary reporting mechanisms depends fundamentally on human leadership and organizational values. The most advanced monitoring systems remain ineffective without the human factors foundation that encourages open communication about safety concerns.

Similarly, the promise of AI-powered predictive maintenance systems will only be realized when organizations invest equally in data quality and the training of human operators. The cognitive shift from problem solver to data interpreter represents a fundamental transformation in the maintenance technician’s role, one that requires thoughtful interface design, comprehensive training programs and the development of new mental models for hybrid workflows. The recognition that data quality is the cornerstone of predictive excellence underscores that technological advancement must be grounded in meticulous attention to fundamentals.

Perhaps most critically, the challenge of cognitive biases, from automation bias to confirmation bias, reveals that human factors considerations must evolve alongside technological capabilities. The goal is not to eliminate human judgment but to enhance it through structured verification processes, collaborative interfaces and escalation protocols that honor both algorithmic insights and experiential wisdom.

Moving forward, maintenance organizations that embrace this integrated approach will find themselves better positioned to navigate the increasing complexity of modern systems. By fostering human-AI teams where each element supports the other’s strengths, these organizations can achieve the dual objectives of enhanced safety and efficiency while maintaining the human-centred focus that has always been the main asset of aviation safety.

The path ahead demands investment in the more challenging but ultimately more rewarding work of optimizing human-technology partnerships. In this endeavor, the maintenance technician remains what they have always been: a guardian of flight safety, now equipped with predictive tools to fulfill this critical role.