AI in Aviation Safety Management: Part 4 – Human Factors & Culture

The Automation Trap: AI, Complacency, and Safety Culture

About five and a half years ago, when I started Acclivix, the first course I developed was Introduction to SMS and Human Factors for Airports.

I created it for a simple reason: most of the training I had attended on human factors was built for air carrier operations. It was excellent training—but much of it didn’t translate well to the realities of airport environments.

Airport operations are different. The risks are different. The people performing the work are different.

But one thing is exactly the same.

Safety ultimately depends on human behaviour, human awareness, and human judgement.

That truth hasn’t changed with the arrival of artificial intelligence. In fact, AI may make it even more important.

As aviation organizations begin integrating AI-powered tools into Safety Management Systems, a new human factors challenge is quietly emerging—one that aviation has seen before.

The automation trap.

The Promise of Automation

Aviation has always embraced automation as a way to improve safety.

Over the decades, we have seen technologies introduced that dramatically reduced certain types of risk:

  • autopilot systems reducing pilot workload

  • terrain awareness and warning systems preventing controlled flight into terrain

  • collision avoidance systems protecting aircraft in crowded airspace

Automation has undeniably made aviation safer.

But aviation has also learned an important lesson.

Every new layer of automation introduces new human factors risks.

Over-reliance on automation.
Loss of situational awareness.
Skill degradation.
Automation complacency.

These risks do not emerge because automation is flawed. They emerge because human behaviour adapts to the presence of automation.

Artificial intelligence now represents the next stage of that evolution.

AI Enters the Safety Management System

Across aviation, AI-enabled tools are beginning to appear inside Safety Management Systems.

These tools can:

  • analyze large volumes of safety reports

  • identify emerging risk patterns

  • assist with trend analysis

  • generate safety dashboards and predictive indicators

For safety professionals and executives alike, this capability is understandably appealing. Aviation organizations generate large volumes of operational data, and AI systems can process it faster than traditional analysis methods.

In many ways, AI can strengthen an SMS by helping organizations see safety information more clearly.
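
To make that idea concrete, here is a minimal sketch of the kind of trend analysis such a tool might perform. Everything in it is an illustrative assumption rather than any real product's method: it simply counts hazard reports per category per month and flags categories whose recent volume is well above their own baseline.

```python
# Illustrative sketch only: a toy version of "identify emerging risk patterns".
# The categories, sample data, and thresholds are assumptions for this example,
# not a real SMS product's logic.

from collections import defaultdict
from statistics import mean

# Hypothetical sample data: (month, category) pairs drawn from safety reports.
reports = [
    ("2024-01", "wildlife"), ("2024-01", "FOD"),
    ("2024-02", "wildlife"),
    ("2024-03", "wildlife"), ("2024-03", "wildlife"),
    ("2024-03", "wildlife"), ("2024-03", "FOD"),
]

def monthly_counts(reports):
    """Count reports per category, per month."""
    counts = defaultdict(lambda: defaultdict(int))
    for month, category in reports:
        counts[category][month] += 1
    return counts

def flag_emerging(counts, recent_months=1, ratio=1.5):
    """Flag categories whose recent monthly average exceeds their
    baseline by `ratio`. Both parameters are arbitrary choices here."""
    flagged = []
    for category, per_month in counts.items():
        series = [per_month[m] for m in sorted(per_month)]  # chronological order
        if len(series) <= recent_months:
            continue  # not enough history to compare against
        baseline = mean(series[:-recent_months])
        recent = mean(series[-recent_months:])
        if baseline and recent / baseline >= ratio:
            flagged.append((category, baseline, recent))
    return flagged

for category, baseline, recent in flag_emerging(monthly_counts(reports)):
    print(f"{category}: baseline {baseline:.1f}/month, recent {recent:.1f}/month")
```

Even in a toy example like this, the report categories, the length of the baseline, and the threshold for "emerging" are all human judgments. The analysis can surface a pattern, but people still decide what the pattern means.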

But this is where a subtle cultural risk begins to appear.

The Automation Trap

The automation trap occurs when people begin to believe that because a system is monitoring safety, they no longer need to actively think about it.

In the context of aviation SMS, this can show up in several ways.

Safety teams may assume the system will detect emerging hazards automatically.

Leaders may rely heavily on dashboards instead of actively engaging with operational realities.

Employees may begin to believe that if something was truly unsafe, the system would flag it.

Over time, safety can quietly shift from being something people actively manage to something they assume the system is managing.

That shift is subtle—but it is dangerous.

Safety Management Systems were never designed to automate safety thinking. They were designed to structure how organizations think about safety.

Artificial intelligence does not change that principle.

When Technology Starts Shaping Culture

Artificial intelligence does more than automate tasks.

It can also influence how people perceive risk, responsibility, and accountability.

If AI tools are not implemented thoughtfully, they may unintentionally weaken several pillars of safety culture.

Reporting Culture

If employees believe systems are constantly monitoring operations, they may feel less personal responsibility to report hazards or concerns.

Human observation and reporting remain essential sources of safety information. AI cannot detect what it is never told.

Learning Culture

Organizations may begin relying on automated analysis instead of actively exploring safety lessons through discussion and operational engagement.

AI can surface patterns, but it cannot fully understand the operational context behind them.

Just Culture

If AI tools begin identifying “risk behaviours” or operational deviations, there is a danger that organizations may treat algorithm outputs as objective truth.

But algorithms do not understand intent, workload pressures, or operational constraints. These require human interpretation.

Informed Culture

Dashboards and summaries can provide useful insights, but they can also create a false sense of understanding if leaders rely solely on what the system presents.

True operational awareness requires conversation, observation, and curiosity.

Safety culture ultimately depends on the participation of people.

No technology can replace that.

The Autopilot Parallel

Aviation has faced a similar challenge before.

When autopilot systems became highly capable, the industry learned that pilots could not simply disengage mentally once automation was active.

Pilots still needed to:

  • monitor the system

  • question what it was doing

  • maintain situational awareness

  • be ready to intervene when necessary

Automation improved safety, but only when humans remained actively engaged.

Artificial intelligence in Safety Management Systems presents a similar challenge.

AI may detect patterns in safety data, but it cannot fully understand:

  • operational nuance

  • emerging organizational pressures

  • informal workarounds

  • cultural signals within a workforce

Those insights still come from people.

The Leadership Responsibility

Executives play a critical role in determining how technology influences safety culture.

If leaders communicate that AI systems will “manage safety,” organizations may begin to adopt a passive mindset.

But if leaders emphasize that AI tools help organizations ask better safety questions, technology can strengthen engagement instead of weakening it.

AI should support safety thinking—not replace it.

Modern safety platforms such as Wombat Safety Software can help organizations organize safety information, track corrective actions, and improve visibility across the Safety Management System. These tools make it easier to see how safety activities connect across an organization.

But they are still tools.

Their value depends entirely on how people interpret the information, challenge assumptions, and act on what the system reveals.

Safety Culture Still Comes First

A strong Safety Management System ultimately depends on culture.

Dr. James Reason famously described safety culture as consisting of several interdependent elements: a reporting culture, a just culture, a learning culture, and an informed culture.

All of these depend on active human participation.

Artificial intelligence can support these elements by helping organizations process information more effectively.

But it cannot replace the behaviours that sustain them.

Curiosity.
Professional skepticism.
Open reporting.
Leadership engagement.

Those remain human responsibilities.

The Real Opportunity

Artificial intelligence has the potential to become a powerful tool for aviation safety.

Used thoughtfully, it can help organizations detect patterns earlier, organize safety information more effectively, and support better decision-making.

But technology should never become a substitute for engagement.

The organizations that succeed will be those that use AI to strengthen safety conversations, not silence them.

Because safety has never depended on systems alone.

It has always depended on people who remain attentive, curious, and willing to question what they see.

A Question for Safety Leaders

As artificial intelligence becomes part of aviation Safety Management Systems, leaders may want to ask an important question:

Are we using technology to support safety thinking, or are we slowly allowing it to replace that thinking?

The answer to that question will shape how safety culture evolves in the years ahead.

If your organization is exploring how AI and modern safety platforms fit into your Safety Management System—or if you want to ensure technology strengthens your safety culture rather than weakening it—we would welcome the opportunity to continue the conversation.

Reach out to Acclivix to discuss how thoughtful implementation of technology, strong safety leadership, and practical SMS design can work together to support safer aviation operations.
