AI in Aviation Safety Management: Part 6 – Executive Oversight

Governing AI in Safety-Critical Systems

Over the past several weeks, we’ve explored AI in Aviation Safety Management from multiple angles—foundation, data and trust, accountability, human factors, and practical application.

Each part has pointed to the same conclusion:

AI is not a replacement for Safety Management Systems.
It is something that must operate within them.

And now, we arrive at the question that ultimately matters most:

Who governs how AI is used in safety-critical environments?

From Curiosity to Responsibility

Across the industry, conversations about AI are no longer theoretical.

Airport teams are exploring how it might support:

  • Reporting and data analysis

  • Wildlife management tracking

  • Winter operations planning

  • Fleet and equipment monitoring

  • Training and AVOP processes

  • Terminal and groundside operations

That curiosity is not only understandable—it’s necessary.

But curiosity without structure creates risk.

Not because AI itself is inherently unsafe, but because ungoverned change inside a safety-critical system always is.

The Real Risk: A Governance Gap

Throughout this series, we’ve emphasized that:

  • AI can produce insights—but not accountability

  • AI can assist decisions—but not own them

  • AI can accelerate processes—but not validate them

And yet, the most likely failure mode isn’t technical.

It’s organizational.

It’s when AI begins to enter operational processes:

  • without clear approval,

  • without defined boundaries,

  • and without executive awareness.

That’s not innovation.

That’s drift.

Accountability Doesn’t Move

Aviation regulation is clear in principle, even if it doesn’t yet speak directly to AI:

Accountability remains with the certificate holder—and ultimately, the Accountable Executive.

Not the system.
Not the vendor.
Not the algorithm.

If an AI-supported process contributes to a safety occurrence,
the organization—not the technology—will be held accountable.

Which means one thing:

AI adoption is not a technical decision. It is a governance decision.

The Executive Role: Set the Conditions

For executives and Accountable Executives, this doesn’t mean saying “no” to AI.

But it does mean ensuring that:

  • AI is introduced deliberately—not informally

  • Its use is visible—not buried within teams

  • It is evaluated like any other operational change

Most importantly:

There should be no surprises.

If AI is influencing how safety data is analyzed, how risks are assessed, or how decisions are made,
leadership must know—and must approve how and why.

A Practical Framework: The 5 Questions Every Executive Should Ask About AI

Before any AI capability is introduced into your operation—whether in SMS or elsewhere—there should be a clear, structured case for its use.

At minimum, leadership should expect answers to five key questions:

1. What problem are we trying to solve?

Is this addressing a defined operational or safety challenge—or simply exploring a new tool?

2. How will this improve safety or decision-making?

What is the expected benefit?
Better data? Faster analysis? Earlier risk identification?

If the benefit isn’t clear, neither is the justification.

3. How will human oversight be maintained?

Where are the checks and balances?
Who validates outputs before they influence decisions?

AI should support human judgment—not bypass it.

4. How are we protecting our data?

What data is being used?
Where is it going?
Who has access?

Your safety data is part of your organization’s risk profile—and must be treated accordingly.

5. How does this integrate with our existing SMS?

Does this align with your current processes—or sit outside them?

If your SMS isn’t already clear, consistent, and auditable,
adding AI will not fix that. It will expose it.

Start With What You Already Have

One of the most important takeaways from this entire series is simple:

AI will not strengthen a weak SMS.

If processes are unclear, inconsistently applied, or not auditable, AI will amplify those gaps, not resolve them.

Before introducing AI, organizations should ensure:

  • Processes are defined and followed

  • Roles and responsibilities are clear

  • Data is structured and reliable

  • Management review is active and meaningful

In other words:

Get the fundamentals right first.

A Measured Path Forward

AI has a place in aviation safety.

Used thoughtfully, it can:

  • Enhance visibility

  • Improve efficiency

  • Support better-informed decisions

But it must be introduced with the same discipline applied to any operational change.

That means:

  • Clear intent

  • Structured evaluation

  • Defined oversight

  • Ongoing validation

Not enthusiasm alone.

Final Thought: Lead It—Don’t Let It Happen

AI is already entering the conversation across aviation.

The question is not whether it will be used.

It’s whether it will be led.

For executives, the role is not to become technical experts.

It is to ensure:

  • the right questions are asked,

  • the right controls are in place,

  • and the organization remains accountable for how safety is managed.

Because in aviation,
how something is introduced matters just as much as what is introduced.

Call to Action

If your organization is beginning to explore how AI might support safety, operations, or decision-making, now is the time to ensure it’s approached with structure and oversight.

At Acclivix, we work with airport operators and aviation organizations to:

  • Strengthen Safety Management Systems

  • Define clear, auditable processes

  • Evaluate emerging tools—including AI—within an SMS framework

  • Ensure leadership has visibility and confidence in how safety is managed

Through our partnership with Wombat Safety Software, we also help organizations bring structure, visibility, and accountability to their safety processes—providing a strong foundation before layering in new capabilities.

If you’re starting this conversation internally, we’d be glad to support you.
