AI in Aviation Safety Management: Part 5 – Practical Use Cases
Where AI Can Help Today—And Where It Shouldn’t (Yet)
Over the past several weeks, we’ve explored the foundations of AI in aviation safety management—what it is, how it should be governed, how data shapes its effectiveness, and the human factors risks that come with it.
By this point, most safety leaders are no longer asking “What is AI?”
They’re asking something far more practical:
“What can we actually use this for—today?”
The answer is both encouraging and cautionary.
You don’t need a fully integrated system.
You don’t need to upload your entire SMS database.
And you don’t need to overhaul how your organization operates.
But you do need to understand where AI can genuinely help—and where it has no place (yet).
Start Here: AI as an Assistant, Not a Decision-Maker
Before getting into specific use cases, it’s important to anchor one principle:
AI should support your SMS—not run it.
AI is effective when it:
Accelerates routine work
Improves consistency
Prompts better thinking
AI is risky when it:
Replaces judgment
Makes decisions
Operates without oversight
If your Safety Management System is the structure, AI is simply a tool within it—not the system itself.
Where AI Can Help Today (Low Risk, High Value)
Most airports already have the core elements of an SMS in place:
Reporting systems (paper, spreadsheets, or platforms like Vortex or Wombat)
Investigation processes
Hazard registers
Corrective action tracking
The challenge is rarely the absence of a system.
The challenge is capacity, consistency, and insight.
That’s where AI can begin to add real value.
1. SMS Guidance and On-Demand Support
Even experienced SMS managers don’t have every regulation, procedure, or best practice at their fingertips.
AI can act as an on-demand reference tool:
Structuring investigations
Explaining risk assessment approaches
Providing examples of hazard categories
Offering reminders of what “good” looks like
Used this way, AI helps reduce reliance on memory and supports less experienced team members.
But it’s important to be clear:
AI can guide—but it cannot validate compliance.
That’s still the role of your organization—and where experienced partners like Acclivix continue to add value.
2. Investigation Support (Not Investigation Replacement)
Investigations are one of the most resource-intensive parts of an SMS—and one of the most inconsistent.
AI can help:
Draft interview questions
Organize investigator notes
Suggest potential contributing factors
Turn rough notes into structured reports
It can also challenge thinking:
“What factors might I be overlooking?”
This is where AI shines—as a thinking partner.
But it must not:
Determine root cause
Validate evidence
Replace investigator judgment
A poorly conducted investigation, even if well-written, is still a poor investigation.
3. Hazard Identification and Risk Thinking
Hazard registers often become static over time.
Teams focus on what has already happened, rather than what could happen.
AI can help expand that thinking:
Brainstorm hazards in specific operational contexts
Explore “what-if” scenarios
Identify precursors to known risks
For example:
Winter operations
Low visibility conditions
Airside vehicle movements
This supports a more proactive SMS, rather than a purely reactive one.
4. Building Better Corrective Action Plans
Corrective actions are frequently one of the weakest parts of an SMS:
Too generic
Not tied to root causes
Difficult to measure for effectiveness
AI can assist by:
Structuring actions (immediate, short-term, long-term)
Suggesting ways to monitor effectiveness
Linking actions back to underlying issues
For example:
“Based on this root cause, what would strong corrective actions look like?”
The result is not a final answer—but a better starting point.
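As a rough illustration of the immediate/short-term/long-term structure described above, a corrective action plan can be thought of as a set of records where every action stays linked to a root cause and carries its own effectiveness measure. This is a minimal sketch; the field names, horizon labels, and sample actions are illustrative assumptions, not a prescribed SMS format:

```python
from dataclasses import dataclass

# Horizon labels are illustrative assumptions, not a regulatory taxonomy.
HORIZONS = ("immediate", "short-term", "long-term")

@dataclass
class CorrectiveAction:
    description: str
    horizon: str                # one of HORIZONS
    root_cause: str             # the underlying issue this action addresses
    effectiveness_measure: str  # how you will know the action worked

    def __post_init__(self):
        # Reject actions that are not tied to a defined time horizon.
        if self.horizon not in HORIZONS:
            raise ValueError(f"horizon must be one of {HORIZONS}")

# A hypothetical plan for a pushback occurrence, for illustration only.
plan = [
    CorrectiveAction(
        description="Brief all ground crews on pushback phraseology",
        horizon="immediate",
        root_cause="Miscommunication between ground and flight crew",
        effectiveness_measure="No phraseology deviations in next 30 audits",
    ),
    CorrectiveAction(
        description="Revise pushback SOP to require readback confirmation",
        horizon="short-term",
        root_cause="Miscommunication between ground and flight crew",
        effectiveness_measure="Revised SOP approved and trained within 90 days",
    ),
]

for action in plan:
    print(f"[{action.horizon}] {action.description}")
```

The point of the structure is the discipline it forces: an action with no root cause link or no effectiveness measure is incomplete by construction, which is exactly the weakness this section describes.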
5. Documentation and Communication
SMS outputs take time:
Investigation reports
Safety bulletins
Internal briefings
Toolbox talks
AI can significantly reduce that burden by:
Drafting content
Simplifying technical language
Translating materials (particularly valuable in Canadian operations)
This allows safety leaders to spend less time writing—and more time engaging with their teams.
The Big Question: Do We Have to Share Our Data?
For many organizations, this is where interest in AI stops.
And understandably so.
Safety data is sensitive.
Operational data is sensitive.
And trust—internally and externally—is critical.
The good news:
You do not need to upload your entire SMS database to benefit from AI.
In fact, you shouldn’t.
Practical approaches include:
Using de-identified summaries
Describing scenarios rather than sharing raw reports
Removing names, identifiers, and sensitive details
For example, instead of uploading a full occurrence report:
“An aircraft was pushed back and struck ground equipment due to miscommunication between ground crew and flight crew…”
This allows you to gain insight without exposing your data.
The key is not integration—it’s intentional use.
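The de-identification steps above can be sketched in a few lines of code. This is a minimal illustration only: the patterns assume some common identifier formats (Canadian aircraft registrations, flight numbers, gate labels, UTC times) and should be adapted to your own reporting conventions rather than treated as a complete redaction tool:

```python
import re

# Illustrative patterns only; the specific formats assumed here
# (Canadian registrations, flight numbers, gate IDs, Zulu times)
# should be tailored to your own occurrence-report conventions.
PATTERNS = {
    r"\bC-[A-Z]{4}\b": "[AIRCRAFT]",     # Canadian registration, e.g. C-GABC
    r"\b[A-Z]{2}\d{2,4}\b": "[FLIGHT]",  # flight number, e.g. AC1234
    r"\bGate\s+[A-Z]?\d+\b": "[GATE]",   # gate identifier, e.g. Gate B12
    r"\b\d{4}Z\b": "[TIME]",             # UTC timestamp, e.g. 1432Z
}

def deidentify(text: str) -> str:
    """Replace known identifier patterns with neutral placeholders."""
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

report = ("At 1432Z, AC1234 (C-GABC) was pushed back from Gate B12 "
          "and struck ground equipment.")
print(deidentify(report))
```

Pattern-based redaction is a starting point, not a guarantee: names, free-text references, and unusual identifiers still need a human review before anything leaves your organization.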
Where AI Should Not Be Used (Yet)
Just because AI can assist in many areas does not mean it belongs everywhere.
There are clear boundaries.
AI should not be used for:
Final risk acceptance decisions
Regulatory compliance sign-off
Sole-source investigations
Real-time operational decision-making
Why?
Because these require:
Accountability
Context
Experience
Judgment
AI has none of these.
Where Acclivix Fits
AI can help you do more with your SMS.
Acclivix helps ensure you’re doing the right things, the right way.
That includes:
Designing effective, auditable SMS processes
Strengthening investigation and risk methodologies
Validating outputs against regulatory expectations
Helping organizations integrate tools—like AI—responsibly
AI can enhance your system.
But it cannot build, validate, or sustain it.
A Practical Way Forward
You don’t need a formal AI strategy to begin.
Start small:
Use AI to support one part of your SMS
Keep a human in the loop
Be intentional about how and when it’s used
Because the real question isn’t:
“Should we be using AI?”
It’s:
“Are we using it deliberately—or not at all?”
And in a system built on continuous improvement, doing nothing may be the bigger risk.