AI in Aviation Safety Management: Part 3 – Accountability & Regulation
When AI Is Wrong: Accountability, Auditability, and SMS Compliance
Artificial intelligence is increasingly entering conversations about aviation safety management.
It drafts reports.
It summarizes hazard data.
It analyzes trends.
It suggests risk controls.
And often, it does so confidently.
But confidence is not compliance.
As this series has explored, AI introduces opportunity - and information risk. In Part 3, we turn to a question that executives cannot afford to ignore:
When AI is wrong, who is accountable?
I Don’t Audit Software
As an auditor, I have never written a finding against software.
I have never asked an airport why its system failed as though the system were the responsible party.
Instead, I ask:
Why does your Safety Management Manual (SMM) not reflect regulatory requirements?
Why are you not following the process written in your SMM?
Why is the documented procedure not being carried out?
Who is responsible for this duty?
Regulators do not regulate tools.
They regulate certificate holders.
If a spreadsheet contains an error, the spreadsheet is not held accountable.
If a software platform miscalculates a metric, the platform is not cited.
The finding is written against the organization - and ultimately against the person assigned responsibility.
AI is simply the newest tool in the toolbox.
It does not change where accountability sits.
The Regulatory Reality: Duties Belong to Persons
In aviation SMS frameworks, duties are assigned to persons.
Quality Assurance functions must be fulfilled by persons.
Accountable Executives are persons.
Safety Managers are persons.
Certificate holders are persons or organizations represented by persons.
An AI system:
Cannot hold a certificate.
Cannot be designated as Accountable Executive.
Cannot fulfill a regulatory duty.
Cannot appear in an enforcement action.
Cannot sign a corrective action plan.
If something goes wrong, no regulator will interview your algorithm.
They will speak to the accountable person.
Transport Canada may not yet have issued a position paper on AI in SMS, but it does not need to for this principle to apply. The accountability framework already exists. AI operates within it - not outside it.
When AI Is Wrong
AI does not “know” regulations. It predicts likely text based on patterns in training data and user input.
That means:
It can provide inaccurate regulatory references.
It can generate confident but incorrect analysis.
It can fill gaps with plausible-sounding information.
It can misinterpret context.
It reflects the quality of input and configuration.
I have personally encountered AI-generated references that were incorrect - even when my input was clear. That does not make AI dangerous. It makes it a tool that requires review.
When I deliver a course, a report, or a regulatory analysis, the responsibility is mine. If there is an error, it reflects on me - not on the software I used.
Even Microsoft Word will not flag a mistake when the wrong word is itself spelled correctly.
The signature belongs to a person.
That principle does not change because the drafting tool is more advanced.
A Flight Simulator Lesson
Let me share a recent experience.
After several years away from flight simulation, I returned - this time with a new computer powerful enough to run the latest version of Flight Simulator 2024. I was excited to set up a multi-screen environment and re-immerse myself.
Everything appeared configured correctly.
Yet I could not interact with the instruments.
The mouse settings were correct. I had selected the proper control mode. Still, nothing worked.
It would have been easy to blame the software.
But the issue was mine.
In my rush to configure the visuals and expand to a second screen, I had misconfigured something in the setup process.
The software was functioning exactly as configured.
I was the variable.
AI in aviation safety management works similarly.
If an AI system produces inaccurate analysis, we must ask:
Was it properly configured?
Was the input appropriate?
Was the output reviewed?
Do we understand its limitations?
Do we have checks and balances in place?
And most importantly:
If it were wrong, would we know?
Technology can only support safety management if leadership understands how it works - and how it fails.
Auditability and SMS Compliance
Executive oversight requires more than enthusiasm for innovation.
It requires governance.
If AI is used in any part of your SMS - drafting reports, analyzing data, supporting trend analysis - consider:
Is AI-generated output reviewed by a qualified person?
Is that review documented?
Is AI use described in your Safety Management Manual, if applicable?
Does your Quality Assurance program include oversight of AI-supported processes?
Can you demonstrate traceability of decisions?
Can you defend the outcome during an audit?
If you cannot audit it, you should not rely on it for compliance-critical functions.
AI may accelerate processes.
It may enhance analysis.
It may improve efficiency.
But it does not transfer responsibility.
Technology Does Not Absorb Accountability
Automation can fly an aircraft.
The pilot remains responsible.
AI may assist your Safety Management System.
But it will never replace your responsibility.
For executives, the question is not whether AI will enter aviation safety management. It already has.
The question is whether it will be governed appropriately - or assumed to be correct without validation.
If you are exploring AI within your SMS, now is the time to ensure:
Clear governance
Defined boundaries of use
Documented human oversight
Auditability
Alignment with regulatory requirements
Technology should strengthen compliance - not obscure it.
Call to Action
If your organization is considering integrating AI into safety reporting, data analysis, or SMS processes - or if you simply want to ensure your current systems reflect regulatory expectations - we should talk.
At Acclivix, we help aviation organizations:
Review and strengthen SMS governance frameworks
Align safety management manuals with operational reality
Ensure QA processes remain effective as tools evolve
Evaluate technology platforms to confirm they support - not compromise - compliance
AI is a tool.
Accountability is a leadership function.
Let’s make sure yours is clear. Reach out to us to start a conversation today.