Executive Summary
Why this matters: AI creates new types of decisions that don’t fit traditional org charts. Without clear role ownership, AI initiatives become everyone’s idea and no one’s responsibility.
What to know: Three roles are essential: decision owner (who can say yes/no), implementation owner (who makes it work), and risk owner (who monitors for problems). Most organisations cannot name all three.
What to do: Before any AI initiative, name all three owners explicitly. If you cannot name them, you are not ready to proceed.
Watch out for: Assuming existing roles cover AI decisions. They rarely do. AI requires new, explicit clarity about who owns what.
Who Decides What? Role Clarity and AI Adoption
The situation
Your organisation has an org chart. It defines who reports to whom, who manages which department, who has authority over which budget. It works well enough for routine decisions.
Then someone proposes an AI initiative. And suddenly, the org chart reveals its gaps.
Who decides whether to proceed? The CEO is interested but busy. The CTO understands the technology but not the business case. The CFO controls the budget but does not understand AI. The business unit head wants it but cannot evaluate the risks.
Everyone has a role. No one has the role.
This is not a Nigerian problem or an African problem. It is a universal challenge. AI creates new types of decisions that do not fit traditional organisational structures. Without explicit clarity about who owns what, initiatives drift — or worse, multiple people make conflicting decisions without realising it.

Why role clarity matters for AI
AI is different from previous technology decisions in ways that matter for governance.
AI systems make recommendations. Someone must decide whether to follow them. When the AI says ‘reject this loan application’ or ‘flag this employee for review,’ a human must decide whether to act on that recommendation. Who is that human? What authority do they have? What happens if they disagree with the AI?
AI systems access data. Someone must authorise that access. When an AI tool requires customer records, transaction histories, or employee files, who decides whether to grant access? Under what conditions? With what oversight?
AI systems fail. Someone must be accountable when they do. Not ‘the team’ or ‘the vendor’ — a specific person who can explain what happened, why, and what will change. If this person does not exist, accountability is impossible.
AI systems evolve. Someone must decide when to update, retrain, or retire them. AI is not ‘set and forget.’ Models degrade. Contexts change. Someone must monitor performance and make ongoing decisions about maintenance.
Without clear answers to these questions, AI initiatives operate in a governance vacuum. Decisions get made by whoever moves first. Accountability is assigned after problems emerge. The organisation loses control of its own technology.
The Nigerian context
Nigerian business culture has characteristics that can help or hinder role clarity for AI.
On the helpful side, hierarchical structures mean authority is generally respected. When a senior leader makes a decision, it is followed. This clarity of command can accelerate AI adoption when senior leadership is engaged and informed.
On the challenging side, hierarchy can also create decision bottlenecks. If only the MD can approve AI initiatives, and the MD is busy, nothing moves. Or worse, junior staff implement AI tools without approval because they assume permission would never come.
There is also the relationship dimension. In many Nigerian organisations, informal authority matters as much as formal authority. The person who ‘knows how things work’ may have more influence than the person with the official title. This can be valuable — relationships lubricate decision-making — but it can also obscure accountability. When something goes wrong, who is responsible: the person with the title or the person who actually made the call?
Finally, there is the scarcity challenge. Many Nigerian organisations do not have dedicated roles for data governance, AI oversight, or technology risk. These responsibilities are added to existing roles already stretched thin. The result is diffuse accountability where everyone is partly responsible and no one is fully responsible.
The three roles every AI initiative needs
Regardless of your org structure, every AI initiative requires three distinct types of ownership:
Decision owner. This person has authority to approve or reject the initiative. They can say yes, and they can say no. They are not a committee — they are an individual who can make the call. In most organisations, this is an executive-level role. They do not need to understand the technology deeply, but they must understand the business case, the risks, and the resources required.
Implementation owner. This person is responsible for making the AI initiative actually work. They manage the project, coordinate the team, solve problems, and deliver results. They report to the decision owner and escalate when needed. This is usually a senior operational role — someone who gets things done.
Risk owner. This person monitors for problems and has authority to intervene. They watch for data issues, compliance gaps, unintended consequences, and performance degradation. They can raise concerns that pause or modify the initiative. This is typically a governance, compliance, or risk function — someone whose job is to ask uncomfortable questions.
These three roles can be held by the same person in small organisations. But they must be explicit. ‘We’ll figure it out as we go’ is not a governance structure.
Practical takeaway
Before approving any AI initiative, require answers to three questions:
- Who is the decision owner? Name and title. This person has final authority. If they are not named, the initiative is not ready.
- Who is the implementation owner? Name and title. This person will make it work. If they are not named, the initiative is not ready.
- Who is the risk owner? Name and title. This person will monitor for problems. If they are not named, the initiative is not ready.
For complex initiatives, consider creating a simple RACI chart — who is Responsible, Accountable, Consulted, and Informed for key decisions. This takes an hour to create and prevents months of confusion.
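As a purely illustrative sketch, a RACI chart for a hypothetical loan-scoring pilot might look like the table below. The titles and decisions are placeholders, not a prescription; adapt them to your own structure.

| Key decision | Responsible | Accountable | Consulted | Informed |
| --- | --- | --- | --- | --- |
| Go/no-go on the pilot | Head of Operations (implementation owner) | MD (decision owner) | Head of Compliance (risk owner), CFO | Project team |
| Grant access to customer records | IT lead | MD (decision owner) | Head of Compliance (risk owner) | Vendor |
| Pause, retrain, or retire the model | Head of Compliance (risk owner) | MD (decision owner) | Head of Operations (implementation owner) | Affected business unit |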
Make ownership visible. Not just documented in a file somewhere, but known by everyone involved. When problems arise — and they will — there should be no ambiguity about whom to call.
Risks or limitations
There is a risk of over-engineering governance. If every small decision requires a committee and a RACI chart, nothing will ever move. The goal is appropriate clarity, not bureaucratic paralysis.
For low-risk experiments, light governance is fine. For significant initiatives that touch customer data or make consequential decisions, robust governance is essential. Match the governance weight to the risk level.
There is also the reality that role clarity does not guarantee good decisions. You can have perfect accountability structures and still make poor choices. But without role clarity, you cannot even have honest conversations about what went wrong and why.
Role clarity is not sufficient for AI success. But it is necessary. Without it, everything else is improvisation.
With data visibility and role clarity, you’re ready for the next question: What are you actually trying to do? That means examining your processes — which is where many AI initiatives stumble. Part 3: You Cannot Automate What You Haven’t Defined.


