Executive Summary
Why this matters: Your AI conversations may be failing before they start. Different people in your organisation mean different things when they say “AI.”
What to know: Misalignment shows up as stalled pilots, unadopted tools, and policies that arrive too late. The problem is rarely the technology.
What to do: Before your next AI discussion, clarify: What problem are we solving? Who carries risk? How will we decide to stop?
Watch out for: Assuming everyone shares your understanding. They probably do not.
Why Most AI Conversations in Nigerian Organisations Are Misaligned
The Situation
Every organisation in Nigeria seems to be talking about AI. Board meetings. Strategy retreats. WhatsApp groups. LinkedIn feeds. The conversation is everywhere.
But here is the problem: within most of these organisations, people are not actually talking about the same thing.
When a CEO mentions AI, they might be thinking about cost reduction. When the CFO hears it, they worry about budget requests they cannot evaluate. When the CTO speaks up, they are thinking about infrastructure and integration. When middle managers listen, they hear a threat to their relevance.
Everyone nods. Everyone agrees AI is important. And everyone leaves the room with completely different assumptions about what happens next.
This is the misalignment problem. It is not a technology problem. It is a communication problem that technology makes worse.

What this means for leaders
Misalignment does not announce itself. It hides behind enthusiasm and consensus. Everyone seems to agree, so the project moves forward. Then reality arrives.
The signs are familiar to anyone who has watched an AI initiative stall:
Pilots that never scale. A team runs a successful experiment. Leadership celebrates. Then nothing happens. The pilot sits in a folder somewhere, never integrated into actual operations. Why? Because the pilot answered a question nobody in leadership was actually asking.
Tools that never get adopted. The organisation purchases an AI solution. Training sessions are scheduled. Emails are sent. Six months later, usage is minimal. The tool works fine. But it solves a problem that was not a priority for the people expected to use it.
Policies that arrive after decisions. Legal and compliance draft an AI policy. By the time it is approved, three departments have already deployed tools, shared data externally, and created precedents that the policy now has to accommodate rather than guide.
In each case, the technology performed as expected. The failure was upstream. People were not aligned on what problem AI was meant to solve, who would be responsible when things went wrong, or how decisions should be made in a space where capabilities change faster than governance.
Why this happens
An AI decision is not like other technology decisions. When you buy accounting software, everyone understands what it does. When you implement a CRM, the use case is clear. AI is different because it is a capability, not a product.
Saying “we need to adopt AI” is like saying “we need to adopt electricity.” It sounds meaningful but tells you nothing about what you will actually do with it.
This ambiguity creates space for projection. Each stakeholder fills the gap with their own assumptions, hopes, and fears. The CEO imagines transformation. The operations head imagines efficiency. The HR director imagines headcount reduction. The junior analyst imagines job security threats.
All of these interpretations can be true simultaneously. That is the problem. Without explicit alignment, everyone is working toward a different destination while appearing to move together.
Practical takeaway
Before your next AI discussion, pause and align on three questions:
- What specific problem are we trying to solve? Not “adopt AI” but “reduce customer response time by 40%” or “automate invoice processing.” If you cannot name the problem in plain language, you are not ready to discuss solutions.
- Who carries risk when something goes wrong? AI systems make mistakes. When they do, who is accountable? If the answer is unclear, you have a governance gap that will become a crisis later.
- How will we decide to stop? Every pilot should have exit criteria. What would failure look like? What would make us pull the plug? If you cannot answer this, you are not running an experiment. You are making an open-ended commitment.
These questions are uncomfortable. They slow things down. That is the point. Alignment takes time upfront but saves months of wasted effort downstream.
Risks or limitations
Alignment is necessary but not sufficient. You can have perfect clarity on the problem and still fail at execution. But without alignment, execution is almost guaranteed to drift.
There is also a risk of over-engineering alignment. If every AI conversation requires a formal alignment process, nothing will ever move. The goal is appropriate alignment — enough shared understanding to proceed without constant renegotiation, but not so much process that momentum dies.
The organisations that get this right treat alignment as ongoing, not one-time. They revisit the questions as context changes. They expect misalignment to resurface and address it when it does.
This is the work before the work. It is less exciting than deploying tools. But it is where most AI initiatives succeed or fail.