Executive Summary
Why this matters: AI adoption is no longer a deliberate decision. It’s happening passively, through AI embedded in the business tools your organisation already uses.
What to know: Microsoft 365 Copilot, Google Workspace AI features, and embedded assistants are already available to your staff. Many are using them without formal approval.
What to do: Audit what AI features are already active in your existing software subscriptions. Determine who has access and whether any governance applies.
Watch out for: Assuming you’ll ‘adopt AI’ later as a strategic initiative. The adoption has already begun without you.
AI Is No Longer a Pilot Project — It’s Showing Up
The situation
There is a common mental model of AI adoption that goes something like this: At some point, leadership will decide to ‘adopt AI.’ There will be a strategy. A pilot project. A vendor selection. A careful rollout. Everything deliberate and controlled.
This mental model is already outdated.
AI is not waiting for your strategy session. It is not waiting for budget approval. It is not waiting for the IT department to evaluate vendors. It is already here — embedded inside the tools your organisation uses every day.
Microsoft 365 now includes Copilot features across Word, Excel, PowerPoint, Outlook, and Teams. Google Workspace has Gemini integrated throughout. Zoom has an AI Companion. Slack has AI summaries. The productivity software your organisation has used for years is quietly becoming AI-powered.
The question is no longer ‘should we adopt AI?’ The question is ‘do we know what AI our people are already using?’

What this means for leaders
This shift has profound implications that most leadership teams have not fully absorbed.
AI adoption is happening without decisions. AI is already inside the everyday business tools Nigerian organisations rely on. Your finance team may already be using Copilot in Excel to analyse data. Your marketing team may be using AI writing assistants built into their email. Your HR team may be using AI features in your applicant tracking system. None of this required a strategic decision. It came bundled with software you were already paying for.
Convenience is outpacing governance. These embedded AI features are designed to be frictionless. They appear as helpful suggestions. They offer to summarise meetings, draft responses, analyse patterns. Employees use them because they make work easier — not because they evaluated the data implications or received approval.
Familiar tools now carry unfamiliar risks. When an employee pastes customer information into a standalone AI tool, they might hesitate. But when the AI is inside Excel — the same Excel they have used for years — the sense of risk diminishes. The tool feels safe because the interface is familiar. But the data handling may be completely different.
Consider a specific scenario: Your accounts team has used Excel for a decade. Now Excel has Copilot. Copilot can analyse spreadsheets, find patterns, and generate insights. To use it, your team simply highlights data and clicks a button. They do not think of this as ‘using AI’ — they think of it as ‘using Excel.’ But the data they highlight may be processed by external AI systems, governed by terms of service they have never read.
Did anyone approve this? Does anyone know it is happening?
The Nigerian context
This dynamic is accelerating in Nigeria specifically.
Microsoft has been expanding its AI offerings across Africa, with Nigerian businesses gaining access to Copilot features through existing Microsoft 365 subscriptions. Local-language support is improving, with AI tools beginning to handle Yoruba, Igbo, and Pidgin in various contexts. The technology is becoming more accessible, not less.
At the same time, many Nigerian organisations lack the governance infrastructure to manage this shift. There is no AI policy. There is no clarity on what data can be processed by embedded AI features. There is no training on how these tools work or what risks they carry.
The result is a widening gap between capability and governance. The tools are available. The controls are not.
This is not a criticism — it is a description of reality. Most organisations globally are in a similar position. But recognising the reality is the first step toward addressing it.
The tensions to navigate
Embedded AI creates genuine tensions that do not have easy answers:
Convenience versus control. These features make employees more productive. Restricting them feels like taking away useful tools. But unrestricted use means ungoverned data processing.
Speed versus governance. Governance takes time. AI features ship faster than policies can be written. By the time you have a policy, three new features have launched.
Familiar tools versus unfamiliar risks. Employees trust software they have used for years. That trust does not automatically extend to new AI capabilities embedded within that software — but employees may not distinguish between the two.
There is no perfect resolution to these tensions. But ignoring them is not a strategy.
Practical takeaway
The immediate action is not to create a comprehensive AI strategy. It is simpler and more urgent: find out what is already happening.
- Audit your existing software. What AI features are included in tools you already pay for? Microsoft 365, Google Workspace, your CRM, your HR system — check what AI capabilities are active by default. (For Microsoft 365, a starting-point script is sketched after this list.)
- Ask your teams directly. Not ‘are you using AI?’ but ‘are you using any features that summarise, suggest, analyse, or generate content?’ Many employees do not think of embedded features as ‘AI.’
- Review terms of service. How is data processed by these embedded AI features? Is it used for training? Is it stored externally? The answers may not be reassuring.
- Identify your exposure. What sensitive data might be flowing through these features? Customer information? Financial data? Strategic plans? Where is your highest risk?
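For Microsoft 365 tenants, part of this audit can be automated. Microsoft Graph exposes the subscription SKUs your organisation holds and the individual service plans inside each one, which is enough to flag Copilot and similar AI capabilities that are already licensed. The sketch below is illustrative only, assuming an Azure app registration with the Organization.Read.All permission and an access token already obtained; the keyword matching is a rough heuristic, since exact service-plan names vary by licence type.

```python
import requests

# Illustrative sketch, not a turnkey audit tool. Assumes you have
# already obtained a Microsoft Graph access token for an app
# registration granted Organization.Read.All; token acquisition
# (e.g. via the msal library) is deliberately omitted here.
ACCESS_TOKEN = "<your-access-token>"  # placeholder

# Rough heuristic: flag service plans whose names contain these
# strings. Extend the list as you learn your tenant's naming.
AI_KEYWORDS = ("COPILOT",)

resp = requests.get(
    "https://graph.microsoft.com/v1.0/subscribedSkus",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Each subscribed SKU bundles several service plans; AI features
# usually appear as individual plans inside a broader licence.
for sku in resp.json().get("value", []):
    for plan in sku.get("servicePlans", []):
        name = plan.get("servicePlanName", "")
        if any(keyword in name.upper() for keyword in AI_KEYWORDS):
            print(f"{sku['skuPartNumber']}: {name} "
                  f"(status: {plan.get('provisioningStatus')})")
```

Note what this does and does not tell you: it shows what is licensed in your tenant, not what is actually being used. Pair it with the direct conversations suggested in the second point above.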
Risks or limitations
There is a risk that this framing creates panic. The goal is not to make you afraid of your own software. Most embedded AI features are designed with reasonable data protections, particularly in enterprise-grade tools.
But ‘reasonable’ is not the same as ‘appropriate for your specific context.’ And ‘designed with protections’ is not the same as ‘governed by your policies.’
The point is not that embedded AI is dangerous. The point is that AI adoption is no longer something you decide — it is something that is happening to you. Whether you respond thoughtfully or reactively is still within your control.
If this raises uncomfortable questions, that is intentional. Discomfort is often the beginning of useful action.
NEXT IN SERIES: The next question is: how do organisations that take AI risk seriously actually respond? It turns out that some industries have already been forced to figure this out. See what Nigerian banks understand about AI risk that most companies don’t.


