AI Is No Longer a Pilot Project — It’s Showing Up Inside Existing Tools

The familiar mental model, in which AI adoption is a deliberate decision your organisation will make someday, is already outdated.

AI is not waiting for your strategy session. It is not waiting for budget approval. It is not waiting for the IT department to evaluate vendors. It is already here, embedded inside the tools your organisation uses every day.

Microsoft 365 now includes Copilot features across Word, Excel, PowerPoint, Outlook, and Teams. Google Workspace has Gemini integrated throughout. Zoom has its AI Companion. Slack offers AI-generated summaries. The productivity software your organisation has used for years is quietly becoming AI-powered.

AI Is in Business Tools in Nigeria

AI adoption is happening without anyone making a decision. AI is already inside the business tools Nigerian organisations rely on. Your finance team may already be using Copilot in Excel to analyse data. Your marketing team may be using AI writing assistants built into their email client. Your HR team may be using AI features in your applicant tracking system. None of this required a strategic decision. It came bundled with software you were already paying for.

Convenience is outpacing governance. These embedded AI features are designed to be frictionless. They appear as helpful suggestions. They offer to summarise meetings, draft responses, analyse patterns. Employees use them because they make work easier — not because they evaluated the data implications or received approval.

Familiar tools now carry unfamiliar risks. When an employee pastes customer information into a standalone AI tool, they might hesitate. But when the AI is inside Excel, the same Excel they have used for years, the sense of risk diminishes. The tool feels safe because the interface is familiar. But the data handling may be completely different.

Consider a specific scenario: Your accounts team has used Excel for a decade. Now Excel has Copilot. Copilot can analyse spreadsheets, find patterns, and generate insights. To use it, your team simply highlights data and clicks a button. They do not think of this as 'using AI'; they think of it as 'using Excel.' But the data they highlight may be processed by external AI systems, governed by terms of service they have never read.

Microsoft has been expanding its AI offerings across Africa, and Nigerian businesses can now access Copilot features alongside their existing Microsoft 365 subscriptions, in some cases as paid add-ons. Local-language support is improving, with AI tools beginning to handle Yoruba, Igbo, and Pidgin in various contexts. The technology is becoming more accessible, not less.

At the same time, many Nigerian organisations lack the governance infrastructure to manage this shift. There is no AI policy. There is no clarity on what data can be processed by embedded AI features. There is no training on how these tools work or what risks they carry.

The result is a widening gap between capability and governance. The tools are available. The controls are not.

This is not a criticism — it is a description of reality. Most organisations globally are in a similar position. But recognising the reality is the first step toward addressing it.

Embedded AI creates genuine tensions that do not have easy answers:

Convenience versus control. These features make employees more productive. Restricting them feels like taking away useful tools. But unrestricted use means ungoverned data processing.

Speed versus governance. Governance takes time. AI features ship faster than policies can be written. By the time you have a policy, three new features have launched.

Familiar tools versus unfamiliar risks. Employees trust software they have used for years. That trust does not automatically extend to new AI capabilities embedded within that software — but employees may not distinguish between the two.

There is no perfect resolution to these tensions. But ignoring them is not a strategy.

The immediate action is not to create a comprehensive AI strategy. It is simpler and more urgent: find out what is already happening.

Vendors will point out that these embedded features ship with reasonable default settings and are designed with data protections. That may be true. But 'reasonable' is not the same as 'appropriate for your specific context.' And 'designed with protections' is not the same as 'governed by your policies.'

The point is not that embedded AI is dangerous. The point is that AI adoption is no longer something you decide — it is something that is happening to you. Whether you respond thoughtfully or reactively is still within your control.

If this raises uncomfortable questions, that is intentional. Discomfort is often the beginning of useful action.

NEXT IN SERIES: How do organisations that take AI risk seriously actually respond? It turns out some industries have been forced to figure this out. See what Nigerian banks understand about AI risk that most companies don't.