Executive Summary
Why this matters: Your organisation is already using AI. The question is whether you know about it — and whether you have any control over what data is being shared.
What to know: Studies suggest that over 70% of employees use AI tools at work, often without telling management. This is not malicious; it is rational. But it creates serious risks.
What to do: Conduct an honest audit. Ask what tools staff are using. Create a path to legitimacy rather than driving usage underground.
Watch out for: Assuming a policy alone will solve this. People will find workarounds. You need governance, not just rules.
What Your Team Is Already Doing with AI (And Why That’s a Problem)
The Situation
Here is a scenario that plays out in Nigerian organisations every day:
A marketing manager needs to write a proposal. Deadline is tight. They open ChatGPT, paste the client brief — including company financials and strategic priorities — and ask for a first draft. Fifteen minutes later, they have something to work with.
A customer service representative is overwhelmed with complaints. They copy customer messages into an AI tool to generate faster responses. Names, account numbers, and complaint details are included.
An HR officer needs to evaluate CVs. They upload candidate files to an AI screening tool they found online. Personal information, salary history, and references are processed by servers they know nothing about.
In each case, the employee is trying to do their job better. They are not malicious. They are not careless. They are rational actors using the tools available to them.
The problem is that you probably do not know this is happening. And even if you suspect it, you have no visibility into what data is being shared, where it is going, or what risks are accumulating.

What this means for leaders
The phenomenon has a name: shadow AI. It refers to AI tool usage that happens outside official channels, without organisational oversight or approval.
Shadow AI is not a fringe problem. Multiple studies suggest that over 70% of employees have used AI tools for work tasks, and the majority do so without telling their managers.
Why? Because the tools are useful. Because asking permission takes time. Because employees are not sure what the rules are. Because, in many organisations, there are no rules.
The risks compound across several dimensions:
Data exposure. When staff paste information into AI tools, that data is typically processed on external servers. Free tools often have permissive terms of service that allow providers to use inputs for training. Your confidential client information, your strategic plans, your employee records — all potentially absorbed into systems you do not control.
NDPA compliance gaps. The Nigeria Data Protection Act 2023 requires a lawful basis, such as consent, for processing personal data, and places strict conditions on transferring that data outside Nigeria. When an employee uploads customer information to a foreign AI service, the organisation may be violating data protection law without knowing it.
Inconsistent outputs. Different staff using different AI tools produce inconsistent results. One department’s AI-assisted analysis contradicts another’s. Quality varies wildly. The organisation loses coherence without understanding why.
Hidden dependencies. Work products start depending on AI tools that could change, disappear, or become paid services. The organisation has no inventory of these dependencies and no continuity plan when they break.
Why prohibition does not work
The instinctive response is to ban AI tool usage until proper policies are in place. This rarely works.
People will find workarounds. They will use personal devices. They will access tools through personal accounts. The usage will continue; it will just become less visible.
Prohibition also creates a false sense of security. Leadership believes the problem is solved because a policy exists. Meanwhile, actual behaviour continues unchanged.
The more effective approach is to acknowledge the reality and create a path to legitimacy.
Practical takeaway
- Conduct an honest audit. Ask staff what AI tools they are using. Not in a punitive way, but as genuine inquiry. Create amnesty for disclosure. You cannot govern what you cannot see.
- Categorise data sensitivity. Not all AI usage is equally risky. Using AI to summarise public information is different from using it to process customer PII. Create clear categories so staff understand where the real boundaries are.
- Provide approved alternatives. If staff need AI tools to do their jobs efficiently, give them legitimate options. Enterprise versions with proper data handling. Approved tools with clear guidelines. The goal is to channel usage, not eliminate it.
- Create simple, memorable rules. Complex policies get ignored. Simple heuristics get followed. “Never paste customer names or account numbers into AI tools” is more effective than a 20-page acceptable use policy.
- Make governance ongoing. Shadow AI is not a problem you solve once. New tools appear constantly. Staff capabilities evolve. Regular check-ins and updated guidance are essential.
Risks or limitations
This post may create alarm. That is partially intentional — the problem is often worse than leaders assume. But alarm without action is counterproductive.
The goal is not to make you afraid of AI. The goal is to shift the question from “should we adopt AI” to “how do we govern what is already happening.”
Your organisation is using AI. The only question is whether that usage is visible, governed, and aligned with your interests — or invisible, ungoverned, and accumulating risk.