Who Governs Your Company's AI? The Board Meeting Nobody Is Having | Data Sentinels
- Nono Bokete

- Mar 20
- 3 min read

Your HR team is using AI to screen CVs. Your finance team is using AI to flag anomalies in reporting. Your customer service function has deployed a chatbot that handles a significant percentage of first-contact queries. None of this was in last year's board strategy presentation.
This is not unusual. In most organisations, AI deployment is moving faster than the governance conversation. Business units are making individual decisions, often good ones made in good faith, without a board-level framework for who owns the question of how AI is governed across the whole organisation.
That gap has a cost. And most boards have not calculated it yet.
What the Governance Gap Actually Costs
The first cost is bad decisions. When AI deployment happens without centralised governance, the organisation makes many small decisions that collectively add up to a strategy it never chose. Tools are selected on the strength of vendor relationships. Use cases are defined by whoever had the budget and the initiative. The aggregate picture is incoherent.
The second cost is regulatory exposure. Regulators are not waiting for boards to catch up. AI-specific regulation is developing in most major jurisdictions. Organisations that cannot demonstrate clear governance (defined ownership, documented decision rights, evidence of regular review) are carrying regulatory risk that is already materialising in the market.
The third cost is wasted investment. AI tools that are deployed without governance are, in my experience, significantly more likely to be abandoned. Not because the technology failed. Because nobody was governing whether the deployment was working or whether the organisation was using the capability it had paid for.
Who Should Own This at Board Level?
There are three options, and each has trade-offs. The first is the CEO, who has the authority and strategic ownership but may lack the data and AI literacy to make that ownership more than a governance formality.
The second is a board sub-committee (audit and risk, or a dedicated technology committee) with defined AI governance responsibilities. This works when the committee has the mandate and the expertise to ask the right questions. It fails when it becomes another compliance checkbox.
The third is an executive sponsor at C-suite level: a Chief Data Officer or, in organisations that do not have one full-time, a Fractional CDO, with a direct reporting line to the board on AI governance. This is the structure I have seen work best: clear ownership, the right level of expertise, and regular board visibility.
What Fixing It Looks Like
Start with a governance audit. Map every AI tool currently deployed across the organisation. Identify who owns each deployment, what the decision rights are, and when it was last reviewed. Most organisations find this exercise reveals a picture that surprises them.
Then assign ownership. Not shared ownership. Not a committee that owns it collectively. One person or function with clear accountability and the authority to act on it.
Then define the cadence. Quarterly review at minimum: what is deployed, how it is performing, what risks have emerged, what decisions need to be made. Not as a compliance exercise, but as a strategic one.
The board meeting nobody is having is not complicated to start having. What it requires is the decision to treat AI governance as a board-level responsibility rather than a technology team problem. The organisations that make that decision now will be significantly better positioned than those that make it after the first serious failure.
If your board is ready to have this conversation, we are ready to help. Contact us at info@data-sentinels.