Data Sentinels AI tool evaluation framework

PROPRIETARY FRAMEWORK

The SPACE Test.

A five-criteria framework for evaluating AI tools before deployment. Every AI tool considered in a Data Sentinels engagement must pass all five criteria before it is recommended. No exceptions.

DEVELOPED BY

Nono Bokete, CEO, Data Sentinels

PURPOSE

Separate signal from expensive noise

APPLIED TO

Every AI tool evaluation

DEFINITION

The SPACE Test is a five-criteria framework for evaluating AI tools before deployment in an organisation. Developed by Nono Bokete, CEO of Data Sentinels, it stands for: Solves a Real Problem, Practical, Accurate, Compatible, and Ethical.

The framework was built after observing a consistent pattern in failed AI deployments: organisations choosing tools on enthusiasm rather than evidence, adopting without governance, and abandoning within months. The SPACE Test forces the right questions before a single licence is purchased.

Most AI adoption failures are not caused by bad technology. They are caused by organisations deploying tools before asking the right questions.

The five criteria below are those questions: applied in sequence, with no exceptions, before any AI tool is recommended in a Data Sentinels engagement.

S

Solves a Real Problem

Is there a documented business challenge this tool addresses?

Not a hypothetical problem. Not a problem someone read about in a trade publication.

A problem that costs money today: where the specific decision this tool will improve can be named, and where the people who own that decision agree it is a priority.

If the team cannot name the specific decision this tool will improve, it does not pass.

Common failure pattern

Tool chosen because a competitor deployed it, or because it appeared in a Gartner report, without a documented link to a specific operational or commercial problem.

P

Practical

Can your team realistically adopt and sustain this tool without ongoing external support?

A tool that requires a data science team to maintain, when no such team exists, is not practical. Practicality means the tool fits the actual capability, workflows, and capacity of the organisation as it exists today, not as it might exist after a separate capability-building programme.

Common failure pattern

Tool requires ongoing model retraining, prompt engineering, or infrastructure management that no internal team has the capacity or training to own. Becomes consultant-dependent within six months.

A

Accurate

Does it produce outputs that can be trusted and verified?

The key questions are: what is the error rate, what happens when it is wrong, and who is accountable for bad outputs?

If those questions cannot be answered clearly before deployment, the tool has not passed.

Accuracy is not just about model performance metrics: it is about whether the organisation has the means to detect and respond to failures.

Common failure pattern

Tool deployed on the strength of vendor benchmarks that do not reflect the organisation’s actual data environment. Outputs trusted without verification until a material error drives a costly decision.

C

Compatible

Does it fit your existing data architecture, systems, and workflow?

A tool that requires a complete data infrastructure rebuild before it can function is not compatible: it is a separate, larger project.

Compatibility means the tool can be integrated into the current data environment without requiring a foundational rebuild as a prerequisite.

If the data foundation is not ready, the right sequence is to fix the foundation first.

Common failure pattern

Integration cost and timeline discovered post-procurement. Tool sits unused while a data migration or infrastructure project runs for 12 to 18 months.

E

Ethical

Does it meet your governance, privacy, and bias standards?

The critical question is: who is accountable when it makes a bad decision?

If that accountability is unclear before deployment, the tool should not be deployed.

Ethical compliance is not a legal checkbox: it is a governance question. Organisations that cannot answer the accountability question have not built the governance layer that makes responsible AI adoption possible.

Common failure pattern

Tool deployed without a defined accountability structure. When a bad output causes a downstream decision error, no one owns the failure and no remediation process exists.

HOW IT IS APPLIED

Five Questions. Applied Before Any Build Starts.

The SPACE Test is not a post-deployment review.
It is applied before any AI tool is purchased, built, or recommended. 
If a tool cannot pass all five criteria in the context of a specific client organisation, it is not the right tool for that engagement.

The test is not binary. Each criterion is assessed in the context of the organisation’s current data readiness, capability, and governance posture. A tool that passes for one organisation may not pass for another with different infrastructure or internal capacity.

See the AI Tool Evaluation engagement brief

01

Define the problem, not the tool

Before evaluating any tool, the specific business problem is documented. What decision needs to improve, who owns it, and what does a better outcome look like in measurable terms.

02

Assess data and capability readiness

Practical and Compatible criteria are evaluated against the organisation’s current state: internal team capacity, data quality, and infrastructure.

03

Evaluate candidate tools against all five criteria

Each shortlisted tool is scored against S, P, A, C, and E in the specific context of the client. Results are documented in a scored vendor comparison.

04

Deliver a board-ready recommendation

The output is a clear recommendation with scoring rationale, risks flagged for each shortlisted tool, and a governance framework covering accountability and failure management.
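The all-five-must-pass rule behind the steps above can be sketched as a simple check. This is illustrative only: the class, function, and field names are assumptions for the sketch, not part of the SPACE Test itself.

```python
from dataclasses import dataclass

# The five SPACE criteria, in the order they are applied.
CRITERIA = ["Solves a Real Problem", "Practical", "Accurate", "Compatible", "Ethical"]

@dataclass
class SpaceAssessment:
    """One criterion's result for a specific tool in a specific client context."""
    criterion: str   # one of CRITERIA
    passed: bool     # assessed against the client's current state, not in the abstract
    rationale: str   # documented evidence, feeding the scored vendor comparison

def passes_space_test(assessments: list[SpaceAssessment]) -> bool:
    """A tool is recommendable only if every criterion passes. No exceptions."""
    assessed = {a.criterion for a in assessments}
    if assessed != set(CRITERIA):
        raise ValueError(f"Criteria not assessed: {set(CRITERIA) - assessed}")
    return all(a.passed for a in assessments)
```

In this sketch, the rationale recorded for any failing criterion is what would surface as a flagged risk in the board-ready recommendation.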

Also from Data Sentinels: The FILTER Framework for evaluating which transformation initiatives deserve leadership attention and capital.

FREQUENTLY ASKED

Questions About the SPACE Test

Common questions from leadership teams about AI governance frameworks and how to apply structured evaluation before deployment.


APPLY THE FRAMEWORK

Your AI Tools Should Pass This Test.

If you are evaluating AI tools or auditing those already deployed, the AI Tool Evaluation engagement applies the SPACE Test to your specific context and delivers a board-ready recommendation.
