From Waste to ROI: A Practical Framework for Problem-First AI Success

March 23, 2026 Data & AI

A few years ago, I sat in a sleek boardroom in Kinshasa, watching a demonstration of what was supposed to be a "revolutionary" predictive maintenance system for a petrol firm. The model was a beauty: a deep learning architecture I’d spent weeks tuning to 98% accuracy on my local GPU. I was proud. I was also, as it turned out, completely wrong.

When we moved that model from the lab to the field, it hit the wall of reality. The local data quality was inconsistent, the latency over mobile networks in Haut-Katanga made real-time alerts impossible, and the actual mechanics on the ground didn't trust a "black box" telling them their trucks were broken when they looked fine.

In the academic world, "better" means a higher F1 score. But in the world of software engineering and business strategy, "better" means someone’s job just got 20% easier or your margin just grew by 5%. If you aren't solving a specific, painful problem, you aren't doing AI; you're running an expensive hobby.

The Pilot Purgatory Trap

Most companies are currently stuck in "Pilot Purgatory." You know the symptoms: a dozen "Proof of Concepts" (PoCs) that look great in a controlled demo but never make it to production. Why? Because they were "Tech-First" projects. Someone said, "We need to use Generative AI," and then went looking for a place to stick it.

This is where the waste happens. Millions of dollars are poured into GPU credits and consultant fees for tools that nobody actually uses. In the DRC and other countries where infrastructure is a challenge, we don't have the luxury of wasting resources on "vibe-based" engineering. If a tool doesn't work under real connectivity constraints, it doesn't work at all.

The Problem-First Framework

To move from waste to ROI, we need to flip the script. Stop asking what the model can do and start asking what the business cannot do right now. Here are the three steps I use to qualify a high-value use case.

1. The Why Audit

If you can’t describe the problem you’re solving without using the words AI, Machine Learning or LLM, you don’t have a business problem yet. You have a tech crush.

A few months ago, a technical school in Kinshasa reached out with a clear request: they wanted facial recognition technology to automate student attendance tracking. "Like what we see in the news," they explained.

A visit to the school revealed something else. Every morning, teachers circulated a paper sheet for students to sign. Students signed for absent friends. Records were incomplete and unreliable by midday. The problem wasn't lack of facial recognition. The problem was paper.

The solution took an afternoon to build. Teachers now open a shared link on their phones, check a few boxes and submit. Attendance data lives in one spreadsheet, accessible to administrators in real time. No cameras installed. No models trained. No students misidentified. Attendance tracking works; the school spent nothing on AI and got everything it needed.
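The whole "solution" really is that small. As a rough illustration (the school's actual tool isn't public, so the function, field names and file format here are all invented stand-ins), the core of it is nothing more than appending checked boxes to one shared sheet:

```python
import csv
from datetime import date
from pathlib import Path

def record_attendance(sheet: Path, class_name: str, present: dict) -> None:
    """Append one row per student for today's date to a shared CSV 'spreadsheet'.

    Hypothetical sketch: a CSV file stands in for the school's shared
    spreadsheet; `present` maps student name -> checked/unchecked.
    """
    new_file = not sheet.exists()
    with sheet.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            # Write the header only once, when the sheet is first created.
            writer.writerow(["date", "class", "student", "present"])
        for student, is_present in present.items():
            writer.writerow([date.today().isoformat(), class_name, student, is_present])

record_attendance(Path("attendance.csv"), "Form 4B", {"Amina": True, "Jean": False})
```

That's the entire stack: one function, one file, zero models. The same effect is achievable with an off-the-shelf form builder and no code at all.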

2. Data Realism over Data Optimism

We often hear that "data is the new oil." Well, in my experience, most corporate data is more like unrefined sludge. I’ve seen projects fail because the perfect model required historical data that simply didn't exist in a clean format.

Before you write a single line of PyTorch, ask: Do we have the signal needed to solve this? In local contexts, this often means dealing with "small data" or "messy data." A pragmatic engineer knows that a simple, robust heuristic often beats a complex neural network when the input quality is shaky.
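To make the "robust heuristic" point concrete, here is a deliberately boring baseline in the spirit of the predictive maintenance story from the opening. Everything in it is a made-up assumption (the sensor names, the thresholds, the fail-safe policy); the point is that it's transparent, testable and degrades gracefully when sensors drop out, which is exactly where a fragile deep model falls over:

```python
def needs_inspection(reading: dict) -> bool:
    """Flag a truck for human inspection from one sensor reading.

    Hypothetical heuristic baseline: thresholds and field names are
    invented for illustration, not taken from any real fleet.
    """
    temp = reading.get("engine_temp_c")
    vib = reading.get("vibration_mm_s")
    # In messy field conditions, a missing sensor value is itself a signal:
    # fail safe and ask a mechanic to take a look.
    if temp is None or vib is None:
        return True
    return temp > 105 or vib > 12.0

print(needs_inspection({"engine_temp_c": 98, "vibration_mm_s": 4.0}))   # -> False
print(needs_inspection({"vibration_mm_s": 4.0}))                        # -> True
```

A mechanic can read those two rules, argue with them, and trust them, which is more than can be said for a 98%-accurate black box.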

3. The Human-in-the-Loop Reality Check

AI success is 20% code and 80% adoption. If your solution adds three extra steps to a worker’s workflow, they will bypass it. (I’ve seen it happen more times than I care to admit). You have to design for the user who is tired, skeptical or working on a phone with a cracked screen in 35°C heat. ROI comes from the friction you remove, not the complexity you can add.

ROI vs. Hype: Measuring What Matters

Your board doesn't care about your model’s perplexity score. They care about the bottom line. To prove value, we have to stop reporting technical metrics and start reporting business ones.

  1. Latency and Reliability: In Kinshasa, a model that takes around 30 seconds to respond is a failure, regardless of how smart it is.
  2. Token Cost vs. Value: If you’re using a high-end LLM to categorize $1 emails, you’re losing money.
  3. User Retention: Are people actually using the tool a month after the cool factor wears off?
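The token-cost point is just arithmetic, and it's worth doing before the first API call. Here is a back-of-the-envelope sketch; all prices are illustrative assumptions, not quotes from any provider:

```python
def cost_per_email(tokens_in: int, tokens_out: int,
                   price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Per-email API cost: input and output tokens billed per 1,000 tokens."""
    return tokens_in / 1000 * price_in_per_1k + tokens_out / 1000 * price_out_per_1k

# Categorizing one long email (~2,000 tokens in, ~100 tokens out)
# through a premium model vs. a small one (hypothetical prices).
premium = cost_per_email(2000, 100, price_in_per_1k=0.03,   price_out_per_1k=0.06)
small   = cost_per_email(2000, 100, price_in_per_1k=0.0005, price_out_per_1k=0.0015)
print(f"premium: ${premium:.4f}, small: ${small:.4f}")  # premium: $0.0660, small: $0.0012
```

At volume, that ~50x gap is the difference between a rounding error and a line item, for a task a cheap model (or a regex) handles just as well.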

Sometimes, the most senior AI thing you can do is recognize that a task doesn't need AI at all. I once saved a client months of work by replacing a planned AI recommendation engine with a well-structured SQL query. They didn't get a flashy headline, but they got a working product in two days.
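The client's actual query isn't mine to share, so here is a generic sketch of the idea using SQLite: "recommend" a customer the best-selling products in categories they already buy from, minus what they already own. The schema and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE purchases (customer TEXT, product TEXT, category TEXT);
INSERT INTO purchases VALUES
  ('alice', 'wrench', 'tools'),
  ('bob',   'hammer', 'tools'),
  ('bob',   'wrench', 'tools'),
  ('carol', 'novel',  'books');
""")

# Popular products in the customer's categories that they haven't bought yet.
query = """
SELECT p.product, COUNT(*) AS sales
FROM purchases p
WHERE p.category IN (SELECT category FROM purchases WHERE customer = ?)
  AND p.product NOT IN (SELECT product FROM purchases WHERE customer = ?)
GROUP BY p.product
ORDER BY sales DESC;
"""
print(conn.execute(query, ("alice", "alice")).fetchall())  # -> [('hammer', 1)]
```

No embeddings, no training pipeline, no GPU bill. When co-purchase counts are the signal, a query over data you already have ships in days and is trivially debuggable.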

 

Stop Playing, Start Solving

We are living through a massive technological shift, but the rules of engineering haven't changed. Complexity is still the enemy. Value is still the goal.

If you are a decision-maker, my challenge to you is this: Look at your current list of AI initiatives. How many of them are solutions looking for a problem? Be ruthless. Get straight to the point. Focus on the gritty, boring, high-impact problems that actually keep your operation running.

So, let’s stop chasing the hype and start shipping solutions that matter.

Willy Muteba

Willy Muteba is an AI Researcher and Software Engineer based in the Democratic Republic of the Congo (DRC). With experience spanning academic research and production-grade software development, he focuses on bridging the gap between theoretical machine learning and systems that work in practice. He has advised businesses and academic institutions on AI strategy, data infrastructure and technology adoption across Central Africa. Willy helps organizations navigate the technical and infrastructural realities of the African tech landscape, building resilient solutions that deliver measurable impact across sectors. His approach is grounded in a simple principle: technology must solve real problems, not chase trends.

 
