Summary
AI success is no longer about what the model can do, but whether your organization can prove control, explain decisions and defend every AI outcome in production.
Three key takeaways:
- Supervisors have stopped asking what AI can do; they’re asking what it did, when it did it and whether you can prove it.
- Liability sits with the regulated entity and its accountable senior management, not with “the model” or its vendors, and it can’t be delegated away through contracts or organizational charts.
- Probabilistic AI is only defensible when paired with deterministic guardrails and evidence generated in production: approved data, policy applied at the moment of use, traceable outputs and replayable decision chains engineered into the system.
There was a time when “AI risk” still felt like a future-tense problem, something you could park under innovation, pilot in a safe corner of the business and promise yourself you’d tighten the controls later. That time has passed.
In the latest RegCast conversation with PJ Giammarino and Richard Kemp, the theme that kept coming up, whether we were talking about regulatory enforcement, vendor contracts or the hard reality of production AI, was simple: supervisors have stopped asking what AI can do, and started asking what it did, when it did it and whether you can prove it.
That is the real change. AI has crossed a line. It’s no longer just an innovation story—it’s a capital story, a national security story and, most importantly, a board accountability story.
The recent enforcement signals are hard to ignore. Across Europe and beyond, regulators are moving from guidance to penalties, and from “show me your policies” to “show me the controls working in real life.” The fines and sanctions that have made the headlines aren’t just cautionary tales; they’re proof that the enforcement window is open. And in many cases, the regulator’s frustration isn’t aimed at flashy AI misuse, it’s aimed at something more fundamental: organizations are failing to demonstrate that they are in control of their systems.
Richard Kemp made the legal position clear. Liability won’t ever sit with “the model.” It sits with the regulated entity, and it escalates quickly to accountable senior management. Under frameworks like DORA, responsibility can’t be delegated away through organizational charts or procurement paperwork. If your AI system causes a breach, the question becomes whether governance was embedded, whether risk management was real and whether you can evidence that the right decisions were made in the right way, at the right time.
That word—evidence—is where the conversation gets uncomfortable for most organizations, because it exposes the gap between what we claim and what we can prove. In AI, proof isn’t a retrospective story you assemble for an audit. It has to be generated in production. It has to be replayable.
From a technology perspective, that’s the shift I care about most. Regulators aren’t asking whether you have AI. They’re asking whether you can reconstruct an event deterministically enough to defend it, including the questions below (sketched as a single evidence record after the list):
- What data was accessed?
- Was it permitted?
- What policy was enforced at runtime?
- Which model and version were used?
- What parameters were set?
- Which vendors and subprocessors were in the path?
- What retrieval context and reasoning trace led to the output?
- And critically, if you run the same scenario again, could you explain what would happen today and why?
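Taken together, those questions describe a single evidence record that has to be written at the moment the decision is made, not assembled later for an audit. Here is a minimal sketch of what capturing one such record might look like; the field names and example values are my own illustrative assumptions, not a standard, a regulatory template or a product schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical evidence record, written at decision time so it can be replayed later.
# Every field maps to one of the questions above; names are illustrative only.
@dataclass
class AIDecisionRecord:
    decision_id: str                     # stable identifier used for replay and audit
    timestamp: datetime                  # when the decision was made
    data_sources: list[str]              # what data was accessed
    access_permitted: bool               # was that access permitted under policy?
    policy_version: str                  # which policy was enforced at runtime
    model_name: str                      # which model was used...
    model_version: str                   # ...and which version of it
    parameters: dict                     # settings such as temperature and token limits
    vendors_in_path: list[str]           # vendors and subprocessors in the path
    retrieval_context: list[str]         # sources that grounded the output
    reasoning_trace: str                 # how the system got from input to output
    output: str                          # the outcome you have to defend
    human_checkpoint: str | None = None  # who reviewed or approved it, if anyone

# Example: one record, captured as the decision happens rather than reconstructed later.
record = AIDecisionRecord(
    decision_id="dec-000123",
    timestamp=datetime.now(timezone.utc),
    data_sources=["customer-kyc-store"],
    access_permitted=True,
    policy_version="credit-policy-v3.2",
    model_name="example-llm",
    model_version="2024-06-01",
    parameters={"temperature": 0.0, "max_tokens": 512},
    vendors_in_path=["cloud-provider-x", "model-provider-y"],
    retrieval_context=["kyc-doc-881", "policy-excerpt-12"],
    reasoning_trace="retrieved KYC record; applied credit-policy-v3.2; flagged edge case",
    output="Application referred to a human underwriter",
    human_checkpoint="underwriter-042",
)
```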
If you can’t answer those questions quickly, you don’t have an AI governance problem. You have an operational risk problem.
And the operational complexity is only getting worse. Modern AI systems don’t exist in a neat, controllable box. They sit across a supply chain of model providers, cloud providers, APIs, orchestration tools, data platforms and a growing agentic layer where systems can plan and execute actions without step-by-step human instruction. That is why “contracts alone” are no longer sufficient. You can’t paper over a lack of technical enforceability. You can’t outsource accountability. And you can’t rely on the idea that a vendor’s standard terms, often changeable by a quiet website update, will protect you when supervisors come knocking.
This is also why the “probabilistic” nature of AI matters so much. In regulated environments, you’re often asking deterministic questions—those that require the same answer today, tomorrow or during a supervisory review six months from now. That doesn’t mean you can’t use AI. It means you must pair it with deterministic guardrails: policies, access controls, lineage, provenance, monitoring, escalation paths and human-in-the-loop checkpoints that make the system defensible. I call this “boring AI,” not because it’s less capable, but because it’s reliable, predictable, auditable and, most importantly, ready to run.
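To make the pairing concrete, here is a minimal, hypothetical sketch of what a deterministic guardrail around a probabilistic model call could look like. The callables (policy_allows, call_model, needs_human_review, escalate) are placeholders I’ve invented for illustration, not a real API:

```python
def guarded_decision(request, policy_allows, call_model, needs_human_review, escalate):
    """Wrap a probabilistic model call in deterministic, auditable checks."""
    # 1. Deterministic pre-check: is this request permitted under policy at all?
    if not policy_allows(request):
        return {"status": "rejected", "reason": "policy", "request": request}

    # 2. Probabilistic step: the model produces a candidate answer.
    draft = call_model(request)

    # 3. Deterministic post-check: does this answer need a human-in-the-loop checkpoint?
    if needs_human_review(request, draft):
        return escalate(request, draft)  # routed to a person before anything ships

    # 4. Only answers that pass every fixed check reach production.
    return {"status": "approved", "answer": draft}
```

The model in the middle may be probabilistic, but every branch around it is fixed, logged and explainable, so the same request follows the same path during a supervisory review as it did in production.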
That is the point I want people to walk away with ahead of the RegTech Conference in London on 26 March: the organizations winning with AI won’t be the ones with the most impressive demos. They’ll be the ones who can prove control. They’ll be able to show, not merely state, that their AI used approved data, applied approved policy at the moment of use, produced traceable outputs grounded in real sources and behaved within clearly defined constraints. They’ll have replayable decision chains engineered into the system, not reconstructed after the fact.
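“Replayable” is worth spelling out too. Continuing the hypothetical record sketched above, a replay check can be as plain as re-running the recorded inputs against the pinned model version and surfacing any drift; again, the function and its arguments are assumptions for illustration:

```python
def replay(record, call_model_version):
    """Re-run a recorded decision and compare the result with what was logged."""
    rerun_output = call_model_version(
        model=record.model_name,           # the exact model that was used...
        version=record.model_version,      # ...pinned to the recorded version
        parameters=record.parameters,      # the same settings as at decision time
        context=record.retrieval_context,  # the same grounding sources
    )
    return {
        "decision_id": record.decision_id,
        "matches": rerun_output == record.output,  # flag any drift rather than hide it
        "original": record.output,
        "replayed": rerun_output,
    }
```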
There is a real business case here, and it’s being missed in the noise. If you get this right, you can deploy AI and agents at scale, safely and with measurable ROI. If you get it wrong, you don’t just risk a model failure; you risk a governance failure, a reporting failure and, ultimately, a leadership failure, because accountability is moving up the chain.
That’s why I’m looking forward to the conversations in London. We need legal, risk, compliance and technology leaders in the same room, working from the same reality: enforcement is here, agentic systems raise the stakes and “trust” is no longer a brand value, it’s an operational requirement. The next era isn’t about whether AI can answer questions. It’s about whether your organization can stand behind those answers, defend the decisions and prove, end to end, that you were in control the whole time.
If your AI can’t be replayed, explained and defended, it isn’t ready for production. And if it isn’t ready for production, it isn’t ready for a regulator.
See you in London on 26 March for my panel session, “Who Controls the Agents? Designing AI for Accountability.”
Philip Miller
Philip Miller serves as an AI Strategist at Progress. He oversees the messaging and strategy for data and AI-related initiatives. A passionate writer, Philip frequently contributes to blogs and lends a hand in presenting and moderating product and community webinars. He is dedicated to advocating for customers and aims to drive innovation and improvement within the Progress AI Platform. Outside of his professional life, Philip is a devoted father of two daughters, a dog enthusiast (with a mini dachshund) and a lifelong learner, always eager to discover something new.