Everyone is complaining about AI pricing right now. Anthropic is tightening usage limits. OpenAI launched a $100/month Pro tier. Third-party tooling economics keep shifting. Users feel like vendors are changing the rules midstream, right as their teams are finally building real workflows around these tools.
The frustration is valid. But most of the conversation is focused on the wrong problem.
The real question isn’t “why is AI getting more expensive?” It’s “why are we burning so many tokens in the first place?”
Here’s the uncomfortable math. Anthropic just hit $30 billion in annualized revenue, surpassing OpenAI’s $25 billion. Model providers are printing money. Meanwhile, Gartner reports that only 28% of enterprise AI use cases in infrastructure and operations fully deliver on their ROI expectations. Research from WRITER’s 2026 Enterprise AI Adoption survey found that 96% of organizations are deploying AI agents, but only 23% see significant ROI from them, while 79% face substantial adoption challenges overall.
That asymmetry should stop every enterprise leader in their tracks. Capability investment is running far ahead of the context and governance infrastructure needed to turn that capability into production value. The models are ready. Most organizations’ data layers are not.
This is what the pricing debate is actually about, even if most people don’t realize it yet. AI pricing isn’t broken. It’s reflecting a market where usage is expanding faster than the architecture supporting it.
The largest line item in enterprise AI spend isn’t model access. It’s waste.
Every time you dump an unstructured 50-page document into a prompt and ask a model to “find the relevant parts,” you’re paying the model to do work your data architecture should have handled before the prompt was ever sent. You’re asking a frontier model to be your search engine, your classifier, your policy filter, and your answer generator, all in one expensive inference call.
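The upstream curation step described above can be sketched in a few lines. This is a deliberately naive keyword-overlap filter standing in for real semantic retrieval; the function name, scoring, and character budget are illustrative assumptions, not any product's actual API. The point is the shape of the step: select before you send.

```python
# Sketch: trim a long document to only the chunks relevant to the query,
# under a hard context budget, BEFORE anything is sent to the model.
# A production system would score chunks with embeddings; the keyword
# overlap here is a stand-in to show where the step sits, not how to score.

def select_context(chunks: list[str], query: str, budget_chars: int) -> str:
    """Keep the highest-overlap chunks that fit within the budget."""
    terms = set(query.lower().split())
    # Rank chunks by how many query terms they share.
    ranked = sorted(
        chunks,
        key=lambda c: len(terms & set(c.lower().split())),
        reverse=True,
    )
    picked, used = [], 0
    for chunk in ranked:
        if used + len(chunk) > budget_chars:
            break
        picked.append(chunk)
        used += len(chunk)
    return "\n".join(picked)
```

With a 50-page document split into chunks, only the few that actually match the question reach the prompt; everything else never costs a token.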
Put real numbers on it. GPT-4o charges roughly $2.50 per million input tokens. Claude 3.5 Sonnet runs around $3.00. That sounds manageable until you consider what enterprise usage looks like: thousands of queries per day, each one stuffing context windows with raw, unstructured, un-enriched content because nothing upstream is curating what the model actually needs to see.
A single poorly architected RAG pipeline processing 1,000 queries a day with 8,000 tokens of padded context per query burns through 8 million input tokens daily. That’s $20–$24 per day on input alone, for one workflow. Scale that across dozens of AI-powered processes, and you’re looking at six figures annually in token costs that better architecture would have prevented.
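The arithmetic above is easy to verify. This back-of-envelope model uses the article's illustrative figures (1,000 queries/day, 8,000 padded tokens per query, $2.50-$3.00 per million input tokens); the 30-workflow annualization is an assumption for scale, not a measured deployment.

```python
# Back-of-envelope input-token cost for one padded RAG workflow.
# All prices and volumes are the article's illustrative figures.

def daily_input_cost(queries_per_day: int, tokens_per_query: int,
                     price_per_million: float) -> float:
    """Dollars spent per day on input tokens for a single workflow."""
    tokens = queries_per_day * tokens_per_query
    return tokens / 1_000_000 * price_per_million

low = daily_input_cost(1_000, 8_000, 2.50)   # ~GPT-4o input rate
high = daily_input_cost(1_000, 8_000, 3.00)  # ~Claude 3.5 Sonnet input rate
print(f"${low:.2f}-${high:.2f} per day")     # $20.00-$24.00 per day

# Across a hypothetical 30 such workflows for a year:
annual = high * 30 * 365
print(f"~${annual:,.0f}/year")               # ~$262,800/year
```

That six-figure annual number comes entirely from input tokens, before counting output tokens at their higher per-million rates.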
The fix isn’t a cheaper subscription tier. It’s better context.
Better context consistently outperforms better models in enterprise environments. A smaller, cheaper model with precisely curated context will outperform an expensive frontier model drowning in irrelevant data, and cost a fraction as much. One pharmaceutical customer found that adding a semantic baseline to their retrieval increased correct answers by 73%, with further refinements adding another 102% improvement. Not by upgrading the model. By upgrading the context.
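The cost side of that trade-off can be made concrete. The figures below are hypothetical: an 8,000-token padded context at a frontier-model input rate versus a 1,500-token curated context at a budget-model rate. The specific prices are assumptions for illustration, not a quote for any particular model.

```python
# Illustrative monthly input cost: frontier model with padded context
# vs. a cheaper model with curated context. All figures are assumptions.

def monthly_cost(queries: int, ctx_tokens: int, price_per_million: float) -> float:
    """Input-token spend for a month of queries at a given context size."""
    return queries * ctx_tokens / 1_000_000 * price_per_million

QUERIES_PER_MONTH = 30_000  # ~1,000 queries/day

frontier_padded = monthly_cost(QUERIES_PER_MONTH, 8_000, 3.00)
small_curated = monthly_cost(QUERIES_PER_MONTH, 1_500, 0.25)

print(frontier_padded)  # 720.0
print(small_curated)    # 11.25
```

Under these assumptions the curated pipeline costs roughly 1.5% as much per month, and, as the pharmaceutical example suggests, tighter context tends to raise answer quality at the same time.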
The enterprises getting AI economics right aren’t negotiating volume discounts on API calls. They’re investing in what happens before the API call: semantic enrichment, intelligent retrieval, policy-based filtering, and structured context delivery. They’re treating their context layer as the primary cost lever, because it is.
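Of the levers listed above, policy-based filtering is the easiest to sketch. The class, field names, and three-level clearance model below are hypothetical simplifications; a real governance layer is far richer. The point is placement: policy runs before retrieval results reach the prompt, so disallowed content never spends a token.

```python
# Sketch of a policy gate applied to retrieved chunks before prompt assembly.
# The clearance levels and field names are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    classification: str  # "public", "internal", or "restricted"
    source: str

CLEARANCE_ORDER = {"public": 0, "internal": 1, "restricted": 2}

def policy_filter(chunks: list[Chunk], caller_clearance: str) -> list[Chunk]:
    """Drop chunks the caller is not cleared to see, pre-inference."""
    limit = CLEARANCE_ORDER[caller_clearance]
    return [c for c in chunks if CLEARANCE_ORDER[c.classification] <= limit]
```

Filtering here is both a governance control and a cost control: every chunk the policy removes is context the model never has to read.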
This is where the Progress Data Platform architecture earns its keep. The Progress Data Platform is a neurosymbolic architecture that combines the flexibility of neural AI with the precision and control of symbolic reasoning across every layer of the stack. The models bring probabilistic power. PDP brings the semantic meaning, deterministic rules, knowledge relationships, and governed workflows that keep that power grounded, efficient, and trustworthy.
Here’s how each layer directly reduces token waste and AI cost inflation:
This is the neurosymbolic advantage in practice. Neural gives power. Symbolic gives control. The Progress Data Platform gives you both, and the enterprise AI economics that follow.
If you’re feeling the pressure of rising AI costs, here are concrete steps worth considering today.
The AI pricing debate reveals something the market needs to hear: most organizations are still treating AI like a novelty rather than infrastructure. They’re optimizing for model access when they should be optimizing for the architecture around the model.
Production-grade enterprise AI isn’t about chasing the latest model release or negotiating a better API rate. It’s about building a governed, context-aware data layer that makes every model interaction more efficient, more accurate, and more defensible. It’s about making AI boring, reliable, predictable, and economically sustainable at scale.
Gartner’s 28% ROI number isn’t a failure of AI capability. It’s a failure of context, governance, and integration. The 72% of projects that stall or fail aren’t using inferior models. They’re sending inferior context to perfectly capable ones.
The enterprises that will thrive as AI pricing matures aren’t the ones with the best subscription deal. They’re the ones who built the context architecture to make every token count.
Explore the Progress Data Platform to see how a governed context architecture can reduce your AI inference costs while improving the quality and trustworthiness of every output.
Read our Make AI Boring whitepaper to learn how governed context architecture helps reduce token waste, improve output quality, and make enterprise AI more reliable, explainable, and cost-effective.
AI Strategist
Philip Miller serves as an AI Strategist at Progress. He oversees the messaging and strategy for data and AI-related initiatives. A passionate writer, Philip frequently contributes to blogs and lends a hand in presenting and moderating product and community webinars. He is dedicated to advocating for customers and aims to drive innovation and improvement within the Progress AI Platform. Outside of his professional life, Philip is a devoted father of two daughters, a dog enthusiast (with a mini dachshund) and a lifelong learner, always eager to discover something new.