The 6 AI Trends That Will Actually Matter Next Year

December 03, 2025 Data & AI, Agentic RAG

Keeping Me Honest—Let’s Look at My 2025 Predictions

Before we begin: I was tasked with a similar article last year, AI Predictions for 2025: Embracing the Future of Human and Machine Collaboration. So, keeping a promise made back then, I asked AI how well I did against the current state of things here in 2025:

  • OpenAI: 8 out of 10
  • Gemini: 9 out of 10
  • Claude: 7.5 out of 10
  • Perplexity: 8 out of 10

Then I took their longer summaries and scores and asked them to combine everything into one summary:

“Accuracy 9/10, Overall 8/10—your 2024 call aged remarkably well: You nailed governance/RegTech, collaboration AI and enterprise search moving from talk to tangible, with the only drag being the “agentic coworker” bet where momentum is real, but ROI and maturity are uneven. 2025 is more pilot-and-integration than full deployment, yet everything you flagged is happening. Near-term wins are hyper-personalization and collaboration; the next frontier is agents, likely scaling around 2027–2028. Most importantly, your focus on data architecture, governance and human-augmenting use cases gave customers a pragmatic roadmap that still resonates.”


However, with AI always being more than a little deferential, I’ll acknowledge I am no Nostradamus, but maybe slightly better than a Magic 8 Ball. Consider that fair warning for what follows.

With All of This and More in Mind, Let’s Take a Look at 2026

If 2023–2024 were the years of pilots and prototypes, 2025–2026 will be about orchestration, governance and scale. The signal across serious research is consistent: adoption is widespread and business impact concentrates where companies redesign workflows, measure outcomes and hard-wire trust and controls into the stack. McKinsey reports that ~80% of companies use generative AI (GenAI), yet most still aren’t seeing material earnings contribution, because scaling practices and operating models lag the hype. That gap is a roadmap that can be leveraged by Frontier Firms and individuals looking for an advantage (or many) in today’s AI-powered world.

  1. Agentic AI Orchestration: People Become “Managers of Agents,” Not Just Prompts
    The center of gravity is shifting from single-shot prompts to multi-agent workflows that plan, call tools, verify and hand off to humans where it counts. McKinsey frames this as “seizing the agentic AI advantage,” arguing that redesigned processes—not model choice—create the most enterprise impact. The countersignal is also useful: Gartner warns of “agent-washing” and predicts >40% of agentic projects will be scrapped by 2027 for lack of clear value. Translation: Real orchestration needs robust task design, observability and outcome KPIs, not a zoo of bots. Build agent networks around measurable business objectives (turnaround time, accuracy, risk, revenue lift, etc.) and give humans the escalations and oversight dashboards they need.

  2. AI Coding: From Assistance to Software Supply-Chain Leverage
    Coding copilots are quickly becoming table stakes, but the next horizon is end-to-end dev workflows, requirements synthesis, test generation, secure refactoring and compliance evidence. GitHub’s enterprise research with Accenture found significant time savings and satisfaction gains, and broader industry reporting points to sustained productivity lifts as adoption deepens. Expect Site Reliability Engineering (SRE) and data-engineering agents to join the party, tightening feedback loops between app code, data products and infrastructure. The differentiator isn’t “Who uses a copilot?,” but who wires it into governed repositories, policies and automation to bolster quality, traceability and security posture.
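One way to “wire it in” is a pre-merge gate that refuses AI-assisted changes lacking a human reviewer, passing tests or an SBOM artifact. This is a hypothetical sketch; the commit metadata fields are assumptions, not a real CI system’s API:

```python
# Hypothetical pre-merge gate: AI-assisted commits must carry review
# metadata and pass the test/SBOM steps before they can merge.
def gate_commit(commit: dict) -> list[str]:
    failures = []
    if commit.get("ai_assisted") and not commit.get("human_reviewer"):
        failures.append("AI-assisted change lacks a named human reviewer")
    if not commit.get("tests_passed"):
        failures.append("test suite did not pass")
    if not commit.get("sbom_generated"):
        failures.append("no SBOM artifact attached")
    return failures  # empty list means the change may merge
```

The point is traceability: every AI-written change leaves the same evidence trail as a human-written one.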

  3. Scaling Valuable AI: From Scattered Wins to Rewired Operations
    The enterprises that break out next year will operationalize three disciplines: (a) productizing AI use cases with clear owners and SLAs; (b) connecting unstructured and structured knowledge so agents operate in context; and (c) making governance a feature that accelerates—not slows—delivery. McKinsey’s 2025 State of AI survey shows adoption is high, but scaling practices (KPIs, roadmaps, robust data foundations, etc.) are still rare; organizations that do adopt them capture more value. That means investing in retrieval and semantics, lineage and policy enforcement and “platform-not-project” thinking in practice, so every new assistant compounds prior knowledge rather than spawning bespoke silos.
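A small illustration of “retrieval with lineage and policy enforcement”: every retrieved chunk carries a source URI and policy tags, so downstream agents (and auditors) can trace answers to sources and only see what they are allowed to see. Keyword matching stands in for real semantic search, and the schema is a hypothetical example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    text: str
    source_uri: str        # lineage: where the content came from
    policy_tags: tuple     # e.g. ("public",) or ("pii", "internal")

def retrieve(query: str, index: list[Chunk], allowed_tags: set[str]) -> list[Chunk]:
    # Naive keyword match stands in for vector/semantic search;
    # the policy filter is the point: agents only see permitted chunks.
    hits = [c for c in index if query.lower() in c.text.lower()]
    return [c for c in hits if set(c.policy_tags) <= allowed_tags]
```

Because lineage rides along with the text, any agent consuming these chunks can cite its sources, and any audit can replay why an answer was produced.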

  4. Security, Governance & Controls: Hardening as Regulation Bites
    Regulatory clarity is no longer theoretical. The EU AI Act’s obligations for general-purpose models begin applying from August 2, 2025 (with earlier and later milestones for different actors), and the Commission has confirmed it isn’t pausing the timeline. In the U.S., NIST’s AI Risk Management Framework (RMF) and the Generative AI Profile now provide a concrete backbone for enterprise controls, and federal guidance (OMB M-24-10) raises the bar for risk management, procurement and rights-impacting use. For enterprises, this compresses the window to operationalize model cards, evaluations, incident reporting, data-handling rules and human-in-the-loop controls, built directly into the pipelines agents use. Compliance velocity becomes a competitive edge.
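Built “directly into the pipelines,” such controls can look like a release gate in CI that blocks deployment until required evidence exists. The control names below are illustrative, loosely inspired by NIST AI RMF themes rather than an official checklist:

```python
# Hypothetical policy-as-code gate run in CI before an agent/model ships.
REQUIRED_EVIDENCE = {
    "model_card": "documented purpose, data and limitations",
    "eval_results": "quality/bias/safety evals above thresholds",
    "incident_playbook": "who responds, how, and within what SLA",
    "pii_redaction": "redaction enabled on all ingest connectors",
}

def release_gate(evidence: dict[str, bool]) -> tuple[bool, list[str]]:
    # Returns (approved?, list of missing controls).
    missing = [name for name in REQUIRED_EVIDENCE if not evidence.get(name)]
    return (not missing, missing)
```

Treating the checklist as code is what makes “compliance velocity” possible: the same gate runs on every release, and an auditor reads the pipeline instead of a slide deck.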

  5. Agentic Browsing: Entering the Enterprise with Low-Risk, High-Value Pilots
    The browser itself is becoming an agentic workspace. Perplexity launched Comet, a question-centric browser, and OpenAI introduced ChatGPT Atlas, a browser with an “agent mode” that can navigate, summarize, compare and act on the web. For businesses, this opens up curated pilots—market scans with source trails, vendor due-diligence drafts with citations, compliance watch-lists and structured research notebooks—all bound by policy, sandboxed data and red-teaming to prevent leakage. Treat agentic browsing like any third-party data feed: set scopes, log actions and verify outputs before they touch regulated workflows.

  6. MCP & A2A: From Plugins to an Interoperable Agent Fabric
    The Model Context Protocol (MCP) turns brittle, one-off integrations into standard, policy-aware connections, allowing assistants to securely reach apps, data and tools with least-privilege access and full auditability. Agent-to-Agent (A2A) handshakes then let multiple assistants delegate, verify and coordinate tasks across vendors and teams. Start with bounded, low-risk workflows (clear owners, logs, eval or quality gates, human escalation, etc.) and you’ll evolve from isolated bots to a composable, governed mesh of agents that compounds value as you add use cases.
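The least-privilege idea behind MCP-style connectors can be illustrated in a few lines. Note this is a toy sketch of the concept—per-agent tool grants plus an audit log—not the actual MCP SDK or wire protocol:

```python
# Toy sketch of least-privilege tool access with auditing.
AUDIT = []

TOOLS = {
    "search_docs": lambda q: f"results for {q}",
    "send_email": lambda to: f"sent to {to}",
}

GRANTS = {"research_agent": {"search_docs"}}  # no email rights

def call_tool(agent: str, tool: str, arg: str) -> str:
    allowed = tool in GRANTS.get(agent, set())
    AUDIT.append({"agent": agent, "tool": tool, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{agent} may not call {tool}")
    return TOOLS[tool](arg)
```

The grant table is the policy, and the audit list is the evidence; in a real fabric both would live in governed infrastructure rather than module-level variables.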

What This Means for Leaders (and How to Start Preparing Now!)

  • Design for “people managing agents.” Define roles like Agent Ops Lead and AI Product Owner; give them runbooks, approval steps and dashboards tied to business KPIs. Microsoft’s Work Trend Index shows leaders are already planning for AI coaching and adoption roles, so be prepared by codifying that into your org chart.
  • Elevate your data foundation from storage to context. Multi-modal retrieval, semantic enrichment and lineage are the difference between flashy demos and dependable decisions. This is the backbone that lets multiple agents collaborate on the same truth across search, analytics and apps. (McKinsey’s “rewiring” guidance is explicit that KPI tracking and roadmaps matter as much as models.)
  • Shift governance left. Implement NIST’s AI RMF/Generative AI Profile controls, including policy-aware connectors, PII redaction, eval or quality gates, bias/safety tests and incident response playbooks, as code. This is how you move faster and stay compliant as the EU AI Act deadlines near.
  • Productize AI coding. Don’t just “allow” copilots; instrument them. Track quality, velocity and security findings per repository; standardize prompts/policies; and integrate test and Software Bill of Materials (SBOM) generation so AI-written code doesn’t widen your attack surface. GitHub’s enterprise research is a strong baseline for this business case.
  • Pilot agentic browsing where the risk-reward makes sense. Start with competitive intel digests, procurement pre-reads or research watchlists, and implement guardrails and audits before expanding to sensitive domains. The tech is here, but your controls determine whether it helps or hurts in each case.
  • Don’t let your people hold you back from pushing the technology. That doesn’t mean firing them. People like consistency, patterns and predictability, and the new AI world will lack all three for the next few years. So provide training and a safe place for experimentation: the “fail fast” mantra without the blame. People will stumble and things won’t always work at first, but those willing to try, given the right environment and tools, are the ones who will ultimately deliver value.

From Pilots to Platforms: Scaling Agentic, Governed AI That Delivers

The next year belongs to organizations that make AI boring in the best possible way. Orchestration, coding assistants, scaled use cases, hardened controls, agentic browsing and MCP/A2A interoperability all become durable advantages when they run on governed, contextual data with policy-as-code, lineage and human oversight built in. Treat every assistant like a product with an owner, Service Level Agreements (SLAs), eval or quality gates and a clear escalation path; wire agents to the same source of truth through secure, auditable connectors; and instrument outcomes so quality, risk and time to value are visible and continuously improving. Do this, and pilots turn into platforms, compliance becomes a force multiplier, and agents evolve from clever demos to accountable teammates. It’s an optimistic path because it’s pragmatic—ship quick wins now, compound them on a trusted foundation and scale AI that reliably supports, innovates and transforms the business.

And Now Just One More Thing...

No matter if, or more likely when, the current AI bubble bursts, we’re not going back to a pre-AI world. History shows that speculative excess doesn’t erase transformative technology; it just burns off the froth. The railroad boom collapsed, yet the tracks stayed and reshaped economies for a century. The dot-com crash wiped out bad business models, but the internet’s core infrastructure kept expanding and became the backbone of modern life. Electricity went through the same pattern—early hype, massive failures and quiet ubiquity.

AI will follow that arc. While a market correction may right-size valuations and eliminate unsustainable bets, the infrastructure (GPUs, data platforms, orchestration stacks, etc.), the capabilities (reasoning, summarization, coding, etc.) and the habits (AI-driven workflows and copilots) are already embedded. The crash will clear out noise, not the technology. The real question isn’t whether AI survives; it’s which organizations keep building during the downturn and emerge running on AI-native processes while everyone else is still recovering. Building the boring but valuable AI—that is what will fuel the next three-quarters of this century and beyond.

Philip Miller

Philip Miller serves as an AI Strategist at Progress. He oversees the messaging and strategy for data and AI-related initiatives. A passionate writer, Philip frequently contributes to blogs and lends a hand in presenting and moderating product and community webinars. He is dedicated to advocating for customers and aims to drive innovation and improvement within the Progress AI Platform. Outside of his professional life, Philip is a devoted father of two daughters, a dog enthusiast (with a mini dachshund) and a lifelong learner, always eager to discover something new.
