Welcome to the third post on AI for OpenEdge developers. The first and second posts explored the modern concepts of GenAI and how to think about them from the OpenEdge developer's point of view. The purpose and behavior of each component were discussed.
In this post, we will look at AI workflows: chains of steps in which one or more steps are handled by a large language model (LLM). Workflows support multi-step processes that are initiated by a single prompt. Real-world tasks require multiple steps because they involve external data, conditional paths, validation, error handling, and integration with other systems. A single-step approach hits a limit when the problem gets even a little complex.
Moving to a workflow means you are no longer just having AI answer a question; instead, you are defining a process with defined inputs, expected outputs, and clear integration points. For an OpenEdge developer, that framing should feel natural. You already know how to think about systems and process flows.
Introduction: What Is an AI Workflow?
An AI workflow is a structured sequence of steps in which AI runs one or more of them. You design the flow — the AI just executes its pieces. The workflow must be predictable, testable, and repeatable.
A big part of using AI well is deciding which problems are worth handing off to a model, how to structure those handoffs reliably, and how to integrate the results back into the tools used in your development and/or runtime systems. Remember, AI models are probabilistic. They don't always return what you expect — even when you've been explicit in your prompt. Robust validation at each step is the single most impactful thing you can do to make a workflow reliable. As discussed in earlier posts, AI is not the place to enforce business rules or perform transactional updates. Use AI to analyze existing data and accomplish tasks that are easy to execute and validate without changing data.
Translating AI capabilities into a reliable workflow takes planning. Each step in a workflow performs one well-defined action — classify this text, query these fields, generate this summary. You can feed it a known input and check that the output matches your expectations. Outputs should always be validated before passing them to the next step. While the model suggests actions, execution remains under the control of the AI client, which enforces the constraints, limits, and safety controls defined by the application.
Workflows can execute in one of two ways: explicit ordering, or agentic execution where the model can decide the next action based on the result of the previous action. An explicit workflow predefines a strict ordering of tasks. An agentic workflow defines a set of allowed tasks (and tools) and lets the model choose among them at runtime — but the application still stays in control by enforcing guardrails such as allowed actions, maximum iterations, timeouts, output schemas, and safety checks.
Understand that steps in a workflow can return something wrong, off-format, or nonsensical. This is just the nature of probabilistic systems, and your workflow needs to handle it gracefully. Build retry logic with a maximum attempt count, API timeouts, and clear fallback behavior. If the task cannot provide a trusted result, it might be necessary to route it for human review. Define what "good enough" looks like and what should trigger an alert.
| Production guardrails checklist | |
|---|---|
| Goal | Make each step predictable, bounded, observable, and safe to operate in production. |
| Checklist | Allowed actions only; maximum iteration counts; API timeouts; schema validation on every output; retry logic with an attempt limit; fallback or human review when a result cannot be trusted; alerts when thresholds are exceeded. |
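To make that checklist concrete, here is a minimal Python sketch (workflows are typically written outside ABL, as discussed later in this post). The `call_llm` helper is a hypothetical stand-in for whichever provider SDK you use; the attempt limit, timeout, and fallback path are illustrative values that mirror the checklist above.

```python
import time

MAX_ATTEMPTS = 3        # hard cap: never loop forever on a flaky step
REQUEST_TIMEOUT_S = 30  # passed through to the underlying HTTP call

def call_llm(prompt: str, timeout: float | None = None) -> str:
    """Hypothetical stand-in for your provider SDK; replace with a real client."""
    raise NotImplementedError

def run_step(prompt: str, validate) -> str | None:
    """Run one workflow step with bounded retries and output validation."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            result = call_llm(prompt, timeout=REQUEST_TIMEOUT_S)
            if validate(result):       # never trust raw model output
                return result
        except TimeoutError:
            pass                       # a timeout counts as a failed attempt
        time.sleep(2 ** attempt)       # back off before retrying
    return None                        # caller routes None to human review / alerting
```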
Types of AI Workflows
There are four core workflow patterns in AI. All of these can be built for explicit (non-agentic) and agentic workflows. In practice, most production workflows combine these patterns.
Sequential — Linear pipeline
The simplest and most common pattern. Each step runs in a predefined order, the output of one becomes the input of the next. If you've ever written a series of chained procedures in ABL or a multi-step batch process, this will feel familiar.
Example: A customer submits a complaint via a web form. Your OpenEdge AppServer receives it. The workflow defines three distinct actions. First, send the complaint text to the AI and receive back a classification (billing, technical, general). Next, send the classified complaint to a second AI call that drafts a response template. Lastly, return the classification and response for further processing by the user.
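A minimal sketch of that three-action pipeline, reusing the hypothetical `call_llm` helper from the earlier sketch. Note that each output is checked before it feeds the next step.

```python
# Uses the hypothetical call_llm(prompt) helper sketched earlier.
VALID_CATEGORIES = {"billing", "technical", "general"}

def handle_complaint(complaint_text: str) -> dict:
    # Step 1: classify the complaint into one of three known categories.
    category = call_llm(
        "Classify this complaint as billing, technical, or general. "
        f"Reply with one word only.\n\n{complaint_text}"
    ).strip().lower()
    if category not in VALID_CATEGORIES:
        raise ValueError(f"Unexpected classification: {category}")

    # Step 2: a second AI call drafts a response template for that category.
    draft = call_llm(
        f"Draft a short response template for a {category} complaint:\n\n{complaint_text}"
    )

    # Step 3: return both for further processing by the user.
    return {"category": category, "draft": draft}
```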
Branching — Conditional routing
Two or more paths are defined and executed based on some predefined criteria. Instead of always following the same sequence, the workflow makes a decision and goes in one of several directions. That decision can be made by your code (a simple if/else based on the output) or by the LLM itself (asking the model to classify, evaluate, or route before continuing).
Example: A customer support workflow analyzes an incoming message and branches into one of three paths: answer automatically if it's a simple FAQ, escalate to a human if it's a complaint, or trigger a refund process if it's a billing issue. Branching is what gives a workflow intelligence: instead of treating every input the same way, the workflow responds to what each input actually is.
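A sketch of that routing decision, where the LLM classifies and plain application code branches. The handler names (`answer_faq`, `escalate_to_human`, `start_refund`) are hypothetical placeholders for your own procedures.

```python
# Uses the hypothetical call_llm(prompt) helper sketched earlier.
def route_message(message: str) -> str:
    route = call_llm(
        "Classify this support message as exactly one of: faq, complaint, billing.\n\n"
        + message
    ).strip().lower()

    if route == "faq":
        return answer_faq(message)         # simple FAQ: answer automatically
    if route == "complaint":
        return escalate_to_human(message)  # complaint: a person takes over
    if route == "billing":
        return start_refund(message)       # billing issue: trigger refund process
    return escalate_to_human(message)      # unknown route: fail safe to a person
```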
Looping — Repeat until a goal is met
Workflow logic repeats a set of tasks until a quality condition is satisfied, a task is complete, or a maximum iteration count is reached. Instead of calling the model once and stopping, the workflow runs in a loop — the LLM thinks about what to do next, takes an action, observes the result, and then thinks again. This repeats until the LLM decides the goal is complete. This is common in agentic workflows where the AI adapts at every pass — it can manage an unknown number of steps, change direction based on what it finds, and work through complex multi-step goals that you couldn't hardcode upfront. The key rule when building one is to always set a maximum number of iterations, because every pass costs tokens, and a loop with no exit condition is a runaway process waiting to happen.
Example: Your workflow asks the AI to generate an ABL code stub based on a natural language spec. After generation, a second model call reviews it against a checklist: Does it handle empty inputs? Does it use the correct table names? Does it follow your team's naming conventions? If not, the workflow loops — feeding the feedback back as context for a revised attempt. It stops when the review passes or after a defined number of tries is reached.
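A sketch of that generate-review loop with the mandatory iteration cap. The checklist prompt and the PASS convention are illustrative assumptions.

```python
# Uses the hypothetical call_llm(prompt) helper sketched earlier.
MAX_ITERATIONS = 5  # every pass costs tokens; always bound the loop

def generate_with_review(spec: str) -> str:
    code = call_llm(f"Generate an ABL code stub for this spec:\n\n{spec}")
    for _ in range(MAX_ITERATIONS):
        review = call_llm(
            "Review this ABL code against the checklist: handles empty inputs, "
            "uses correct table names, follows naming conventions. "
            f"Reply PASS if all checks pass, otherwise list the problems.\n\n{code}"
        )
        if review.strip().upper().startswith("PASS"):
            return code
        # Feed the review back as context for a revised attempt.
        code = call_llm(f"Revise this ABL code to fix these issues:\n{review}\n\n{code}")
    raise RuntimeError("Review did not pass within the iteration limit")
```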
Parallel — Concurrent AI calls, merged results
Multiple AI calls run at the same time instead of waiting for each other to finish. Rather than processing things one by one, you fan out several tasks simultaneously and then bring the results back together when they're all done. Parallel workflows are especially useful when tasks are independent of each other and order doesn't matter. The main challenge is merging the results correctly once everything finishes. This is one way to keep latency manageable when a task can be decomposed into independent sub-tasks.
Example: You need to summarize five documents and instead of summarizing them one after another, you fire off all five LLM calls at once and collect the results together — cutting the total time to roughly the cost of a single call. Your workflow collects all five results and sends them to a final aggregation task that synthesizes the overall summary. What would take minutes sequentially takes seconds in parallel.
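A sketch of that fan-out/fan-in using Python's standard thread pool, which works here because each call spends its time waiting on the network. The aggregation call runs only after all five results are collected.

```python
from concurrent.futures import ThreadPoolExecutor

# Uses the hypothetical call_llm(prompt) helper sketched earlier.
def summarize_all(documents: list[str]) -> str:
    # Fan out: one LLM call per document, all in flight at once.
    with ThreadPoolExecutor(max_workers=5) as pool:
        summaries = list(pool.map(
            lambda doc: call_llm(f"Summarize this document:\n\n{doc}"),
            documents,
        ))
    # Fan in: a final aggregation task synthesizes the overall summary.
    return call_llm(
        "Combine these summaries into one overall summary:\n\n" + "\n---\n".join(summaries)
    )
```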
The Building Blocks of an AI Workflow
Every AI workflow is assembled using the same core components. As an OpenEdge developer, you'll recognize analogues to constructs you already work with every day.
Input — Where data comes from
Input is the trigger and the raw material for your workflow. In an OpenEdge context, this might be:
- An interactive user prompt
- A record change event in your OpenEdge database (a new order, a flagged account, an updated case). Note: always complete a transaction before running a workflow action.
- An inbound REST request hitting your application’s REST API
- A file dropped into a monitored directory — a PDF invoice, a CSV export, a scanned document
The source of your input determines how to design the rest of the workflow. Input coming from an interactive user has different latency requirements than input from an overnight batch.
Prompt — The instructions sent to the model
A prompt isn't just a question — it's the complete set of instructions, context, constraints, and format requirements you give the AI model. Think of it less like a function call and more like a detailed work order.
A prompt for an OpenEdge workflow might look like:
"You are a business analyst reviewing customer orders for an ERP system. Below is a JSON object representing a customer order. Extract the customer name, order total, line item count, and any items flagged as backordered. Return the result as a JSON object with these exact keys: customerName, orderTotal, lineItemCount, backorderedItems. If a field cannot be determined, return null for that key."
Notice the specificity. You're telling it what role to take, what data it's receiving, what to extract, and exactly what format to return it in. Vague prompts produce vague results — and in a production workflow, vague results cause downstream failures.
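In code, a work order like this is usually a fixed template plus the runtime data. A minimal sketch, using the prompt above:

```python
import json

PROMPT_TEMPLATE = """You are a business analyst reviewing customer orders for an ERP system.
Below is a JSON object representing a customer order. Extract the customer name,
order total, line item count, and any items flagged as backordered. Return the result
as a JSON object with these exact keys: customerName, orderTotal, lineItemCount,
backorderedItems. If a field cannot be determined, return null for that key.

Order:
{order_json}"""

def build_prompt(order: dict) -> str:
    # Serialize the order so the model sees exactly what your application sees.
    return PROMPT_TEMPLATE.format(order_json=json.dumps(order, indent=2))
```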
Model — The engine that processes and responds
The model is the workhorse of AI; it runs on a server. Different models have different strengths: some are better at understanding formats (text, images, PDFs), some excel at reasoning through complex logic, some are faster and cheaper but less capable. Choosing the right model is a real design decision. Even though the model is doing the work, the AI client is in charge of API keys, retry counts, logging, and managing model/provider differences.
Models can be extended with packaged behaviors that some platforms call skills (terminology varies by vendor). In practice, this usually means a reusable bundle of instructions (prompting), allowed tools/functions, and policies/guardrails for a specific job (for example: “summarize support cases,” “extract invoice fields,” or “review code against a checklist”). Treat these as reusable, versioned building blocks — and still validate outputs the same way you would for any other model call.
Output — What comes back
The output of a model is text — where "text" can be anything: a narrative summary, a structured JSON object, a yes/no classification, a block of generated ABL code, a numbered list of recommendations, an application query.
In a workflow, you always want structured output — typically JSON — so you can parse it and pass it into the next step or send it back to the caller. Most modern models can be instructed to return valid JSON reliably, but you should still treat the response as untrusted input: parse it strictly, validate it against the schema you expect, and retry (with a corrective prompt) if it comes back off format. Think of it like defining a ProDataSet schema before you populate it — you know the shape of what's coming back.
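A sketch of treating that response as untrusted input: parse strictly, check it against the expected keys, and retry once with a corrective prompt if it comes back off format. The key names match the order-extraction prompt shown earlier.

```python
import json

# Uses the hypothetical call_llm(prompt) helper sketched earlier.
EXPECTED_KEYS = {"customerName", "orderTotal", "lineItemCount", "backorderedItems"}

def parse_order_result(raw: str, original_prompt: str) -> dict:
    for attempt in range(2):  # one strict parse plus one corrective retry
        try:
            data = json.loads(raw)
            if isinstance(data, dict) and EXPECTED_KEYS <= data.keys():
                return data  # the shape matches the schema we asked for
        except json.JSONDecodeError:
            pass
        if attempt == 0:
            # Off format: ask again with an explicit corrective instruction.
            raw = call_llm(
                original_prompt + "\n\nReturn ONLY valid JSON with the exact keys requested."
            )
    raise ValueError("Model did not return the expected JSON shape")
```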
Powering an AI Workflow
Tools are external functions or APIs the model can call to take action or retrieve information (e.g., web search, code execution, database query). In this context, “tool” doesn’t mean IDE tooling — it means a callable operation you expose to the workflow (often a REST endpoint or function). A tool can be used by an LLM while a workflow is executing. You define which tools are available by providing a menu of actions and descriptions of what each tool can do. You write the tool’s functional code. The model decides when to call them and with what parameters, and your application validates the request and controls what actually executes.
Always execute a tool outside of your business logic. If you initiate a workflow from within a transaction, the latency can be unacceptable. It is best to run the tool asynchronously, outside of the transaction. For example, if a database record change has an AI action to perform, do not perform it within the record-change transaction. Instead, leverage change events, like CDC, or send the request to a Kafka queue to be run asynchronously.
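A sketch of that decoupling, assuming the kafka-python client; the topic name and payload are illustrative. The record change commits on its own, and a separate consumer picks up the request and runs the workflow.

```python
import json
from kafka import KafkaProducer  # assumes the kafka-python package

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def on_record_changed(table: str, rowid: str) -> None:
    # Called AFTER the database transaction commits (e.g., from a CDC event).
    # The AI workflow runs asynchronously; the transaction never waits on the model.
    producer.send("ai-workflow-requests", {"table": table, "rowid": rowid})
```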
Tools can be used for:
- Getting real-time data. The LLM's training has a cutoff date — tools fix that.
  - get_stock_price() → live market data
  - get_weather() → current conditions
  - get_order_status() → your live database
- Taking action in your systems. Move beyond answers into actual work.
  - create_ticket() → opens a support case
  - send_notification() → alerts a user, sends an e-mail
  - update_record() → writes back to your DB
- Grounding responses in your data. Stop the AI from guessing about your business.
  - search_docs() → finds relevant KB articles
  - lookup_customer() → gets real account details
  - query_inventory() → checks actual stock levels
- Doing math and logic the LLM isn't reliable at.
  - calculate_discount() → precise business rules
  - run_query() → exact data, no hallucination
  - validate_date_range() → deterministic logic
Tools transform an LLM from a simple text generator into something that can perform actual work — reading data, taking actions, and driving real outcomes. Tools are also what make AI responses trustworthy in production. Instead of the model guessing, it fetches, calculates, or acts on real, up-to-date information.
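A sketch of the "menu of actions" pattern described above: each tool gets a name, a description, and a parameter schema, and the application validates every requested call before anything executes. The exact schema shape varies by provider; `db_get_order_status` is a hypothetical placeholder for your own functional code.

```python
# The menu: names, descriptions, and parameters the model can choose from.
TOOLS = {
    "get_order_status": {
        "description": "Look up the current status of an order in the live database.",
        "parameters": {"order_num": "integer"},
        "func": lambda order_num: db_get_order_status(order_num),  # your code
    },
}

def execute_tool_call(name: str, args: dict):
    # The model decides WHAT to call; the application decides WHETHER it runs.
    if name not in TOOLS:
        raise PermissionError(f"Tool not on the menu: {name}")
    spec = TOOLS[name]
    unexpected = set(args) - set(spec["parameters"])
    if unexpected:
        raise ValueError(f"Unexpected arguments: {unexpected}")
    return spec["func"](**args)
```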
AI Frameworks
Commercial, open-source, and low-code frameworks are available to help you build AI workflows. While you can build a workflow without a framework, frameworks handle conversation history, step chaining, tool calls, error retries, and context tracking. They manage the boilerplate work so you can focus on what your workflow actually does.
There are three categories of AI frameworks:
Category 1 — Code-first frameworks. For developers who want full control. You write the logic; the framework manages the orchestration.
- LangChain — the most widely used, huge ecosystem, Python/JS
- Semantic Kernel — Microsoft's answer, built for C# and .NET
- LlamaIndex — optimized for data and document-heavy workflows
- AutoGen / CrewAI — multi-agent collaboration
Category 2 — Visual / Low-code builders. For prototyping fast or enabling non-developers.
- Flowise / Langflow — drag-and-drop workflow builders on top of LangChain
- Copilot Studio — build and customize your own Copilot agents
- n8n / Zapier AI — automation platforms with AI nodes built in
Category 3 — Managed cloud services. For enterprise teams who want scale without infrastructure.
- Azure AI Foundry — Microsoft's enterprise AI platform
- Amazon Bedrock Agents — AWS-native orchestration
- OpenAI Assistants API — hosted agents with built-in memory and tools
Often, framework selection depends on the AI development environment your company has selected. It also depends on the language you are most comfortable using, as workflows are not written in ABL.
AI Workflows and Orchestration
Orchestration is how you manage workflows. A workflow is what runs; orchestration is who's in charge of running it. AI orchestration coordinates multiple workflows, models, agents, and tools working together to accomplish a goal. Every orchestration involves workflows, but not every workflow needs a formal orchestration layer. Start with a workflow — add orchestration when things get complex.
Summary
This post discussed AI workflows and their importance. The workflows you design are what transform a capable model into a reliable, integrated part of your OpenEdge application. As an OpenEdge developer, you know how to design for failure, how to validate data, and how to build software that must work in production day after day. Those instincts don't go away when you add AI to the stack — they become more valuable.
If you're just starting your AI journey, resist the urge to build something ambitious. Start with two steps and one tool. Pick a real problem in your current application — something repetitive, text-heavy, and slightly ambiguous (classification, summarization, extraction). Get it working. Test it with real data. Understand exactly why it produces the results it does. Then, add a third step — branching.
The OpenEdge applications that will create the most value over the next several years won't be the ones that replace everything with AI. They'll be the ones where experienced developers — people who understand the business logic, the data, and the user needs — wired AI in precisely where it adds leverage. That's the opportunity in front of you.
Post four will walk through Real-World AI Workflows for OpenEdge developers and run-time applications.
Shelley Chase
Shelley is a Software Fellow at Progress Software, bringing 30+ years of OpenEdge experience. As a solution architect, she is dedicated to the quality, security and usability of the OpenEdge product set. As a thought leader, Shelley takes a holistic approach to product development, ensuring that every aspect of the OpenEdge product is meticulously crafted while balancing technical precision with a customer-centric outlook. Lately, she has been using AI to simplify her own tasks and bring AI solutions to the OpenEdge product set at both development and runtime.