If you build enterprise business applications for a living, you already know what “real software” looks like: deterministic logic, auditable transactions, predictable performance, and reliable data.
So, when you hear “AI will change everything”, it’s normal to be skeptical and a little nervous. This blog series is here to cut through the hype and give you a practical mental model for where AI fits in OpenEdge solutions—without ever handing your business logic (or your data integrity) over to a probabilistic black box.
This post is a practical, safety-first introduction: not a deep dive into AI model architectures, but a guide to identifying low-risk, high-value places to apply AI without compromising data integrity.
What is Artificial Intelligence (AI)?
The term Artificial Intelligence is broad, but the current focus is the category called Generative AI (GenAI): software that can read, write, summarize, and reason over language. This class of AI can summarize long documents, explain code, answer questions, draft content, and transform unstructured information into something more useful. Under the hood, these systems are powered by Large Language Models, or LLMs.
This post introduces some key AI concepts and fits them into how OpenEdge developers already think about systems. As a first step, let’s align on some key terminology used in AI systems.
Artificial Intelligence (AI)
A broad field of computing focused on building systems that can perform tasks that normally require human intelligence. These tasks include understanding language, recognizing patterns, learning from experience, solving problems, and making decisions. Instead of being explicitly programmed with step‑by‑step instructions for every situation, AI systems use data and algorithms to adapt their behavior based on examples. In practice, AI powers things like fraud detection, speech recognition, and predictive analytics, helping software systems make informed decisions or assist users more effectively, while still operating within defined goals and constraints set by humans.
Machine Learning (ML)
A subset of artificial intelligence that focuses on building systems that learn from data instead of being explicitly programmed with fixed rules. In machine learning, models are trained on examples so they can identify patterns and make predictions or decisions when presented with new data. Over time, as they are exposed to more data, these systems can improve their results without manual reprogramming. For example, a spam filter learns to identify junk mail based on thousands of emails identified as spam.
Generative Artificial Intelligence (GenAI)
A class of machine learning systems that can create new content—such as text, code, images, or documents—based on what it has learned from existing data. Instead of following fixed rules or just analyzing information, GenAI looks for patterns in large datasets and uses those patterns to produce realistic, human‑like outputs. When given a prompt, it predicts what output best fits the request based on probability, which means the results are often useful and coherent but not always perfectly accurate, and they require human oversight to validate correctness and reliability.
Practically speaking, GenAI generates output by predicting what text (or code) is most likely to come next based on your prompt and its training data. That makes it great at language-heavy work—drafting, summarizing, explaining, and extracting key details—but it also means the output can be confidently wrong. Treat it as an assistant, and validate anything that matters.
Large Language Model (LLM)
A type of GenAI that is designed to understand and produce human‑like text. It is trained on large collections of written material—such as books, articles, documentation, and code—so it can learn how language is structured and how ideas are typically expressed. An LLM generates a response by predicting what words or tokens are most likely to come next based on the context, rather than retrieving a fixed answer. This allows LLMs to perform tasks like answering questions, summarizing documents, writing code, explaining concepts, and carrying on conversations. While LLMs are very good at producing fluent and relevant text, their outputs are based on patterns and probability, so they can sometimes produce confident but incorrect or incomplete information and should be used with appropriate validation in technical or business systems.
Prompt
The input you give to a large language model to guide what it should produce. It can be a question, an instruction, an example, or a combination of these, written in natural language. The prompt provides context and constraints, helping the model understand the task, the tone, and the level of detail you want in the response. Because an LLM generates output based on patterns and probabilities, small changes in a prompt—such as adding clarification, examples, or specific requirements—can significantly affect the quality and relevance of the result. In practice, prompts are how developers and users “program” GenAI systems, shaping their behavior without writing traditional code.
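To make that concrete, here is a minimal sketch (in Python, since this series has no ABL snippets yet) of how a prompt typically combines an instruction, context, and constraints. The template, field names, and the sample ABL comment are purely illustrative assumptions, not a standard format:

```python
def build_prompt(instruction: str, context: str, constraints: list[str]) -> str:
    """Assemble a prompt from an instruction, supporting context, and
    explicit constraints. Illustrative only: real prompt formats vary
    by model and use case."""
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Task: {instruction}\n\n"
        f"Context:\n{context}\n\n"
        f"Constraints:\n{constraint_text}"
    )

# Hypothetical usage: asking a model to explain a legacy procedure.
prompt = build_prompt(
    instruction="Summarize the following ABL procedure for a new developer.",
    context="/* FIND FIRST customer WHERE custnum = 42 NO-LOCK. ... */",
    constraints=[
        "Keep it under 100 words.",
        "Do not invent behavior not shown in the code.",
    ],
)
print(prompt)
```

Notice how the constraints are spelled out rather than implied; adding or removing a single constraint line is exactly the kind of small prompt change that can significantly shift the model's output.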
Why AI Is Becoming Relevant Now
AI isn’t new. We’ve all dealt with “smart” systems for years—think about chatbots and automated phone systems that can be very frustrating.
And no—enterprise software didn’t suddenly decide data integrity is optional, and developers didn’t become obsolete overnight.
What has changed is that modern models are finally useful in day-to-day work: they can summarize, classify, extract, and draft text with surprisingly high quality. They’re also very good at working with unstructured information at scale. For OpenEdge teams, that matters because so much effort lives in language-heavy tasks—reading legacy ABL, digging through logs, interpreting requirements, and keeping documentation current.
A Mental Model That Actually Holds Up
For OpenEdge developers, the most important thing to know is that modern AI systems do not actually understand your application. They don’t know your business rules or what data is authoritative—they generate responses based on learned language patterns.
Rule of thumb: let AI suggest and summarize; let OpenEdge validate, enforce rules, and commit. If a mistake would break data integrity, auditing, or transactions, AI shouldn’t be the decision-maker.
For OpenEdge developers, there are two distinct ways that AI can be used: development and runtime.
Development: use AI to boost productivity and improve code quality while you build and maintain OpenEdge systems—explaining legacy ABL, drafting documentation, scanning logs, generating test ideas, and tightening error handling and security patterns.
Runtime: use AI alongside your app to interpret unstructured input (questions, notes, emails) and help users read and understand authoritative results. OpenEdge should still validate inputs, enforce business rules, and perform all updates and commits deterministically.
AI can help interpret intent (for example, “show overdue invoices for Acme”); OpenEdge turns that into validated, parameterized inputs, runs the query deterministically, and then AI summarizes the results. Avoid patterns where AI invents filters, infers rules, or triggers updates or commits.
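That division of labor can be sketched as follows (Python for brevity; the whitelist, field names, and status values are hypothetical). The AI layer proposes filters as structured data, and deterministic code accepts only known fields before anything reaches the query:

```python
# Hypothetical whitelist of filters the application allows the AI to propose.
ALLOWED_FILTERS = {"customer_name": str, "status": str, "overdue": bool}
ALLOWED_STATUSES = {"open", "overdue", "paid"}

def validate_filters(proposed: dict) -> dict:
    """Accept only known fields with the expected types and values.
    Anything the AI 'invented' is rejected rather than silently used."""
    validated = {}
    for field, value in proposed.items():
        expected_type = ALLOWED_FILTERS.get(field)
        if expected_type is None or not isinstance(value, expected_type):
            raise ValueError(f"Rejected filter: {field!r}")
        if field == "status" and value not in ALLOWED_STATUSES:
            raise ValueError(f"Unknown status: {value!r}")
        validated[field] = value
    return validated

# Suppose the AI interpreted "show overdue invoices for Acme" as:
proposed = {"customer_name": "Acme", "overdue": True}
params = validate_filters(proposed)  # safe, parameterized inputs
# The deterministic query layer (e.g., an OpenEdge service) runs with `params`;
# the AI never touches the query text or the commit.
```

The design point is that a hallucinated or malicious filter fails loudly at the validation boundary instead of being interpolated into a query.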
AI Relevance for OpenEdge
As mentioned earlier, AI is most useful in two distinct ways: development and runtime. During development, AI can act as a productivity booster for engineers by helping generate ABL code snippets, draft unit tests, create or update documentation, explain legacy procedures, and even suggest how existing business rules should be applied or refactored. Used this way, AI accelerates understanding and reduces manual effort, while developers remain fully in control of data integrity and design decisions. AI can also improve code quality by acting as a continuous, context‑aware reviewer and assistant throughout the development lifecycle rather than as a replacement for developer judgment.
At runtime, AI plays a different role: it can sit alongside OpenEdge applications to help access and interpret structured data more effectively — supporting natural‑language queries, summarizing query results, or retrieving relevant records based on intent rather than rigid filters. AI lives outside your core ABL logic, usually as an external service you call through PASOE or another integration layer. It can suggest, explain, or summarize, but OpenEdge still validates inputs, enforces business rules, and commits transactions.
Concrete Examples for OpenEdge Developers
- Dev-time: explain a legacy ABL procedure, identify risky FIND FIRST patterns, suggest stronger error handling, or draft comments/docstrings for internal procedures and classes.
- Dev-time: generate a first-pass unit-test checklist from a spec (“what edge cases should we test?”) and map it to your existing harness/framework.
- Ops: summarize OpenEdge/PASOE logs (startup, agent, stack traces) into “probable cause + next checks” while linking back to the exact log lines used.
- Runtime (read-only): translate natural language into validated parameters (dates, customer IDs, status codes), then run the query deterministically—for example, “get the top 10 revenue customers for the past 12 months.”
- Runtime (user experience): summarize result sets returned by OpenEdge services into a short narrative (for example, “top 3 drivers of backorders this week”) without changing any underlying data.
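For the read-only runtime case above, the validation step is worth spelling out. Here is a hedged sketch (Python; the parameter names, date format, and status codes are assumptions for illustration) of coercing and checking parameters the AI extracted from free text before any query runs:

```python
from datetime import date

VALID_STATUS_CODES = {"ACTIVE", "HOLD", "CLOSED"}  # hypothetical codes

def validate_params(raw: dict) -> dict:
    """Coerce and check parameters extracted from natural language.
    Bad values fail loudly here instead of reaching the query."""
    start = date.fromisoformat(raw["start_date"])  # raises on malformed dates
    end = date.fromisoformat(raw["end_date"])
    if start > end:
        raise ValueError("start_date must not be after end_date")
    cust_id = int(raw["customer_id"])              # must be numeric
    status = raw["status"].upper()
    if status not in VALID_STATUS_CODES:
        raise ValueError(f"Unknown status code: {status!r}")
    return {"start_date": start, "end_date": end,
            "customer_id": cust_id, "status": status}

# e.g., values an AI layer extracted from a user's question:
params = validate_params({
    "start_date": "2024-01-01", "end_date": "2024-12-31",
    "customer_id": "1042", "status": "active",
})
```

Only after this gate does the deterministic OpenEdge query run; the AI's job ends at suggesting values, and the application's job is to refuse anything that does not parse.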
In all cases, AI augments your work rather than replacing it: OpenEdge remains the source of truth for rules and data, while AI helps developers and users interact with that system more naturally and efficiently.
Summary
This post introduced the core GenAI terms and a safety-first way to think about using AI with OpenEdge: use AI for language-heavy assistance (explain, summarize, draft, extract), and keep OpenEdge responsible for validation, business rules, and transactions. In Post 2, we’ll take a deeper dive into the main components of GenAI and walk through practical applications you can adopt immediately—plus guardrails to keep your systems reliable.
Shelley Chase
Shelley Chase, a Software Fellow with Progress Software for over 20 years, takes a whole-product view of the company’s core product, OpenEdge. Her technical skills and customer-driven focus drive the architectural direction of the product. Shelley’s expertise spans system architecture, object-oriented programming, and cloud deployment technologies. Her passion is to provide a well-architected product with an excellent user experience, and she works with engineers, product managers, and services to guarantee success. Shelley has a patent for her work on “Alternate Presentation Types for Human Workflow Activities.”