The Evolution of Application Deployment Part Two: Containers and Orchestration

Posted on June 17, 2019

In part one of this two-part blog, I provided a 30,000-foot view of how application deployment using the cloud can greatly benefit an enterprise’s software rollout strategy. As we wrap up the discussion, we’ll delve into containers and orchestration.

Why Containers?

In business, the “do more with less” philosophy applies to every aspect of an organization, and containers fit that well-worn model nicely. A container’s design and function provide a minimalist approach to application deployment. Unlike virtual machines (VMs), containers do not require a separate operating system for each application, allowing them to run with less memory, CPU and storage. This streamlined approach means admins can deploy two to three times more containers on a server than VMs, and that’s before accounting for the reduced number of operating system licenses.

Containers also offer an application infrastructure that is more dynamic and elastic. Starting an individual container does not involve booting an operating system (reading large volumes of executable code and data from storage and loading them into memory). Another advantage of containers is that they are portable: if you need to move containers from one cloud host to another, you simply download the container images to the new server.
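
To make that portability concrete, here is a minimal sketch of a containerized service described with Docker Compose (one common way to define containers; the image name, registry and ports below are placeholders, not taken from any specific product):

```yaml
# docker-compose.yml -- illustrative sketch; image, registry and ports are placeholders
version: "3.8"
services:
  web:
    image: registry.example.com/myapp:1.4.2   # image is pulled from a registry, not installed on the host
    ports:
      - "8080:80"                             # map host port 8080 to the container's port 80
    restart: unless-stopped                   # restart the container automatically if it stops
```

Moving this workload to another cloud host amounts to copying the file there and running docker compose up -d; no operating system install or host-specific reconfiguration is involved.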

Orchestration (ȯr-kə-ˈstrā-shən)

What comes to mind when you think of orchestration? According to Merriam-Webster’s dictionary, orchestration is “the arrangement of a musical composition for performance by an orchestra.” When you think of an orchestra, perhaps you picture violins, cellos and other classical instruments, or the conductor waving their baton about in a frenzied manner. Think about the vast number of musicians with their wide assortment of instruments playing together. A modern full-size symphony orchestra consists of roughly 100 musicians in various groups, beautifully playing different parts of the composition. That’s a lot of people to get on the same page.

In the computing world, orchestration is defined as the “automated configuration, coordination, and management of computer systems and software.” Orchestration, used with containers, takes application deployment to a whole new level. Orchestration is used to dynamically allocate and free computing resources to support running applications and platforms. It is possible to define complex rules governing orchestration, to ensure sufficient levels of availability and performance.

Software architects use orchestration to improve the resiliency, scalability and performance of large-scale applications. For example, a large enterprise solution can be split into components deployed across multiple servers; e.g., the database on one server, the load balancers on another, and the applications on a third. Components can then be grouped into clusters for greater scalability, giving the system the elasticity to respond to fluctuating load.
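
To show how such an arrangement is expressed in practice, the sketch below uses Kubernetes (one widely used orchestrator; the application name, image and thresholds are hypothetical). A Deployment keeps three replicas of the application running, and a HorizontalPodAutoscaler adds replicas, up to ten, when CPU utilization rises, which is one way the rules governing availability and performance can be declared.

```yaml
# deployment.yaml -- illustrative sketch; names, image and thresholds are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-app
spec:
  replicas: 3                      # the orchestrator keeps three copies running at all times
  selector:
    matchLabels:
      app: orders-app
  template:
    metadata:
      labels:
        app: orders-app
    spec:
      containers:
        - name: orders-app
          image: registry.example.com/orders-app:1.4.2
          resources:
            requests:
              cpu: "250m"          # scheduling hint so replicas are spread onto servers with capacity
              memory: 128Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-app
  minReplicas: 3
  maxReplicas: 10                  # scale out automatically as load rises
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU use passes 70%
```

The orchestrator continuously reconciles the cluster toward this declared state, restarting failed replicas and rescheduling them onto healthy servers.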

Consider how orchestration can support not only the production load of mission-critical, always-on applications, but also the productivity and collaboration of large development teams, whose computing demand for compilation, build, testing and security scanning can vary throughout the day. It is no longer necessary to guess at demand and provision fixed capacity in advance to keep development work flowing. With orchestration, developers can get as many computational resources as they need, when they need them, while the IT organization avoids paying for unused computing resources. Skillfully designed IT platform orchestration lets an enterprise employ a Continuous Integration and Continuous Delivery (CI/CD) strategy that ensures a steady, on-time flow of new features and capabilities.

36AC (36 Years After Commodore)

Application deployment has changed plenty since the Commodore 64 crashed the party. Since that time, we as tech-hungry consumers have allowed (and embraced) technology to infiltrate every aspect of our professional and personal lives. To keep pace, forward-thinking CIOs must use the best technologies and practices at their disposal to deliver their products and services in the most cost-effective and growth-oriented way. At this moment in time, cloud computing, containers and orchestration strike the right tune of agility and efficiency.

All of this is continuously and carefully considered by the product team behind OpenEdge 12.0, which will help you confidently evolve your applications for the cloud, providing exceptional power, availability and tooling to improve time to market, increase productivity and performance, and lower overall costs.

Check out everything that’s new in OpenEdge 12.

See What's New

Oleg Kupershmidt

As Director of Product Management, Oleg manages the OpenEdge platform. Prior to joining Progress in 2017, Oleg served in senior product management roles at Ipswitch, CA Technologies, Vivox and Nokia, and accumulated extensive experience leading product teams and managing mission-critical enterprise software solutions.
