Flexible deployment has always been a mainstay of Progress Corticon. We’ve now taken this further by making the Corticon Server available on Docker Hub, which makes it easy for you to deploy your Corticon rules as services in Docker containers.
Docker is a technology for deploying applications, with all their dependencies, inside software containers, making it simple to create, deploy and run applications. Because a container packages the application together with all of its libraries and dependencies, the developer can rest assured that the application will run on any Docker host, regardless of the environment.
In a way, Docker can be considered a lightweight virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker lets applications share the Linux kernel of the host and ship only what is not already running on the host machine. It provides a resource-isolated abstraction layer that allows independent containers to run on a single OS (host) instance, avoiding the overhead of starting and maintaining virtual machines.
Docker and other container technologies (LXC, libvirt, Zones, etc.) offer resource isolation and allocation benefits similar to virtual machines, but their architectural approach makes them more portable and efficient.
Containers eliminate the need for a guest OS: each container runs an application as an isolated process in user space while sharing the OS kernel with other containers. This lets Docker containers spin up and down in seconds, so you can scale to match current customer demand. With VMs, by contrast, you tend to provision instances ahead of demand and scale them down slowly because of VM startup latency.
Corticon Server can be deployed as a J2EE web application, and the rules deployed to it can be accessed as a REST or SOAP service. An image of the Corticon Server running in Tomcat is now available on Docker Hub. The Corticon Server base image uses a Java runtime (JDK), Tomcat 7 (the Tomcat image provided by the Apache Tomcat team) and the Corticon Server web application (axis.war).
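Once the containerized server is running, a deployed decision service can be invoked over HTTP. As a rough sketch, a REST call might look like the following; the endpoint path, the decision service name and the payload shape shown here are illustrative assumptions, so consult the Corticon Server documentation for your version for the actual request format:

```shell
# Hypothetical REST invocation of a Corticon decision service.
# The endpoint path, service name ("MyDecisionService") and the
# payload fields are assumptions for illustration only.
curl -X POST http://localhost:8080/axis/corticon/execute \
  -H "Content-Type: application/json" \
  -d '{"name": "MyDecisionService", "Objects": []}'
```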
The Corticon Docker Hub repository has the latest version of Corticon as well as previously released versions. Images are tagged so that you can pick a specific version of Corticon, the latest hotfix within a release, or the latest version overall, making it easy to configure your Corticon deployments to use the version that best fits your needs. Building and running a version of Corticon Server is as simple as selecting an image on Docker Hub.
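For example, pulling by tag might look like this; the repository name and tag values shown are assumptions for illustration, so check the Corticon page on Docker Hub for the actual names:

```shell
# Pull the newest image (repository name and tags are
# illustrative; look them up on the Corticon Docker Hub page).
docker pull corticon/corticon-server:latest

# Or pin a specific release tag for reproducible deployments.
docker pull corticon/corticon-server:5.6
```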
To build your Docker image, run docker build from the directory containing your Dockerfile (note the trailing dot, which specifies the build context):
$ docker build -t corticon .
Once you have the Corticon image in your local repository, you can deploy it with a docker run command, publishing the container's Tomcat port 8080 on the host:
$ docker run -p 8080:8080 corticon
That’s all that is needed to build a Corticon Server image using Docker Hub. You can refer to the Corticon Docker Hub account for more advanced settings and information.
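If you want to customize the image, for example to bundle your own compiled decision services, you can build on top of the published image. Below is a minimal sketch of such a Dockerfile; the base image name and the file paths are assumptions, so adjust them to match the actual image on Docker Hub and your Corticon installation layout:

```dockerfile
# Minimal sketch: extend the published Corticon Server image.
# The base image name and paths below are illustrative assumptions.
FROM corticon/corticon-server:latest

# Copy your compiled decision services into the server's
# deployment directory (path is an assumption).
COPY decisions/*.eds /usr/local/corticon/cdd/
```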
Using the Corticon Server image from Docker Hub makes it very easy to set up a Corticon deployment. By using the latest tag, you can also ensure that your Corticon deployments are running with the latest hotfix for a release.
Using Docker for your Corticon deployments frees you from having to configure Corticon for different environments, increases flexibility and, thanks to Docker's small footprint and lower overhead, can reduce the number of systems needed.
Starting new Corticon containers in Docker takes just a few seconds, because each container runs as an isolated process rather than booting an OS. It is also trivial to switch between different versions of the Corticon Server image for testing: simply build and run an image with a different tag (Corticon Server version). Different versions of a Corticon Server image can be created and destroyed without worrying that bringing one up again will be too costly.
Suvasri Mandal is a Sr. Software Engineer at Progress. She is responsible for the design, development, testing and support of Corticon BRMS. She has a background in Business Rules and Complex Event Processing.
Copyright © 2017, Progress Software Corporation and/or its subsidiaries or affiliates.
All Rights Reserved.
Progress, Telerik, and certain product names used herein are trademarks or registered trademarks of Progress Software Corporation and/or one of its subsidiaries or affiliates in the U.S. and/or other countries. See Trademarks or appropriate markings.