Availability

Sitefinity Cloud is designed to ensure the best possible uptime, performance, and scalability of your website. Using Microsoft Azure services and mechanisms such as load balancing, autoscaling, geo-replication, and failover clusters, Sitefinity Cloud distributes visitor traffic across the available web server nodes hosting your website and scales resources up or down depending on the current load. In addition, failover clusters and geo-replication guarantee that failed deployments to the production environment can be safely reverted without affecting the performance of the live website.

Load balancing and Autoscaling

Thanks to its load balancing and autoscaling mechanisms, Sitefinity Cloud ensures that your websites can handle up to 1200 page views per second.

All Sitefinity Cloud instances have Multisite management enabled, letting you manage up to 1000 websites from your Sitefinity Cloud instance.

Deployment failure protection

Overview

The following diagram describes the different stages your Sitefinity Cloud instance goes through during the deployment process. It also visualizes the mechanisms in place to ensure failover protection and continuous operation of the website in case of a failed deployment.

DeploymentSteps

Step-by-step details

A detailed description of each step is provided below.

In its initial state, a Sitefinity Cloud production instance is configured with a Production slot, a Deployment slot, and a Failover slot. Users browse the website from the Production slot, while the Deployment slot is open to receive deployment packages promoted from the Staging environment via the Sitefinity Cloud CD pipeline. Both the Production and Deployment slots are connected to a Primary database.

To ensure failover protection in case a deployment goes wrong, a Failover slot is designated for each instance, and is connected to a Secondary database - an exact copy of the Primary database. The two databases are kept in sync via Geo-replication. The following diagram demonstrates the initial state of a Production instance:

Initial State
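To help reason about the diagrams that follow, the sketch below models this initial topology in a few lines of Python. It is purely illustrative - the class and attribute names are placeholders and are not part of any Sitefinity Cloud or Azure API.

```python
# Purely illustrative model of the initial Production instance topology.
# The class and attribute names are placeholders, not a Sitefinity Cloud or Azure API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Database:
    name: str
    replicates_to: Optional["Database"] = None  # geo-replication target, if any


@dataclass
class Slot:
    name: str
    database: Database
    code_version: str


# The Primary database geo-replicates into the Secondary database.
primary = Database("Primary")
secondary = Database("Secondary")
primary.replicates_to = secondary

# Production and Deployment slots share the Primary database;
# the Failover slot is connected to the Secondary copy.
production = Slot("Production", primary, code_version="v1")
deployment = Slot("Deployment", primary, code_version="v1")
failover = Slot("Failover", secondary, code_version="v1")
```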

When you start a deployment to Production, the CD pipeline is engaged. Under the hood, the Failover slot is cloned, and the duplicate Failover slot is connected to the Primary database. This guarantees seamless operation of the live website: incoming requests can be immediately redirected to the Failover slot if anything goes wrong at any point during the deployment process.

Clone Failover

Next, the Geo-replication between the Primary database, which serves the website load, and the Secondary database is stopped. This way, the Secondary database stores a copy of the last working version of the website prior to deployment, guaranteeing data integrity and the ability to quickly restore website operations using that database.

NOTE: If anything goes wrong at this step, the Rollback procedure, described in the Failed deployment section later in this article, automatically kicks in.

03 Remove GeoReplication

Next, the actual deployment is executed. The Deployment slot accepts the new package and now runs the latest version of the website code. The Sitefinity CMS architecture enables the Production slot to continue operating with the Primary database.

NOTE: During this process, the Primary database might be upgraded due to the code changes deployed on the Deployment slot.

NOTE: If anything goes wrong at this step, the Rollback procedure, described in the Failed deployment section later in this article, automatically kicks in.

04 Deploy new version

Once the new package has been successfully deployed to the Deployment slot, the Production and Deployment slots are swapped. In a nutshell, this means that the Deployment slot, which contains the newly deployed package, becomes the Production slot. At the same time, the Production slot, which contains the old version of the website code, becomes the new Deployment slot. This way, the new version of the website code becomes available and starts serving visitor requests.

NOTE: If anything goes wrong at this step, the Rollback procedure, described in the Failed deployment section later in this article, automatically kicks in.

05 Swap PROD and Deployment
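Conceptually, the swap only reassigns roles - no content is copied between slots. The following sketch is purely illustrative; the function name and dictionary keys are placeholders, not a real API.

```python
# Purely illustrative: the swap reassigns roles, it does not copy content.
def swap_production_and_deployment(roles: dict) -> dict:
    """Swap which slot serves live traffic and which receives the next package."""
    swapped = dict(roles)
    swapped["Production"], swapped["Deployment"] = roles["Deployment"], roles["Production"]
    return swapped


roles = {"Production": "previous website code", "Deployment": "newly deployed package"}
print(swap_production_and_deployment(roles))
# {'Production': 'newly deployed package', 'Deployment': 'previous website code'}
```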

At this point in the process, Sitefinity Cloud enables you to plug in automated logic that verifies whether the production application is healthy and able to serve customer requests; a sketch of such a check is shown below. If the deployment is successful, the steps described in the next section are executed.
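The exact verification logic is up to you and your pipeline configuration. As a minimal sketch, such a check could poll an HTTP endpoint of the live site until it responds successfully - the URL, path, retry count, and delays below are assumptions for illustration, not documented Sitefinity Cloud defaults.

```python
# Minimal post-swap health probe. The URL, path, retry count, and delays are
# assumptions for illustration; they are not documented Sitefinity Cloud defaults.
import time
import urllib.error
import urllib.request


def wait_until_healthy(url: str, attempts: int = 10, delay_seconds: int = 30) -> bool:
    """Return True once the site answers with HTTP 200, False if it never does."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                if response.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # site not reachable yet; retry after a short pause
        time.sleep(delay_seconds)
    return False


if __name__ == "__main__":
    if wait_until_healthy("https://www.example.com/health"):
        print("Deployment verified")
    else:
        print("Signal the pipeline to roll back")
```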

Successful deployment

First, the duplicate Failover slot is removed. It is no longer needed, as the application has passed the health check and is running with the new package.

Success 05 Remove failover

Next, the new version of the website code is deployed on the original Failover slot. It is connected to the Secondary database, which in turn gets updated according to the new website code. This step and the ones that follow are done in preparation for the next deployment - making sure that the Failover slot and the Secondary database run the latest stable version of the website code.

Success 06 - Deploy new version on failover

Once the new version is deployed on the Failover slot, Geo-replication between the Primary and Secondary databases is resumed. This ensures that the Secondary database holds an exact copy of the Primary database and can be used for failover purposes, if needed, during the next deployment.

Success 07 - Establish GeoReplication

Finally, a successful deployment process ends with a setup identical to the initial instance state. The difference is that the Production and Failover slots are running the newly deployed website code, and the Primary and Secondary databases are upgraded accordingly. The Deployment slot contains the old copy of the website code, which will be replaced during the next deployment.

Success- Final

Failed deployment

If Sitefinity Cloud detects that the application health is affected - for example, the website cannot start after the new package is deployed - a Rollback procedure is initiated.

First, the Production and Failover slots are swapped. The Failover slot contains the last healthy version of the website code and is connected to the Secondary database, which holds the original, pre-deployment version of the website data. This way, website traffic is not affected by the failed deployment, and data integrity and uptime are guaranteed. The next steps ensure the setup is returned to a state that enables future deployments.

Fail 06 - Swap PROD and Failover

Next, the ex-Production slot, which runs the unhealthy copy of the website code and, after the swap in the previous step, is now designated as a Failover slot, is removed. The Deployment slot and the duplicate Failover slot continue running and remain connected to the Primary database, which has been affected by the unhealthy application code.

Fail 06 - Remove FailOver

Finally, the affected (broken) Primary database is removed from the setup. It is preserved for root cause analysis, but is no longer part of the production instance setup. Instead, a copy of the Secondary database, which holds the original, pre-deployment copy of the website data, is restored. The Secondary database becomes the Primary database, and the newly restored copy is designated as the Secondary database and is connected to the Failover and Deployment slots.

Geo-replication between the Primary and Secondary database is re-established to keep them in sync.

This is also the final step of the Rollback process. Once the next deployment is initiated, the Deployment slot will be connected to the Primary database, leaving the Failover slot connected to the Secondary database, identical to the initial state of the instance.

Fail 07 - Establish GeoReplication
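Taken together, the Rollback procedure can be summarized conceptually as in the sketch below. It is purely illustrative - the dictionaries, keys, and values are placeholders, not part of any Sitefinity Cloud API.

```python
# Purely illustrative walkthrough of the rollback sequence; the dictionaries
# and names are placeholders, not part of any Sitefinity Cloud API.
slots = {
    "Production": {"code": "broken new version", "database": "Primary"},
    "Failover": {"code": "last healthy version", "database": "Secondary"},
    "Deployment": {"code": "broken new version", "database": "Primary"},
}

# 1. Swap Production and Failover: the healthy code and the Secondary database
#    start serving live traffic again.
slots["Production"], slots["Failover"] = slots["Failover"], slots["Production"]

# 2. Remove the ex-Production slot (now labelled Failover) that runs the broken code.
del slots["Failover"]

# 3. Set the affected Primary database aside for root cause analysis, promote the
#    Secondary to Primary, and restore a fresh copy of it as the new Secondary,
#    with geo-replication re-established between the two.
databases = {"Primary": "former Secondary", "Secondary": "restored copy of former Secondary"}

print(slots)
print(databases)
```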
