InMemory or Distributed output cache - making the right choice

Choosing the right output caching configuration for your Sitefinity CMS site is an important architectural decision: it affects not only your site’s performance and scalability, but also your costs.

Advantages of running Sitefinity CMS with InMemory output cache 

By default, Sitefinity CMS uses the web server memory to store output cache items. This option provides the fastest possible way to get content from the cache and deliver it to site visitors, and therefore the best page response times. In terms of maintenance, InMemory output cache requires no extra configuration or upkeep, because the server’s own memory is used, making it the simpler solution. Although it increases the memory footprint of each web server, InMemory output cache does not incur the additional cost of an external cache storage.
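Conceptually, the default behavior works like a read-through cache kept in the worker process memory. The following is a minimal, illustrative C# sketch of that idea, not Sitefinity’s actual implementation; the class, method, and delegate names (InMemoryOutputCacheSketch, GetOrRenderPage, renderPage) are hypothetical:

```csharp
using System;
using System.Runtime.Caching;

public static class InMemoryOutputCacheSketch
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static string GetOrRenderPage(string url, Func<string> renderPage)
    {
        // A cache hit is served straight from the worker process memory,
        // so the response never leaves the web server.
        if (Cache.Get(url) is string cachedHtml)
            return cachedHtml;

        // Cache miss: compile/process the page (the expensive part)...
        var html = renderPage();

        // ...and keep the rendered markup in memory for subsequent requests.
        Cache.Set(url, html, DateTimeOffset.UtcNow.AddMinutes(2));
        return html;
    }
}
```

The trade-off shown here is exactly the one described above: lookups are as fast as a local memory read, but every web server node keeps its own copy and the cache disappears when the worker process recycles.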

Advantages of running Sitefinity CMS with Distributed output cache 

Using distributed cache has several advantages over storing cache items in-memory. These advantages come from the different mechanism for reading and writing cache items. With distributed cache, only the first load-balanced node that serves a request for content that is not yet cached needs to process the content and store it in the distributed cache. All other nodes in the load-balanced setup fetch the item from the distributed cache, so each node no longer needs to create its own cached version of the content (a short sketch of this read-through pattern follows the list below). The centralized distributed cache storage results in:
  • Decreased CPU and memory utilization on the web server nodes
    Only the node that processes the requested content for the first time uses CPU resources for that operation. All other nodes fetch the content from the distributed cache. No memory needs to be allocated on the web server nodes for output cache items, because they are stored in the distributed cache storage.
  • Pages load faster the first time they are requested on a node
    After the first node has done the heavy lifting, subsequent nodes no longer need to compile or process pages, because a cached version of the page already exists in the distributed cache. Compilation and processing of the content happen only if the item does not yet exist in the distributed cache, or has been invalidated or has expired.
  • Cache availability after restart
    With distributed cache, data is not lost when the worker process recycles. Output cache data is stored outside the IIS worker process and remains available after the Sitefinity CMS application restarts.
  • Reduced time to scale with an additional web server
    Bringing up an additional node does not require warmup, because it can serve items that are already available in the distributed cache storage. Additionally, the warmup cost no longer grows linearly with the number of nodes: only the first node that receives a request for non-cached content does the heavy lifting, and all subsequent nodes fetch the already cached item.
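The read-through pattern described above can be sketched in a few lines of C#. This is an illustration of the pattern only, not Sitefinity’s internal implementation; it uses the standard .NET IDistributedCache abstraction (backed, for example, by Redis), and the class, method, and delegate names (DistributedOutputCacheSketch, GetOrRenderPageAsync, renderPage) are hypothetical:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class DistributedOutputCacheSketch
{
    private readonly IDistributedCache cache;

    public DistributedOutputCacheSketch(IDistributedCache cache)
    {
        this.cache = cache;
    }

    // renderPage stands in for the expensive page compilation/processing step.
    public async Task<string> GetOrRenderPageAsync(string url, Func<Task<string>> renderPage)
    {
        // Every node in the load-balanced setup checks the shared store first.
        byte[] cached = await this.cache.GetAsync(url);
        if (cached != null)
            return Encoding.UTF8.GetString(cached); // no local compilation needed

        // Only the node that receives the first (uncached) request does the heavy lifting...
        string html = await renderPage();

        // ...and publishes the result so every other node can reuse it.
        await this.cache.SetAsync(
            url,
            Encoding.UTF8.GetBytes(html),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(2)
            });

        return html;
    }
}
```

Because the rendered markup lives in the shared store rather than in each node’s memory, a newly added node can serve cached pages immediately, at the price of a network round trip on every cache read.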

The following table compares the advantages of using in-memory and distributed cache:

| Characteristic | In-memory cache | Distributed cache | Winner |
| --- | --- | --- | --- |
| Startup time (first node) | The web server node processes the content and stores an output cache item in its memory. | The web server node processes the content and stores an output cache item in the distributed cache. | Both |
| Startup time (subsequent nodes) | New web server nodes compile and process the content and store an output cache item in their memory. Startup time is the same as for the first node. | New nodes get the item from the distributed cache. Startup time is up to 5 times faster. | Distributed cache |
| CPU utilization | Each web server node uses CPU resources to process the content and store an output cache item. | Only the first web server node uses CPU resources to process the content. | Distributed cache |
| Average response time (of a warmed-up site) | Fetching already cached content from the server memory is faster. | Fetching already cached content from the distributed cache storage depends on network latency. | In-memory cache |
| Memory consumption | Each web server node consumes memory to store an output cache item. | No memory is used on the web server nodes to store output cache items; they are stored in the distributed cache storage. | Distributed cache |
| Availability | Output cache items are kept in the server memory and are not available after a restart. | Output cache items remain in the distributed cache storage and are available after web server node restarts. | Distributed cache |
| Maintenance | No extra maintenance; server memory is used. | The distributed cache storage is an extra asset that needs to be maintained. | In-memory cache |

