In the database world, performance and reliability are paramount. Contention for shared resources and disk I/O are the biggest obstacles preventing well-written applications from running optimally. One of the many things the OpenEdge database does to reduce disk I/O is cache database blocks in shared memory. Accessing data from memory is far faster than accessing it from disk; however, memory is a finite resource, and most enterprise databases today cannot fit entirely into available physical memory. Thankfully, most applications frequently access only a fraction of the data stored in the database.
Given an appropriately sized buffer cache, the OpenEdge database does a very good job of anticipating the data needs of the running application through its management of the buffer cache. Although the default mechanisms work very well, there are a few things you can do to improve them further and achieve even higher throughput.
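Sizing the buffer cache happens at broker startup. A minimal sketch, assuming a hypothetical database path and an illustrative value — the right -B value depends on your block size, available memory and working set:

```shell
# Hypothetical example: start a database broker with a primary buffer
# cache of 100,000 database blocks (-B). With 4 KB database blocks this
# caches roughly 400 MB of data; size -B so the application's frequently
# accessed working set fits in memory.
proserve /db/sports2000 -B 100000
```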
Performing large one-time data scans that access data outside the application's normal working set can foul the default buffer replacement mechanism by populating the buffer cache with infrequently accessed data. Examples include running an online dbanalysis or a query that performs table scans of large tables. You can restrict which data is evicted from the buffer cache to make room for the new data by using private buffers. The number of private buffers reserved for a particular connection is set with the –Bp startup parameter, or can be changed at runtime through the _MyConn-NumSeqBuffer field of the _myConnection VST. With private buffers in use, eviction of existing data from the buffer cache is restricted to the data managed within the private buffers allocated to the current connection. Utilizing private buffers in these circumstances can improve overall throughput by better preserving the application's working data set in the buffer cache.
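As a sketch, a batch report that scans large tables could be started with a small allocation of private buffers so its reads cycle through those buffers instead of evicting the shared working set (database path, buffer count and procedure name here are hypothetical):

```shell
# Hypothetical example: reserve 64 private buffers (-Bp) for this batch
# session. The large table scans in the report then reuse these private
# buffers, leaving the rest of the -B cache's working set untouched.
mbpro /db/sports2000 -Bp 64 -p run-report.p
```

The same effect can be achieved for an already-connected session by updating the _MyConn-NumSeqBuffer field of the _myConnection VST, as noted above.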
The default buffer cache replacement mechanism can also be altered for specific objects by employing an Alternate Buffer Pool. Specific tables, indexes or LOBs can be assigned to the alternate buffer pool, which is sized by the –B2 database startup parameter. The replacement policy is independent for each of the primary and alternate buffer pools. If the alternate buffer pool's allocation is not exhausted, page replacement is never required, and the housekeeping needed to manage the replacement mechanism is avoided, further improving performance. Additionally, the size of the primary and alternate buffer pools can be increased online using the increaseto option of proutil.
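A sketch of the startup and online-growth steps described above, with a hypothetical database path and illustrative sizes (object assignment to the alternate buffer pool is done separately, for example with the enableB2 qualifier of proutil or through the Data Dictionary):

```shell
# Hypothetical example: size the alternate buffer pool (-B2) alongside
# the primary pool (-B) at broker startup.
proserve /db/sports2000 -B 100000 -B2 20000

# Later, grow the alternate buffer pool online without a restart using
# the increaseto option of proutil.
proutil /db/sports2000 -C increaseto -B2 30000
```

Sizing -B2 so that the assigned objects fit entirely is what makes the "no page replacement needed" optimization kick in.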
Newly available in OpenEdge 10.2B06 and OpenEdge 11.1 are the –lruskips and –lru2skips startup parameters, which alter the heart of the buffer cache replacement mechanism. The default cache replacement mechanism uses a least recently used (LRU) algorithm to anticipate the working data set of the running application. Although this mechanism is quite good at its predictions, it carries high overhead and introduces contention that effectively single-threads access to a buffer pool's LRU chain. The –lruskips parameter introduces frequency of access into the existing LRU replacement mechanism, dramatically reducing the associated housekeeping cost and improving data access concurrency while still anticipating the working data set very well. The parameter value can be changed at runtime through promon or by altering the _Startup-LRU-Skips VST field, and the feature is available for both the primary and alternate buffer pools. Unlike the alternate buffer pool, –lruskips requires no object-level configuration, but it cannot guarantee that an object stays resident in the buffer cache.
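A sketch combining the parameters discussed so far, with hypothetical paths and illustrative skip counts — tune the values by observing LRU latch activity in promon:

```shell
# Hypothetical example: a buffer only moves to the head of the LRU chain
# after 100 accesses in the primary pool (-lruskips) and 50 accesses in
# the alternate pool (-lru2skips), cutting LRU latch contention while
# keeping frequently accessed blocks resident.
proserve /db/sports2000 -B 100000 -lruskips 100 -B2 20000 -lru2skips 50
```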
These three features, used in concert, provide the best data access yet for any application workload.
Further information on each of these mechanisms can be found in the OpenEdge Data Management: Database Administration guide.
View all posts from Banville on the Progress blog.