Databases are the safety deposit boxes of your business. They store and disseminate the data that keeps your business running; without them, growth would be nearly impossible. When working with this valuable cargo, it's important to apply certain measures to help ensure extended use and sound performance. Failure to monitor and properly maintain your databases often leads to data loss and even business failure. Ask two DBAs (database administrators) for the five things you need to do to maintain healthy databases and you'll probably get about 20 answers between them. Nevertheless, here are five things to do to keep your databases healthy.
"By failing to prepare, you are preparing to fail" - Benjamin Franklin
Over time, databases get scattered (fragmented), which leads to sluggish performance. To gauge the level of scatter, run a database analysis (dbanalys) report and check the scatter factor. A scatter factor of 1.0 is good; anything over 1.6, however, means the database is working harder than it should. An overworked database wears down more quickly and costs you more through poor performance. The standard fix is a dump and load, which rewrites the data contiguously. If for some reason you can't perform a dump and load, perform an index rebuild to defragment the indexes. An index rebuild is less effective than a dump and load, but it will help with performance.
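The scatter-factor rule of thumb above is easy to automate. Here is a minimal sketch, assuming you have already extracted per-table scatter factors from a database analysis report into a dictionary; the table names and values are purely illustrative.

```python
# Flag tables whose scatter factor suggests a dump and load is due.
# Thresholds follow the text: ~1.0 is healthy, above 1.6 is overworked.
SCATTER_THRESHOLD = 1.6

def tables_needing_defrag(scatter_factors):
    """Return table names whose scatter factor exceeds the threshold."""
    return sorted(
        name for name, factor in scatter_factors.items()
        if factor > SCATTER_THRESHOLD
    )

# Illustrative values, as if parsed from a dbanalys report:
report = {"customer": 1.1, "order": 2.3, "order_line": 1.8, "invoice": 1.0}
print(tables_needing_defrag(report))  # ['order', 'order_line']
```

Running a check like this after each analysis report turns "we should defragment eventually" into a concrete, prioritized worklist.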
I've had many customers experience a disaster and then find out that their backups don't work. For example, a customer recently called me in a state of panic: "We just had a failure, we went to our backup and it's no good. The last good backup we have is from four months ago!" I found that they had some after-image files I could roll forward to recover the data. The recovery wasn't cheap, it wasn't easy, and it took four days to complete. It probably saved the company, though. This is a prime example of why you need to test your backups regularly.
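One cheap guard against the four-months-old-backup surprise is to alert whenever the newest verified backup exceeds your recovery tolerance. A minimal sketch, where the one-day tolerance is an assumption you would tune to your own site:

```python
# Alert when the newest verified backup is older than your recovery tolerance.
from datetime import datetime, timedelta

MAX_BACKUP_AGE = timedelta(days=1)  # site-specific assumption

def backup_is_stale(last_good_backup, now=None):
    """True if the last verified backup exceeds the allowed age."""
    now = now or datetime.now()
    return now - last_good_backup > MAX_BACKUP_AGE

# A backup from four months ago, as in the anecdote above, fails loudly:
print(backup_is_stale(datetime(2017, 1, 2), now=datetime(2017, 5, 2)))  # True
```

The key word is *verified*: the timestamp fed into a check like this should come from a backup that was actually test-restored, not merely written to disk.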
The wrong time to test your DR plan is after you have a disaster. This may sound obvious, but you'd be surprised how many companies create disaster recovery and business continuity plans and never test them. You may have a DR plan, but if you haven't timed a failover, it can be an unpleasant surprise when it takes an entire day or longer. The trade-off is simple: time to restore and resume operations versus data loss. This may be the moment you ask yourself, "Do we have a DR plan?" If you don't, you need one now. If you work for a publicly held company and you don't have a DR plan in place, you're in violation of the Sarbanes-Oxley Act of 2002, and the consequences could be significant. Ensure your DR plan is in place and tested.
DBAs sometimes make the mistake of ignoring a database's growth spurts. When a storage area runs out of space, the database will crash and can end up corrupted. This is a simple thing to check, and running out of space is an inexcusable way to go down when disk is so cheap today. By simply monitoring your databases, or by using an MDBA (Managed Database Administration) service that monitors them for you, you can avoid this mishap entirely.
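A basic free-space watchdog for the volume holding your database files can be a few lines of standard-library code. A minimal sketch; the 10% floor is an assumption, and in practice you would point it at the directory containing your database extents.

```python
# Warn before the filesystem holding the database fills up.
import shutil

FREE_SPACE_FLOOR = 0.10  # alert below 10% free (assumed threshold)

def space_warning(path):
    """True if the filesystem holding `path` is below the free-space floor."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total < FREE_SPACE_FLOOR

# e.g. point it at the directory that holds your database and extent files:
if space_warning("."):
    print("Database volume is nearly full: add space before it crashes.")
```

Scheduled from cron or any monitoring framework, a check like this turns "crashed from a full disk" from a recurring risk into a non-event.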
The database can tell you exactly what is wrong; the DBA's job is to know what to look for. Metrics like buffer hits, database reads, index utilization, SQL queries and large files will give you a good picture of where you stand on performance. Users can also be helpful, because they'll certainly let you know if the database is slow. When buffer hits fall below 80%, you need a bigger buffer pool. If your database reads triple over one month and stay that high, something is clearly wrong, and it's more than likely a code or index issue. As for index utilization, anything below 60% needs attention; underutilization can make queries run ten times slower. All you have to do is compact the indexes, a relatively easy task.
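The rules of thumb in this paragraph can be encoded as a single health check. A minimal sketch, assuming you already collect these numbers from your monitoring tool; the function name and argument names are mine, and the thresholds are the ones stated above.

```python
# Encode the rule-of-thumb thresholds from the text as one health check.
def health_flags(buffer_hit_pct, reads_now, reads_baseline, index_util_pct):
    """Return human-readable warnings for metrics outside the thresholds."""
    flags = []
    if buffer_hit_pct < 80:
        flags.append("buffer hits below 80%: increase the buffer pool")
    if reads_baseline and reads_now >= 3 * reads_baseline:
        flags.append("database reads tripled: suspect a code or index problem")
    if index_util_pct < 60:
        flags.append("index utilization below 60%: compact the indexes")
    return flags

# Illustrative numbers for a database that trips all three rules:
print(health_flags(buffer_hit_pct=75, reads_now=9_000_000,
                   reads_baseline=2_500_000, index_util_pct=55))
```

An empty list means the basics look healthy; anything returned is a concrete task with the fix named in the message.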
Then there are the SQL queries. A good number of users run reports with SQL products. If SQL reporting slows your database down, consider real-time replication (with Progress OpenEdge Pro2) so you won't risk the stability of your transactional database. Finally, there's large file enablement, which should be done every time you create a database; a lot of DBAs simply forget to do it. The problem is that when a file exceeds the 2 GB limit, it shuts down your database.
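Until large files are enabled everywhere, it's worth watching for extents creeping toward the 2 GB boundary. A minimal sketch: the sizes would come from something like `os.path.getsize` on your extent files, the 90% warning band is an assumption, and the file names are illustrative.

```python
# Flag database extents creeping toward the 2 GB file-size boundary.
TWO_GB = 2 * 1024**3
WARN_FRACTION = 0.9  # warn within 10% of the limit (assumed threshold)

def extents_near_limit(extent_sizes):
    """Given {path: size_in_bytes}, return paths inside the warning band."""
    return sorted(path for path, size in extent_sizes.items()
                  if size >= WARN_FRACTION * TWO_GB)

# Illustrative sizes, as if gathered with os.path.getsize:
sizes = {"db/cust_7.d1": 1_998_000_000, "db/cust_7.d2": 350_000_000}
print(extents_near_limit(sizes))  # ['db/cust_7.d1']
```

Catching an extent at 1.9 GB gives you time to enable large files or add an extent on your schedule, instead of the database's.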
Proactive database monitoring is smarter and more cost-effective than reactive database administration. An MDBA solution can manage your databases and keep you out of the weeds. Without MDBA, you need to stay on top of the database's health yourself in order to extend its longevity. For now, databases need human intervention to work as designed and to perform optimally. In a few years, they might not.
Barbara is a Senior Marketing Manager of Global Professional Services at Progress.
Copyright © 2017 Progress Software Corporation and/or its subsidiaries or affiliates.
All Rights Reserved.
Progress, Telerik, and certain product names used herein are trademarks or registered trademarks of Progress Software Corporation and/or one of its subsidiaries or affiliates in the U.S. and/or other countries. See Trademarks for appropriate markings.