Five Steps to Avoid Database Pitfalls

June 20, 2016

Databases, the safety deposit boxes of your business, require proactive management to keep them healthy and optimized.

Databases are the safety deposit boxes of your business. They store and disseminate the data that keeps your business running. Without them, growth would be nearly impossible. When working with this valuable cargo, it's important to apply certain measures to help ensure long service and sound performance. Failing to monitor and properly maintain your databases often leads to data loss and even business failure. Ask two DBAs (database administrators) for the five things you need to do to maintain healthy databases and you'll probably get about 20 answers between them. Nevertheless, here are five things to do to keep your databases healthy.

  1. Dump and Load the Database

    "By failing to prepare, you are preparing to fail" - Benjamin Franklin

    Over time, databases get scattered (fragmented), which leads to sluggish performance. To gauge the level of fragmentation, run a database analysis (dbanalys) report and check the scatter factor. A scatter factor of 1.0 is good; anything over 1.6, however, means the database is working harder than it should. An overworked database wears out faster and costs you more through poor performance. If for some reason you can't perform a dump and load, run an index rebuild to defragment the indexes. An index rebuild is less effective than a dump and load, but it will still help performance.
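    The thresholds above can be turned into a simple automated check. This is a hypothetical sketch, not an OpenEdge utility: it assumes you have already parsed per-table scatter factors out of an analysis report into a plain dictionary.

    ```python
    # Thresholds follow the article: ~1.0 is healthy, over 1.6 means the
    # database is working harder than it should and a dump and load is due.
    HEALTHY, NEEDS_ATTENTION = 1.0, 1.6

    def fragmentation_status(scatter_factors: dict[str, float]) -> dict[str, str]:
        """Map each table name to a health label based on its scatter factor."""
        status = {}
        for table, factor in scatter_factors.items():
            if factor > NEEDS_ATTENTION:
                status[table] = "dump-and-load"   # badly fragmented
            elif factor > HEALTHY:
                status[table] = "watch"           # some fragmentation building up
            else:
                status[table] = "healthy"
        return status

    # Made-up factors as they might appear in an analysis report
    print(fragmentation_status({"customer": 1.1, "order": 2.3, "item": 1.0}))
    # → {'customer': 'watch', 'order': 'dump-and-load', 'item': 'healthy'}
    ```

    Run on a schedule, a check like this surfaces fragmentation before users notice the slowdown.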

  2. Test Backups and Restore to Another Machine

    I've had many customers experience a disaster and then find out that their backups don't work. For example, a customer recently called me in a state of panic: "We just had a failure and we accessed our backup and it's no good. The last good backup we have is from four months ago!" I figured out that they had some after-image files that I could roll forward to recover the data. The recovery wasn't cheap, it wasn't easy, and it took four days to complete. It probably saved the company, though. This is a prime example of why you need to test your backups regularly.
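    A scheduled check can catch the "last good backup is four months old" scenario before the disaster. This is a hypothetical sketch and no substitute for actually restoring to another machine; the policy of seven days is an assumption.

    ```python
    # Flag when the last backup that was actually verified by a restore is
    # older than the policy allows. Dates here mirror the story above.
    from datetime import datetime, timedelta

    def stale_backups(last_verified: datetime, now: datetime,
                      max_age_days: int = 7) -> bool:
        """True if the last *verified* restore is older than the policy allows."""
        return now - last_verified > timedelta(days=max_age_days)

    # Four-month-old backup, as in the customer story: clearly stale
    print(stale_backups(datetime(2016, 2, 20), datetime(2016, 6, 20)))   # True
    # Verified two days ago: within policy
    print(stale_backups(datetime(2016, 6, 18), datetime(2016, 6, 20)))   # False
    ```

    The key design point is that the clock resets only on a successful test restore, not on the backup job merely finishing.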

  3. Test Your DR (Disaster Recovery) Plan

    The wrong time to test your DR plan is after you have a disaster. This may sound obvious, but you'd be surprised how many companies create disaster recovery and business continuity plans and fail to test them. You may have a DR plan, but if you don't test it to see how long it takes to fail over, it could be a surprise when it takes an entire day or longer. The options are simple: time to restore and get running again, or data loss. This may be the time to ask yourself, "Do we have a DR plan?" If you don't, you need one now. If you work for a publicly held company and you don't have a DR plan in place, you're in violation of the Sarbanes-Oxley Act of 2002, and the consequences could be significant. Ensure your DR plan is in place.
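    One way to make DR rehearsals concrete is to record how long the failover actually took and compare it with the recovery-time objective (RTO) the business agreed to. A minimal sketch, with illustrative numbers only (real plans also track RPO, the acceptable data loss):

    ```python
    def meets_rto(measured_failover_minutes: float, rto_minutes: float) -> bool:
        """True if the rehearsed failover finished within the agreed RTO."""
        return measured_failover_minutes <= rto_minutes

    # A full-day failover against a four-hour RTO fails the test
    print(meets_rto(measured_failover_minutes=24 * 60, rto_minutes=4 * 60))  # False
    # A 90-minute failover passes
    print(meets_rto(measured_failover_minutes=90, rto_minutes=4 * 60))       # True
    ```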

  4. Check the Database’s File Area Growth

    DBAs sometimes make the mistake of ignoring a database's growth spurts. When a file area runs out of space, the database will crash, and the crash can cause corruption. This is a simple thing to check. If you use an MDBA (Managed Database Administration) service you won't run into this problem. Running out of space is a bad way to crash, and it's easily prevented because disk space is cheap today. By simply monitoring your databases, you can avoid this simple and inexcusable mishap.
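    The monitoring itself can be a few lines. This is a hypothetical sketch using the standard library; the path and threshold are assumptions, and in practice it would run on a schedule against each storage area's filesystem and alert somewhere useful.

    ```python
    # Warn before the filesystem holding a database file area fills up.
    import shutil

    def area_space_warning(path: str, min_free_gb: float = 10.0) -> bool:
        """True if the filesystem holding `path` has less than min_free_gb free."""
        free_bytes = shutil.disk_usage(path).free
        return free_bytes < min_free_gb * 1024 ** 3

    # Example: check the current directory's filesystem
    if area_space_warning(".", min_free_gb=10.0):
        print("WARNING: file area is running low on disk space")
    ```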

  5. Don't Ignore Performance Indicators

    The database can tell you exactly what is wrong. The DBA's job is to know what to look for. Items like buffer hits, database reads, index utilization, SQL queries and large files will give you a good picture of where you are in terms of performance. Users can also be helpful because they'll certainly let you know if the database is slow. When the buffer hit ratio is below 80%, you need a bigger buffer pool. If you watch your database reads and you see them triple over one month and stay that high, there's clearly something wrong—and it's more than likely a code or index issue. When it comes to index utilization, anything below 60% should be compacted. Underutilization can cause queries to run ten times slower. All you have to do is compact the indexes—a relatively easy task.
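    The two thresholds above translate directly into code. A hypothetical sketch, assuming you've already collected read counters and per-index utilization from your monitoring tool (the field names are made up):

    ```python
    def health_warnings(logical_reads: int, disk_reads: int,
                        index_utilization: dict[str, float]) -> list[str]:
        """Apply the article's rules of thumb: <80% buffer hits, <60% index use."""
        warnings = []
        # Buffer hit ratio: fraction of reads satisfied from memory
        hit_ratio = (logical_reads - disk_reads) / logical_reads
        if hit_ratio < 0.80:
            warnings.append(f"buffer hit ratio {hit_ratio:.0%}: grow the buffer pool")
        for index, util in index_utilization.items():
            if util < 0.60:
                warnings.append(f"index {index} at {util:.0%} utilization: compact it")
        return warnings

    # Made-up counters: 1,000 reads, 300 of them from disk
    for w in health_warnings(1000, 300, {"cust-num": 0.45, "order-date": 0.92}):
        print(w)
    ```

    With these numbers the buffer hit ratio is 70%, so both the buffer pool and the cust-num index get flagged.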

    Then there are the SQL queries. A good number of users run reports with SQL products. If SQL slows your database down, consider real-time replication (with Progress OpenEdge Pro2) so you won't risk the stability of your transactional database. Finally, there's large file enablement, which should be done every time you create a database. However, a lot of DBAs simply forget to do it. The problem is that when a file exceeds the 2 GB limit, it shuts down your database.
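    Until large files are enabled, it's worth watching for extents creeping toward the limit. A hypothetical sketch; the extent names and sizes are made up, and in practice you'd feed it the database's real extent list:

    ```python
    # Flag files approaching the 2 GB limit described above.
    TWO_GB = 2 * 1024 ** 3

    def files_near_limit(sizes: dict[str, int], margin: float = 0.9) -> list[str]:
        """Return files whose size in bytes exceeds `margin` of the 2 GB limit."""
        return [name for name, size in sizes.items() if size >= margin * TWO_GB]

    # Example with made-up extent sizes (bytes)
    extents = {"sports.d1": 1_950_000_000, "sports.d2": 500_000_000}
    print(files_near_limit(extents))  # → ['sports.d1']
    ```

    A flagged file is your cue to enable large files (and schedule the change) before the database hits the wall.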


The Bottom Line

Proactive database monitoring is smarter and more cost-effective than reactive database administration. An MDBA solution can manage your databases and keep you out of the weeds. Without MDBA, you need to stay on top of the database's health yourself in order to extend its longevity. For now, databases need human intervention to work as designed and to perform optimally. In a few years, they might not.

Barbara Ware

Barbara Ware is Sr. Product Marketing Manager, responsible for positioning and messaging OpenEdge and OpenEdge Professional Services. She has 19+ years of experience in technology marketing leadership, strategy, content, communications and lead generation activities. You can find her on LinkedIn or on Twitter at @barbara_ware.
