When people discuss how often they test their Disaster Recovery (DR) programs, the answers are all over the map. Some test once a year, a quick check-up to confirm systems are still functioning correctly, while others with more active IT infrastructures test quarterly, without fail, like clockwork. Some, who perhaps have fewer assets to lose and simply aren't as worried, test whenever they feel like it–in fact, a few readers may think "man, I should do that," and run a DR test after finishing this article. The truth, though, is that there's no uniformly correct answer to the question "how often should you test your DR?"
This is because every business is different. DR testing costs money, productivity can go down, and consulting fees can stack up–yet for some businesses, their assets are worth enough that they run a DR test every time their IT infrastructure changes (which could be quite often). Others won't. How often you test is going to differ from organization to organization.
Test disaster recovery more often
A Disaster Recovery Preparedness Benchmark Survey found that 23 percent of businesses never test their DR at all, while about 33 percent test only once or twice a year. Whether the cause is sheer laziness, inexperience, or an inability to pony up the cash, this means that roughly 56 percent of organizations are simply hoping that nothing has gone awry that would impede their DR plans if an actual disaster occurred.
Worse still, the survey found that of the companies that do test their DR plans, about 65 percent fail their own test.
Improve DR skills
The good news is that the Benchmark survey scrutinized the very limited results from the A and B grade companies, and found three major best practices that these top performers share:
- Set benchmarks. Set recovery time objectives (RTO) and recovery point objectives (RPO) for critical applications. The purpose of RTO and RPO is to set the bar and make sure you're hitting the mark, as well as to detail the processes by which these objectives will be met. When companies have strong benchmarks that measure on a scale (as opposed to just a pass/fail), they are better equipped to beef up components and procedures that may be weak links before they contribute to failure.
- Be detailed. The Benchmark survey's results showed that more than 60 percent of participating businesses had a disaster recovery plan that was incomplete and required further documentation. Perhaps this is because certain aspects are seen as expendable, but more likely it's a lack of knowledge or simple "blind spots". You might be protecting hardware from physical disasters, for example, but have no way to recover your data in the event of actual damage or a cyber attack. Benchmark suggests leaving no stone unturned when documenting your DR, and including plans for everything under the sun–applications, documents and databases, and even your web properties.
- Test plans more frequently. The results of the survey showed that, in conjunction with a complete and fully-detailed plan that includes solid benchmarks and measurements of recovery time and recovery point objectives, automated testing helped push A and B grade organizations to the top. With more on the line and new trends in cyber crime, constant testing is absolutely essential when looking for new holes to patch without leaving yourself at the mercy of a "trial by fire."
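The first practice above (graded RTO/RPO benchmarks rather than a bare pass/fail) can be sketched in code. This is a minimal, hypothetical example: the application names, targets, and measured values are invented for illustration, and a real DR test harness would pull the "actual" numbers from its own tooling.

```python
# Hypothetical RTO/RPO benchmark check that grades each critical
# application on a scale, so weak links can be spotted and fixed
# before they contribute to a real failure.
# All names and numbers below are illustrative only.

# Per-application targets and measured results:
# RTO values are downtime in minutes; RPO values are minutes of
# acceptable (target) vs. observed (actual) data loss.
apps = {
    "billing-db": {"rto_target": 60, "rto_actual": 45,
                   "rpo_target": 15, "rpo_actual": 10},
    "web-portal": {"rto_target": 120, "rto_actual": 200,
                   "rpo_target": 60, "rpo_actual": 30},
}

def grade(target, actual):
    """Ratio-based score: <= 1.0 means the objective was met;
    the further above 1.0, the weaker the component."""
    return actual / target

def report(apps):
    results = {}
    for name, m in apps.items():
        rto = grade(m["rto_target"], m["rto_actual"])
        rpo = grade(m["rpo_target"], m["rpo_actual"])
        results[name] = {
            "rto_score": round(rto, 2),
            "rpo_score": round(rpo, 2),
            "passed": rto <= 1.0 and rpo <= 1.0,
        }
    return results

if __name__ == "__main__":
    for name, r in report(apps).items():
        print(name, r)
```

Because each application gets a score instead of a single yes/no, a team can see that (in this made-up data) the web portal's recovery time is the outlier to fix first, even though its data-loss numbers are fine.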
Exactly how often should a business test DR? Well, for about 73 percent of them, the answer is "a lot more." Of course, there's no single correct answer here–and there is definitely such a thing as "too much," given time and money constraints–but by following the three practices outlined by the Benchmark survey above, including more frequent testing, you'll be far better prepared to handle disaster than the average business.