Search This Blog

Tuesday, 3 March 2015

Suggestions and updates for Data Protector books - a big thank-you to Chris Bhatt

Chris contacted me today having found numerous typos and errors. The corrections will propagate through the system over the next few days: Kindle users should see them update silently; I can send an updated PDF if you ping me; and future print copies ordered through CreateSpace will be the corrected version. Contact me if you want an errata list.

Chris also pointed out a number of things that I should have included, but didn't. So here goes:

  • When you are writing to a virtual tape library (VTL) that does any kind of de-duplication, set the concurrency to 1. This keeps the data stream nearly identical from one backup to the next, which maximises your de-duplication. But there are some exceptions:
    • If you are using mhvtl on a Linux machine to emulate a tape library, then concurrency makes no difference. mhvtl does compression, but only over very short runs of symbols that it has already seen in that session.
    • If you are using a StoreOnce box, then don't use the virtual tape library option. Use Catalyst stores (StoreOnce devices) instead: they use space more efficiently, and they keep track of what has expired (and free it up). If you are worried about performance and want to do this over Fibre Channel, that is possible if you are on Data Protector 9.02 and a suitable firmware version (3.12, for example).
  • I should have written more on whether each server should have its own backup specification or whether one backup specification should contain multiple servers. I think I'll do a blog post on this; when I've written it, I'll link to it here.
  • It's worth reminding everyone of Stewart McLeod's StoreOnce best practices. I had a customer just last week who lost their system.db file -- but they could just as easily have lost a store.db -- because of some disk corruption. I will try to expand on these and a few other suggestions in the next edition.
  • Another question that deserves an answer is "what's the best way to back up and restore files on a volume that has many thousands or millions of small files?". In version 7.x and before, this was a major bugbear because the database was so slow that it could become the bottleneck; often the only option was to turn off logging altogether on that backup object. Even today it's worth splitting the volume up into smaller objects (which I mention in chapter 6 -- look for Performance - Multiple readers in the index). I've also since realised that I never quite described exactly how the incremental algorithm works with multiple readers either.
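To see why concurrency 1 matters for de-duplication, here is a toy simulation. It uses fixed-size chunking over SHA-256 digests, which is not StoreOnce's actual algorithm (real appliances use variable-length chunking), and the record and chunk sizes are made up for illustration -- but the effect is the same: with concurrency 1 a repeat backup produces a byte-identical stream and dedupes almost completely, while multiplexed streams interleave differently every session, so the chunks never line up again.

```python
import hashlib
import random

CHUNK = 16  # fixed-size chunking: a toy stand-in for a real dedup engine

# Two clients' (unchanged) data, as 8-byte records.
data_a = b"".join(b"A%07d" % i for i in range(512))
data_b = b"".join(b"B%07d" % i for i in range(512))

def new_chunks(store, stream):
    """Count chunks of `stream` not already in `store` -- i.e. the data
    the dedup device actually has to write for this session."""
    fresh = 0
    for i in range(0, len(stream), CHUNK):
        digest = hashlib.sha256(stream[i:i + CHUNK]).digest()
        if digest not in store:
            store.add(digest)
            fresh += 1
    return fresh

def multiplexed(seed):
    """Interleave 8-byte records from both clients in a timing-dependent
    order -- roughly what a device with concurrency 2 sees."""
    rng = random.Random(seed)
    qa = [data_a[i:i + 8] for i in range(0, len(data_a), 8)]
    qb = [data_b[i:i + 8] for i in range(0, len(data_b), 8)]
    out = []
    while qa or qb:
        q = rng.choice([q for q in (qa, qb) if q])
        out.append(q.pop(0))
    return b"".join(out)

# Concurrency 1: the same data makes the same stream both times, so the
# second backup stores nothing new at all.
store = set()
first = new_chunks(store, data_a + data_b)
second = new_chunks(store, data_a + data_b)

# Concurrency 2: each session interleaves differently, chunk contents
# shift, and the second backup stores mostly "new" chunks.
store = set()
mux_first = new_chunks(store, multiplexed(1))
mux_second = new_chunks(store, multiplexed(2))

print(f"serial repeat stored {second} of {first} chunks")
print(f"multiplexed repeat stored {mux_second} of {mux_first} chunks")
```

Run it and the serial repeat stores zero new chunks, while the multiplexed repeat stores nearly all of them -- which is exactly the dedup ratio you give up by multiplexing.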
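On the small-files question, the multiple-readers idea from chapter 6 boils down to splitting one huge tree into several backup objects of roughly equal size so they can be read in parallel. A minimal sketch of one way to do that balancing -- the function names are mine, not Data Protector's, and a greedy split by file count is just one reasonable heuristic:

```python
import os

def count_files(path):
    """Number of files anywhere under `path`."""
    return sum(len(files) for _, _, files in os.walk(path))

def balance(sizes, n=4):
    """Greedily assign each named subtree to the currently-smallest of
    `n` groups, balancing by file count. Each resulting group would
    become one backup object (one reader)."""
    groups = [[0, []] for _ in range(n)]  # [total_files, [subtrees]]
    # Largest subtrees first gives the greedy heuristic a better balance.
    for size, name in sorted(((s, p) for p, s in sizes.items()), reverse=True):
        g = min(groups, key=lambda g: g[0])
        g[0] += size
        g[1].append(name)
    return groups

def split_into_objects(root, n=4):
    """Partition the top-level subdirectories of `root` into `n` groups."""
    sizes = {d.path: count_files(d.path) for d in os.scandir(root) if d.is_dir()}
    return balance(sizes, n)
```

For example, `balance({"a": 10, "b": 6, "c": 4, "d": 2}, n=2)` puts the 10-file subtree in one group and the other three (12 files between them) in the other. The counting pass costs a walk of the tree up front, but that is cheap compared with backing up millions of files through a single reader.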
What else should I add? What have I missed? What would have helped you when you started out?

Put any comments below (as blog comments, or on Google+) or email me ([email protected]) with your suggestions.

Greg Baker is one of the world's leading experts on HP Data Protector, and offers consulting services on it. He has written numerous books on it, and on other topics. His other interests are startup management, applications of automated image and text analysis, and niche software development.