Search This Blog

Tuesday, 27 May 2014

Data Protector 8.12 released

I've now deployed 8.12 for a customer (it came out on Friday).

It is available as a patch to 8.10 if you have a valid support contract.

Since then, 8.13 has been released, and the general recommendation is to go to version 9. Don't forget to buy my book on Data Protector.

Saturday, 24 May 2014

When debug logs aren't good enough

Normally, when you want to get more information about what is going on inside an HP Data Protector process, you simply turn on debugging: edit /opt/omni/.omnirc on Linux/Unix or C:\ProgramData\Omniback\omnirc on Windows and add a line:

OB2DBG=1-200 myproblemname.txt

All processes read this file when they start. You might want to be aware of OB2DBG_DIR so that you can direct the (massive) files somewhere with a lot of room. (For example, if you put your debug logs onto the same drive as your internal database, you might start getting internal database errors because you have run out of disk space.)
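A minimal omnirc excerpt putting those two settings together might look like this (the variable names OB2DBG and OB2DBG_DIR are as described above; the directory path and debug range here are just examples):

```shell
# /opt/omni/.omnirc (Linux/Unix) or C:\ProgramData\Omniback\omnirc (Windows)
OB2DBG=1-200 myproblemname.txt   # debug range and a suffix for the log filenames
OB2DBG_DIR=/var/tmp/dp-debug     # send the (massive) debug files to a roomy filesystem
```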

And generally, there's more than enough information in there to figure out what's going on.

But this week, I hit something a bit more obscure. I had a VEAgent backup (which was supposed to be backing up VMware) hang very early in the session. Even more bizarrely, it was hanging before it even initiated a connection to the vSphere server.

The debug logs were uninformative. I really needed to know what it was hanging on.

Fortunately, almost all processes running on cell clients (i.e. almost everything that isn't part of the cell manager's infrastructure) are launched from either the Windows omniinet process or inetd or xinetd on Linux / Unix. The main exception is the StoreOnce processes, which start at boot time.

I was able to reproduce the hang on both the initial discovery (when you try to browse for a backup) and starting an actual backup, so I decided to focus on the initial discovery first, which is done by the vepa_util.exe process. (It's even got the .exe ending on Linux!)

So I moved the vepa_util.exe binary out of the way...

mv /opt/omni/lbin/vepa_util.exe /opt/omni/lbin/vepa_util.exe.bin
And then created the following replacement:

#!/bin/sh
exec strace -f -ff -o /tmp/tracelog \
        /opt/omni/lbin/vepa_util.exe.bin "$@"
This script launches the original binary (with all its command-line arguments) under strace; with -ff, each traced process gets its own trace file, with the PID appended to the filename.
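Two practical details worth remembering with this trick (paths as in the post):

```shell
# the wrapper must be executable, or omniinet can't launch it
chmod 755 /opt/omni/lbin/vepa_util.exe

# and when you're done tracing, put the real binary back:
# mv /opt/omni/lbin/vepa_util.exe.bin /opt/omni/lbin/vepa_util.exe
```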

Then I tried to start creating another backup. The GUI froze, as before, but this time I could look at the last line of the trace file to see which system call was blocking: a read on file descriptor 10. But what was file descriptor 10? A quick lsof -p on the vepa_util.exe process ID turned up a socket connection back to the cell manager:

vepa_util.exe ....  TCP vmclient:52395->cellmanager:2315

And even more curiously, when I logged into the cell manager and looked at the socket there, there were bytes waiting to be sent: the send-queue (send-Q in netstat) wasn't empty.
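The cell-manager-side check can be sketched like this (the ephemeral port number 52395 is taken from the lsof output above; any standard netstat will do):

```shell
# on the cell manager: a non-zero Send-Q means the kernel has bytes queued
# for this socket that the peer has never acknowledged
netstat -tn | grep 52395
```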

It turned out to be a firewalling issue (which I'll try to blog about later), but there would have been no way of seeing that from the Data Protector debug logs.

Greg Baker is an independent consultant working on HP DataProtector, LiveVault and many other technologies. He is the author of the only published book on HP Data Protector. See more at IFOST's DataProtector pages.

Wednesday, 14 May 2014

Keywords from emails on Google App Engine

Today I was working on a project which needed to have a summary of comments (especially emails) sent by the staff working on it.

The software runs on Google App Engine (because I wrote it), so it was surprisingly easy to add this courtesy of AlchemyAPI.

App Engine can receive emails; it's just a matter of adding an inbound service and a handler to the app.yaml configuration:

inbound_services:
- mail

handlers:
- url: /_ah/mail/.+
  script: handle_email.app   # hypothetical module name -- point this at your own handler
  login: admin

Skipping most of the actual program, it was essentially this.

from google.appengine.ext.webapp.mail_handlers import InboundMailHandler
import alchemyapi

class EmailHandler(InboundMailHandler):

    def receive(self, message):
        (content_type, body) = message.bodies().next()
        a = alchemyapi.AlchemyAPI()
        data = a.keywords('html', body.decode(), {'sentiment': 1})
        for k in data['keywords']:
            pass  # store keywords and sentiment analysis here
That extracts the body out of the email, calls out to the Alchemy API, and returns the keywords from the email including the sentiment of the words around it (whether the word is positive or negative).

A bit of JavaScript had it displaying the keywords in colour. I sent the following email:

From: [email protected]
To: [email protected]
Subject: Site report
They had some lovely stone gargoyles and some horrible fountains.

And out came some coloured summary keywords: stone gargoyles, fountains.

Greg Baker ([email protected]) is an independent consultant. If you are an industry leader and you need someone to catch your vision and help you make it reality, Greg might well be the right person to call.

Tuesday, 13 May 2014

Backing up a single server

Last week I was talking to a reseller (not surprising; almost all my clients are resellers or channel partners of some kind) who was asking about cost-effective options he could resell to back up a single stand-alone server for one of his clients.

Obviously, there are the built-in programs and numerous free programs, but here is my quick grab-bag of reseller-friendly options:

  • They have outstanding support, and only use open tools. They support all versions of Unix and Linux. You don't end up locked into anything complicated. They are the highest-priced, but offer the best reseller discounts particularly at high volumes.
  • LiveVault. This is HP's monthly-fee backup to the cloud. While the pricing looks high, it's a price for seven years of storage. And again, the discount is based on the total across all your customers, so you can either give unbeatable discounts or keep extra margin.
  • DataProtector Single Server edition. This is the same as the enterprise version of HP Data Protector, but licensed only for a single server writing to tape. This is not the same as DataProtector Express, which was a tiny free product that used to come with HP tape drives. (Single Server edition is product number B7030BA.)
Quite often, though, customers wanting a higher level of assurance around their long-term backups might well be advised also to investigate:
  • Using Google Apps + Spanning backup as this removes a huge number of localised physical threats.
  • Storing everything (including business documents) in subversion. While software developers prefer git, auditors prefer subversion and non-technical people find it easier. This automatically meets a lot of the ISO9000 documentation requirements, allows simple retrieval of older versions, and can be hosted externally.
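As a sketch of that "simple retrieval" point, pulling an older version of a document back out of subversion is two commands (the filename and revision number here are made up for illustration):

```shell
svn log business-plan.docx                       # find the revision you want
svn cat -r 42 business-plan.docx > business-plan-r42.docx
```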

Greg Baker is an independent consultant working on HP DataProtector, LiveVault and many other technologies. He is the author of the only published book on HP Data Protector. See more at IFOST's DataProtector pages.

Friday, 9 May 2014

One of the many ways that an internal database backup can fail

If you have a Windows-based cell manager for your Data Protector backups and you change the password for the account that it runs as, then from that point on your IDB (internal database) backups will suddenly start to fail.

It will look like this in the session report:

[Normal] From: [email protected] "IDB"  Time: 9/05/2014 9:45:59 AM
Backup session 2014/05/09-256 started.

[Normal] From: "IDB"  Time: 9/05/2014 9:45:59 AM
OB2BAR application on "" successfully started.

[Normal] From: "DPIDB"  Time: 9/05/2014 9:45:59 AM
Checking the Internal Database consistency

[Normal] From: "DPIDB"  Time: 9/05/2014 9:46:01 AM
Check of the Internal Database consistency succeeded

[Normal] From: "DPIDB"  Time: 9/05/2014 9:46:01 AM
Putting the Internal database into the backup mode finished

[Critical] From: "DPIDB"  Time: 9/05/2014 9:46:01 AM
Putting the Internal Database into the backup mode failed

[Normal] From: "IDB"  Time: 9/05/2014 9:46:38 AM
OB2BAR application on "" disconnected.

[Critical] From: "IDB"  Time: 9/05/2014 9:46:38 AM
None of the Disk Agents completed successfully.

Session has failed.

If you turn on debugging (by putting "OB2DBG=1-500 idb-backup.txt" into the omnirc file) you'll see copious messages in the debug log. Digging through them you'll see the following:

[ 10] Inet configurations for user [email protected]' does exist.
[ 55] Data from registry for user [email protected] successfully read.
[ 10] Data prepared.
[ 10] User logon failed with [1326] The user name or password is incorrect. 


psql: FATAL:  SSPI authentication failed for user "hpdp"

The reason is of course that the backup is initiated by the Inet process (which runs as Local System), and it tries to switch user to DP_SVC because that's the account everything else runs as. It needs credentials to do this, and after the password change it no longer has the right ones.

(Solution: start the DP GUI and choose "Clients". Right-click on the cell manager, delete the existing impersonation and add a new one. Note that you might have installed DP with another account name instead of DP_SVC; you can tell by looking at which user the other DP services run as.)
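If you prefer the command line, the Inet impersonation credentials on Windows cell members are managed by the omniinetpasswd utility; the exact flags vary by version, so check your CLI reference, but listing the stored credentials is a safe first look (the account name below is just this post's example):

```shell
omniinetpasswd -list
# add or update a stored credential with -add / -modify (syntax per your version's CLI reference)
```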

Greg Baker is an independent consultant working on HP DataProtector, LiveVault and many other technologies. He is the author of the only published book on HP Data Protector. See more at IFOST's DataProtector pages.

How to kill a hanging session

If something goes badly wrong in a backup session (e.g. a loss of communications, or a process dying unexpectedly) you can end up with the BSM process (bsm.exe on Windows and /opt/omni/lbin/bsm on Linux/Unix) still running even though it doesn't achieve anything. Aborting doesn't help because the session manager process (BSM) tries to send that abort to the disk agent and the media agent, which won't work if they don't exist any more.

All you need to do is kill the relevant BSM process.

Which is easy when there is only one, and hard when you are in the middle of hundreds of backup sessions.

Sometimes it is easy, because you can look at the start time of the process and deduce which session it is.

But here's a way of getting it that's a bit more scientific.

Get the full command-line of the process in question, either with "ps -ef" on Linux / Unix or by fiddling around with the displayed columns in Task Manager on Windows.

You should see something like:
  bsm -session_key 17 -owner .....

That session key (17 in this case) is the initial identifier that distinguishes one session from another. If your database is incredibly slow, you might even see it in the session monitor for a while. But generally a real session number is allocated so quickly that the session key is invisible.

That doesn't stop you from using the session key as an argument to any command, though; you just have to prefix it with "R-".

For example:
  omnistat -session R-17 -status_only

This will produce output like this:

Session ID    Type    Status     User
2014/05/09-2  Backup  Progress   [email protected]

That's the session ID I was looking for, so I know which bsm process to kill now.
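Putting the whole hunt together, it can be sketched like this on a Linux cell manager (session key 17 as in the example above; the PID placeholder is whatever ps reported):

```shell
ps -ef | grep '[b]sm'                  # note each BSM's PID and its -session_key value
omnistat -session R-17 -status_only    # map session key 17 to its real session ID
kill <PID-of-the-matching-bsm>         # then kill only the BSM you identified
```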

[Addendum: I'm running a survey to see if there's interest in getting omniabort modified to support this capability.]

Greg Baker is an independent consultant who happens to do a lot of work on HPE DataProtector. He is the author of the only published books on HP Data Protector. He works with HPE and HPE partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's DataProtector pages, or visit the online store for Data Protector products, licenses and renewals.

Thursday, 8 May 2014

DataProtector undocumented features -- running a script against the database

In my (in)famous write-up of how to un-break the DataProtector IDB after an upgrade, I gave a convoluted procedure for getting SQL-level access to the database.

I've since discovered that the password is simply Base64 encoded in the $OMNICONF/config/server/ID/idb.config file.

But today I discovered an undocumented omnidbutil feature:
omnidbutil -run_script sqlcommands.sql  -detail
Just create commands that you want to run in a text file. If you are on Windows, make sure the SQL file is saved in ASCII format rather than Unicode.
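A harmless first smoke test, since the IDB is PostgreSQL underneath, is a query that can't touch any data:

```shell
# create a trivial SQL file and run it against the internal database
printf 'SELECT version();\n' > sqlcommands.sql
omnidbutil -run_script sqlcommands.sql -detail
```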

Much easier and I'm going to put that into the next edition of my Data Protector book.

Greg Baker is an independent consultant working on HP DataProtector, LiveVault and many other technologies. See more at IFOST's DataProtector pages.

Monday, 5 May 2014

DataProtector + LiveVault

HP have three backup products: DataProtector, LiveVault and Connected. LiveVault and DataProtector overlap somewhat because they are both about backing up servers.

LiveVault is designed to be a cloud-enabled backup, with an on-site in-house cache as an extra. It has no option for backing up to tape.

HP DataProtector is designed for in-house backup to tape or to a deduplication store. You can set up a deduplication store on any cloud-hosted server and do very low bandwidth replication to it.

LiveVault can back up Windows or Linux, and it has integrations with SQL, Exchange, VMware and Hyper-V. HP DataProtector has all of these too, plus other integrations.

DataProtector can control LiveVault jobs too. But it's not obvious when it makes sense to use the LiveVault integration to do a backup instead of DataProtector.

Looking at the Australian pricing, here are the scenarios where a DataProtector customer will do better with LiveVault than with DataProtector.

  1. You have some small branch offices or cloud-hosted servers with less than 25GB to back up, and you don't have an Advanced Backup to Disk license for DataProtector. The smallest Advanced Backup to Disk license is for 1TB, and even though the per-GB costs are lower, they aren't 40 times cheaper.
  2. You have some cloud-hosted servers in the HP Public Cloud or Amazon AWS and you don't want to use the network-to-network VPN options that Amazon and HPPC offer. You might even have servers in a DMZ where you can't open up ports 9387 and 9388 to do a StoreOnce backup. It makes sense to use LiveVault because you will have better bandwidth from the LiveVault servers to your cloud servers than you would have from an in-house data centre.
  3. You have a large number of small SQL server databases spread out over lots of computers. With DataProtector you would be paying for an integration license for each SQL server; with LiveVault you only pay for the volume of data.
  4. You have a large number of small MS-Exchange, VMware or Hyper-V servers. It's the same situation as for SQL Server, but I don't think I've ever seen a site where any of these are small enough.
  5. A pair of small TurboRestore appliances spread across two data-centres can sometimes work out more cost-effective than StoreOnce software stores, but it only seems to work out for a few particular sizes. As far as I can tell it only works out for almost precisely 8TB or 12TB.

What else are you using the LiveVault integration for?

Greg Baker is an independent consultant working on HP DataProtector, LiveVault and many other technologies. See more at IFOST's DataProtector pages.