Thursday, 29 October 2015

Ideas to kick start innovative thinking

A school friend asked me last night about how to build a unicorn-scale business from a relatively small amount of capital. Essentially it's a question of innovation -- given the worst problem in your industry, or the area that most begs for disruption, how do you kick start your brain to think of better solutions? 

So far, I've come up with six thought triggers.
  • The etherealisation of devices. For example, 20 years ago, medical devices consisted of very expensive sensors with a bit of dumb logic to give a result. Today, it's a race to the bottom on dumb sensors with more and more sophisticated logic. What's the cheapest thing you could use to take a measurement?

  • The availability of motion and position sensors. How would it help if you knew exactly where data was entered? Or if you knew how your physical product moved in real-time?

  • Explore some of the demonstrations on http://www.alchemyapi.com/products/demo (now part of IBM Watson). In particular, look at their ability to do sentiment analysis -- to give you a summary of how people are feeling about topics. What could you do with a summary of everything your customers say about you?
  • Most people don't realise how good speech recognition has become: their only experience of it is Siri (which is very, very good), so they don't see how general it can be. Try out a demo of Dragon Dictate, or use "Ok, Google" on an Android phone. What would you do differently if you could run your business on spoken information?
  • Get a feel for how conversational bots are working. Look at this demo of an agent that schedules meetings: https://meekan.com/hipchat/. It has some competitors, such as Amy from http://x.ai/. Do the interactions you have with your customers follow some formulaic approach? How would you do things differently if the initial engagement with your customers was handled by a bot?
  • There are generational differences in the way we run businesses (here's a blog post about a very interesting company that I did some work for: http://blog.ifost.org.au/2015/04/gen-y-businesses.html). What are the assumptions you make because you expect to interact via email?
Greg Baker (gregb@ifost.org.au) has worked with numerous startups -- http://www.ifost.org.au/startups/ for a short list -- as a developer, manager and consultant. He also runs mentoring sessions for entrepreneurs who want to bootstrap their businesses without venture capital or angel investments.

MS-SQL server not backing up

When Data Protector tries to back up a SQL server, the SQL agent contacts the cell manager to get the details of the integration (e.g. what username to use, and whether to use SQL authentication or Windows authentication).

Today I was diagnosing an error message that I never would have expected to see on a MS-SQL server:

Cannot obtain Cell Manager host. Check the /etc/opt/omni/client/cell_server file and permissions of /etc/resolv.conf file.

I'm not exactly sure how I'm supposed to check /etc/resolv.conf on a Windows system. Maybe C:\Windows\system32\drivers\etc\resolv.conf?

Needless to say, in the agent's extreme confusion, it then followed this with:

Cannot initialize OB2BAR Services ([12:1602] Cannot access the Cell Manager system. (inet is not responding)
The Cell Manager host is not reachable or is not up and running
It wasn't exactly that inet wasn't responding -- the agent didn't even know which cell manager to connect to.
We probably would have searched for hours trying to find the cause, but I proposed upgrading the client to match the cell manager version (good practice anyway). The upgrade wouldn't proceed, and we hit the following error message:
"Already part of another cell: cellmgr.ifost.org.au ." Note the extra space!
Staring very carefully at the following registry key, I confirmed that there was indeed a space at the end of the name, where the value had been manually edited. I removed the space.
HKEY_LOCAL_MACHINE\SOFTWARE\Hewlett-Packard\OpenView\OmniBackII\Site\CellServer
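
If you want to check for the same problem yourself, here is one quick way (my suggestion, not an official procedure): from PowerShell, wrap the value in brackets so that any trailing whitespace becomes visible.

PS C:\> "[{0}]" -f (Get-ItemProperty "HKLM:\SOFTWARE\Hewlett-Packard\OpenView\OmniBackII\Site").CellServer
[cellmgr.ifost.org.au ]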

Normally the "already part of another cell" error message means exactly what it says: the stored name doesn't match your cell manager's name, or it is a short name instead of an FQDN, or some problem along those lines.
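
(On a Unix client there's no registry involved: the equivalent check is just to cat the file named in the original error message and make sure it holds exactly the FQDN you expect.)

# cat /etc/opt/omni/client/cell_server
cellmgr.ifost.org.au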

Greg Baker is an independent consultant who happens to do a lot of work on HP DataProtector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's DataProtector pages at http://www.ifost.org.au/dataprotector

Most extreme Ticket to Ride "short journey"

Over the holidays I was playing the very enjoyable board game Ticket to Ride: Europe.

My primary-school age son was creating the strangest combination of railways. He held dozens of cards in his hand through most of the game, and we were vaguely wondering what he was doing and whether he had understood the game.

At the culmination of the game -- when everyone's required journeys are revealed -- he proudly announced that he had completed one of his short journeys (Barcelona to Brussels). When we asked him to show us, he walked us through surely the most arduous and inefficient way of doing it. First the passenger goes across Spain and south down the Italian peninsula. Then a ferry run takes them over to Athens. Then north from Athens into Kiev, and finally west from Kiev to Brussels via Denmark.

CentOS Data Protector agent unable to be installed

A customer asked me today for some help with a CentOS server where the Data Protector agent wouldn't install properly, despite everything looking OK.

The session went like this.

# ./omnisetup.sh -server cellmgr.ifost.org.au -install da,autodr
Cannot access the Cell Manager system (inet is not responding)....

As the cell manager was known to be working, we didn't need to check connectivity to port 5555 on the cell manager. I suggested just running the installation, without the cell manager import step:

# ./omnisetup.sh -install da,autodr

This worked fine. Was the disk agent listening?

# netstat -an | grep 5555
tcp6   0   0   :::5555 :::*    LISTEN

That's odd: why IPv6? The cell manager had IPv6 disabled, so that would certainly have stopped things working.

# grep FLAGS /etc/xinetd.d/omni
FLAGS = IPV6

That's that one explained... use your favourite editor (vi, nano, emacs, gedit...) to set FLAGS = IPV4 if you happen to encounter it. (Don't forget to run service xinetd restart afterwards.)
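
If you'd rather script the change than open an editor, something like this should do it (a sketch -- eyeball the file first in case the spacing differs on your system):

# sed -i 's/IPV6/IPV4/' /etc/xinetd.d/omni
# service xinetd restart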

But things still weren't working: CentOS has a host-based firewall (firewalld). As we didn't have a media agent on this box, the only relevant port was tcp 5555.

# firewall-cmd --add-port 5555/tcp --permanent
# firewall-cmd --reload
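
To double-check, you can list the ports firewalld now permits, and then test the connection from the cell manager (if telnet is installed there; the client hostname below is a made-up placeholder):

# firewall-cmd --list-ports
5555/tcp
# telnet centos-client.ifost.org.au 5555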

And then everything worked correctly.

Thanks to Glen Thompson for doing most of the work investigating this one!

Greg Baker is an independent consultant who happens to do a lot of work on HP DataProtector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's DataProtector pages at http://www.ifost.org.au/dataprotector

Tuesday, 27 October 2015

Vote for me and the robot overlord party!

Well, actually vote for my estimation robot.

Nobody enjoys figuring out how long it's going to take to program a new feature, or close a customer support ticket, or do a step in a project -- so let's hand it over to the robots!

If you run Jira in the cloud, just click here to install it in your instance: https://marketplace.atlassian.com/plugins/au.org.ifost.aae

And if you think that it would be awesome to have better software estimates, and better predictions of how long projects and work will take, then vote!
http://devpost.com/software/automated-estimator-of-effort-and-duration

P.S. Devpost has all sorts of interesting and exciting competitions. It's worth signing up just to get a feel for where the future is headed.

Thursday, 22 October 2015

HP gives up against Amazon

So the HP public cloud is no more. I suspect I might have been one of the larger users of it (for a few weeks back in 2012) so let me try to give a serious analysis of what this means. (HP announcement link)

Amazon AWS is currently supply-constrained. They could lower prices to gain more customers, but then they wouldn't be able to service those customers. This is an unusual position to be in, as almost all of us are in industries where the bottleneck to growth is in acquisition, not delivery. So they ease their prices down little-bit-by-little-bit as they resolve their supply constraints.

Eventually, AWS will start to be demand-constrained, and that's when all hell breaks loose, because then AWS can start doing some serious price cutting. I'd peg it for early 2017 at a guess, when the price cuts will suddenly accelerate until the economics of renting from AWS look competitive with buying a server and putting it on a desk, unsupported, un-networked and unpowered.

Google and Azure can survive Amazongeddon -- they have the money and it's a market that they definitely want to be in. Google App Engine is still a very cost-effective offering -- my total compute and storage budget leading up to launch day (and including it) for the Automated Estimator of Effort and Duration for Jira was $0.22 -- so much for big data analysis being expensive! At that level, price comparisons are utterly meaningless, so if that's profitable now (which it probably is), they can keep doing it.

HP have presumably decided that they don't have enough time to build out a solid customer base on the HP public cloud before Amazongeddon. The HP cloud team is betting that customers will want HP software to manage their clouds, and that an HP-backed public cloud is not worth doing. Operations Orchestration makes sense in a cloudy world, for example.

But there is a problem, because for all the talk of "hybrid public-private clouds", either private is cheaper/better/more secure or public is cheaper/better/more secure.

  • If the answer is "private", then we will continue to have internal customer-owned datacentres, and HPE will continue to sell 3PARs, SureStores, Proliants and so on. 
  • If the answer is "public", then after Amazongeddon, HP won't have a hardware business that anyone cares about.

Unfortunately, I believe the answer is "public", as do many, many other people. To say that "private" clouds are cheaper, better and more secure the majority of the time is to say not only that there are no economies of scale in a big data centre, but that there are diseconomies of scale that are going to appear any moment now from out of nowhere.

This puts HP in the same position as Unisys was in the 1980s-1990s. Customers stopped buying Unisys mainframes, so Unisys had to turn into a services, software and support business. They had a bit of an edge in government and defence at the time, and they worked hard to keep it. I know plenty of people who have had good careers at Unisys, and presumably it's a nice place to work where there is innovation happening. But Unisys in 2015 is not the hallowed place that it was after the Burroughs / Sperry merger.

Without that core of hardware sales on which to stack software sales, Unisys struggled. So too will HP. (And so will Dell, unless Dell decides to take on Amazon... which they could and should.)

I feel sorry for Bill Hilf though, as he has had to lead teams through the collapse of high-end Itanium hardware and now through the failure of the only viable hardware future that HP had.

That said, I'm optimistic about HP Data Protector in particular. There will still be important data to back up and archive. Storing it efficiently for fast recovery will always matter. You can't discard a backup solution until the last of your 7-year-old backups has expired.

I'm hoping that HP will now do three things:

  • Convert the HP cloud object storage device to something that works with S3. Since this feature will be irrelevant in January 2016 if they don't do this, it seems like a no-brainer in order to preserve the R&D investment done so far.
  • Interface into lifecycle management of S3 -- if the "location" of a piece of media is "Glacier", then Data Protector should be able to initiate its re-activation as step 1 of a restore job. Again, this seems a no-brainer if you are already dealing with S3 (there's a sketch of what re-activation involves just after this list).
  • I'd like to see the Virtual Storage Appliance delivered as an AMI (Amazon machine image). This isn't very difficult. Maybe there could be some fiddling around with licensing where the VSA reported its usage and customers paid by capacity per month, but even that's not really necessary.
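
To give a feel for that second point: re-activating a Glacier-tier object is a single API call. Here's roughly what it looks like with the AWS CLI (the bucket and key names are invented for illustration); this is the call Data Protector would need to make as step 1 of a restore:

# aws s3api restore-object --bucket dp-backup-media --key pool1/medium-0001 --restore-request Days=7

The re-activation itself takes a few hours to complete, which is why it needs to be a scheduled first step of the restore job rather than something done inline.
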
If all this happens, then I suspect we'll continue to see HP selling Data Protector for another 30 years. If Data Protector is as useful to customers post-Amazongeddon as it is pre-Amazongeddon, then there's no particular reason it couldn't pass through this critical tipping point. In fact, since I doubt that BackupExec will handle the transition, Data Protector will probably pick up some market share.

Anyway, what are some immediate scenarios this would support?
  • Customer A has a small Amazon presence and a large data centre with a StoreOnce system and some tape drives. They would like to deploy a VSA in the same region as their Amazon servers and replicate their data through low-bandwidth links back to their data centre. 
  • Customer B has a somewhat larger Amazon presence. They have Data Protector in their office, and they want to back up their Amazon content to Glacier. 
  • Customer C is closing down their in-house data centre and moving their servers into the cloud. They want to take backups of their servers in their data centre and use StoreOnce replication to get them into their cloud, where the data is rehydrated.
So if you are a customer like A, B or C, feel free to contact your account manager, suggest that you'd really like Data Protector to support these scenarios, and see how you go. (Or get in touch with me and I'll collate some answers back to the product team.)


Greg Baker is an independent consultant who happens to do a lot of work on HP DataProtector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's DataProtector pages at http://www.ifost.org.au/dataprotector


Monday, 19 October 2015

Data Protector reporting with Navigator through a firewall

Data Protector has a number of built-in reports, which you can email, put on an intranet, pipe through some other command and various things like that. I wrote up the complete list of built-in reports here:
http://blog.ifost.org.au/2015/04/data-protector-built-in-reports.html

HP's strategic direction for reporting appears to be Backup Navigator. It is licensed purely on the capacity of the cells it is reporting on. (In other words, the size of a full backup of all the data being protected is the capacity you license.)

It produces some nice reports.

I was working with a customer who had had some problems with connectivity on Navigator 9.1.

Their cell managers had an omnirc file which limited the ports open for CRS connections:

OB2PORTRANGESPEC=CRS:20495-20499

Having only five ports open was enough for them, so we organised to have ports 20495-20499 opened from their Navigator server to their cell manager.
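
A quick way to verify that sort of rule from the Navigator server, assuming a bash shell there (the hostname and ports are the ones from this example):

# for p in $(seq 20495 20499); do timeout 2 bash -c "</dev/tcp/cellmgr.ifost.org.au/$p" && echo "port $p open"; done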

As it turns out, this is not enough. You need to have connectivity open from the cell manager back to the Navigator server as well. This isn't documented anywhere, and there's no error report from Navigator about this.

This problem goes away somewhat in Navigator 9.21 and 9.3 because you can do agent-based push. This is where you run a program on your cell manager which connects to Navigator on port 443 (HTTPS) and uploads the information that Navigator needs.

This solves the problem in two ways:

  • With the new agent model, there's no need to open anything from the Navigator server to the cell manager, so you can put the Navigator server in quite an isolated network.
  • It's entirely possible to run the Navigator server in the cloud (HPE offer a three-month trial) and have your Data Protector reporting handled by a third party.

Greg Baker is an independent consultant who happens to do a lot of work on HP DataProtector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's DataProtector pages at http://www.ifost.org.au/dataprotector

Thursday, 1 October 2015

Are you having problems with estimates for projects and tasks?

I've developed a plug-in for Jira that uses machine learning to predict how long it will take for a ticket to be closed, and how much work effort will be required. Based on my current data, I'm getting 50% of tickets predicted correctly within a factor of 2, which I think is better than most human beings can do.

I'm looking for beta testers to confirm that it is all working as it should. If you are currently using Jira in the Atlassian Cloud (Agile, Service Desk or just vanilla), this will be a one-click add-on.
Contact me (gregb@ifost.org.au) if you are interested in trying it out.