Wednesday 23 December 2015

Please write a review

To everyone who downloaded a copy of any of my Data Protector books last week when I put them on special -- thank you! It was a very nice birthday surprise to take out #3, #4 and #5 positions simultaneously in Amazon's Business Software books.

If you found any of those books helpful, don't forget to leave a review: even just leaving a number-of-stars rating helps me work out what to focus on.

And if you missed out, you can buy them (they aren't very expensive even when they aren't on special) at http://www.ifost.org.au/books

Tuesday 22 December 2015

Data Protector media agent licenses

There are two licensing models:

  • Classic licensing (which is far more common)
  • Capacity-based licensing

If you are using the newer capacity-based licensing, then you can use any HPE Data Protector functionality that you want. If you are using Classic licensing (which is what almost every customer has), then you pay individually for different components.
For example, you pay a license for each tape drive you want to have concurrently writing. When you buy the cell manager license, you get the right to run one tape drive; if you want more you need to buy additional media agent licenses.
Otherwise, you will get error messages like this:
[61:17102] Not enough licenses "Direct attached tape drive for Windows / NetWare / Linux".

Or like this:

[61:17102]  Not enough licenses "Tape drive for SAN / all platforms". Session is waiting for some of devices to get free.

If you are encountering this, you can get a super-quick quote on licensing at the online store: http://store.data-protector.net/
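
Before buying, though, it's worth confirming what you already have. On the cell manager, omnicc will report the installed licenses and how many are in use (exact output varies by version):

omnicc -check_licenses -detail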

Greg Baker is an independent consultant who happens to do a lot of work on HP Data Protector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's Data Protector pages at http://www.ifost.org.au/dataprotector, or visit the online store for Data Protector products, licenses and renewals at http://store.data-protector.net/

Friday 18 December 2015

Happy Birthday to me

I wanted to spread some birthday cheer before the Christmas cheer kicks in. That's the trouble with a birthday the week before Christmas. So I've fiddled around on Amazon so that it is running a special price on When Medusa went on Chatroulette today.

Use the opportunity to read something funny and uplifting, or buy it as a last-minute present for that special geek in your life.

I'll donate today's earnings to whatever charity gets talked about in the comments.

Wednesday 16 December 2015

Data Protector licenses available for purchase online

In what I think is a first for HPE -- certainly in Australia! -- you don't need to go through a traditional sales channel to buy additional Data Protector licenses any more.

All prices are in AUD -- convert them into your currency to see how cost effective buying through http://store.data-protector.net/ is.

Monday 14 December 2015

Where to find the next evil mastermind

Today's silly poem was the result of watching James Bond and reading about startups too close together.

http://www.ifost.org.au/~gregb/poetry/valley.html

I write nerd-geek poetry, poems that are completely unsuitable for anyone with a liberal arts major, but also guaranteed to bring a smile (or a laugh) to anyone who is into sci-tech. Most of my poetry has been published in a recently-released (and very affordable) book: When Medusa Went on Chatroulette. They are also available at http://www.ifost.org.au/~gregb/poetry

Thursday 10 December 2015

Celebratory 100th blog post - what people have actually been reading, and what freebies and junkets are on offer

This is the 100th blog post, and the counter of page views is about to tick over 50,000. Thank you for your readership!

According to Google's infallible stats counters, here's what most people have been reading on this blog:

POEMS - At the end of a day of being a computer nerd, you need something that will make you laugh (or at least smile) and also make you look cultured among your friends. Nobody else writes poetry about nuclear physics or time travel, so if you want to get that "I'm so hip" feel, you really should buy a copy of When Medusa went on Chatroulette for $3 (more or less, depending on your country of origin).

NAVIGATOR - If you run Data Protector then you will definitely get some value out of the cloud-hosted Navigator trial. You can get reports like "what virtual machines are not getting backed up?" and "how big will my backups be next year?" -- stuff that makes you look like the storage genius guru (which you probably are anyway, but this just makes it easier to prove it).

HOW LONG WILL IT TAKE - If you are tracking your work in Atlassian's fabulous JIRA task tracking system, then try out my free plug-in (x.ifost.org.au/aeed) which can predict how long tasks will take to complete. And if you are not using JIRA, then convince everyone to throw out whatever you are using and switch to JIRA because it's an order of magnitude cheaper, and also easier to support.

TRAINING COURSES - You can now buy training online from store.data-protector.net -- and it appears that it's 10-20% cheaper than buying from HPE directly in most countries. There are options for instructor-led, self-paced, over-the-internet and e-learning modules.

SUPPORT CONTRACTS -  Just email your support contract before its renewal to gregb@ifost.org.au and I'll look at it and figure out a way to make it cheaper for you.

BOOKS - If you are just learning Data Protector, then buy one of my books on Data Protector (available in Kindle, PDF and hardback). They are all under $10; you can hide them in an expense report and no-one will ever know.

Wednesday 9 December 2015

Stumped by a customer question today: when to replace a cleaning tape

It was an innocent enough question: "when will I need to replace this cleaning tape?"

And I realised that not only did I not know the answer but, in fact, I'd never replaced a cleaning tape in 20+ years of backup work. Sure, I've put one in when deploying a system, but I tend to forget about it after that.

Estimates from drive manufacturers suggest that a drive should be cleaned every month. Backup logs, on the other hand, suggest that tape drives only request cleaning about every six months.

But that data is mostly from tape drives inside a tape library, where the amount of dust getting in will be less than for a standalone tape drive.

The spec sheet on HPE's universal LTO Ultrium cleaning kit suggests that it should be good for between 15 and 50 cleans.

Put together, that means a cleaning tape should be replaced somewhere between roughly once a year (15 cleans at the manufacturers' monthly interval) and once every quarter century (50 cleans at the observed six-month interval), which is not very helpful!

I believe the data from the tape drives themselves reporting "I'm dirty" over the vendor suggestions, so even taking the low end of the HPE spec sheet, a cleaning tape in a tape library should be good for about 7 years (15 cleans at six-month intervals). Since that's enough time for at least two generations of tape technology to come and go, it's probably safe to assume that you will have bought a new tape library before the cleaning tape wears out.

But if you have multiple tape drives, and it has been a couple of years since you last replaced the cleaning tape, errm, maybe it's worth buying one. I'm not selling tapes at store.data-protector.net yet, so my best suggestion is this vendor on Amazon: LTO Ultrium cleaning kit.

Incidentally, if you do have a tape library and you are running HPE Data Protector, then you will almost definitely want to sign up for the free cloud-hosted Backup Navigator trial here: Free Backup Navigator Trial at HPE so that you can see which are your most unreliable tape drives -- perhaps they need cleaning!

Greg Baker is an independent consultant who happens to do a lot of work on HP Data Protector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's Data Protector pages at http://www.ifost.org.au/dataprotector, or visit the online store for Data Protector products, licenses and renewals at http://store.data-protector.net/

Sunday 6 December 2015

Data Protector reporting - the free hosted trial - sessions by session status

Report-of-the-day from Backup Navigator: what percentage of your backups are completing successfully?

HPE are running a free offer at the moment that you really, really want to take up. If you are running Data Protector, and you would like to get some insight into your backup environment, then Backup Navigator is the product that you want to buy.

Navigator generates reports -- beautiful reports -- that tell you everything from the basic and simple, through to the utterly awesome.

Backup Navigator is designed to run either in your own data centre (where it will use the Data Protector protocols to talk to your cell manager) or it can be hosted elsewhere (and you install a small agent that talks to the Navigator server via HTTPS).

To show customers how amazing it is, HPE are offering a three-month trial. HPE already have the Navigator server in place. You just need to install the Navigator agent on a Windows or Linux box in your cell somewhere. (It doesn't even have to be the cell manager, so you probably don't even need to raise a change control for it.)

No credit card or purchase order or commitment. HPE appear to be pretty confident that you'll like what you get! Sign up for it here: http://engage.hpe.com/Reg_NavTrial_Blog

Wednesday 2 December 2015

My part in the making of WiFi

Between 1994 and 1996 I was working at CSIRO Radiophysics (which turned into Telecommunications and Industrial Physics). Terry Percival was my boss's boss, and Diet Ostry and I shared an office. This story happened just a little before Terry, Diet and the two Johns started applying the radio signal unsmearing algorithms that CSIRO eventually patented, and which formed part of the WiFi standard.

One day Dr Percival set me (fresh-faced, obnoxious, know-it-all graduate) the challenge of solving the hardest problem in radio communications at the time: how can A and B communicate reliably, if A can't detect C's signal, and C can interfere with B?

My thought on the matter was that everyone was mis-stating the problem. It's only a serious problem if you want to broadcast at 2.4 GHz. If you drop the frequency of the signal down to something so low that even an iron-ore mountain is transparent to it, it would be a very strange environment in which A and C couldn't communicate.

So the real problem, I reasoned, was that we were trying to do high-speed networking. What we should have been researching was extremely low-speed networking: how could we have useful and reliable communication at only a few bits per second?

Latency, jitter, high-speed CPUs to perform processing -- all these hard problems go away when you are only dealing in bits per second.

There were three other very good reasons why I thought low-speed networking was the right thing to look at, too: Linux, mining and submarines.

At the time, Linux was just making inroads into our thinking. The business world was dominated by IBM mainframes, and (even in 1996) a Windows 3.11 crash was a daily experience in most people's workday.

The prevailing opinion that the team in the signal processing wing of CSIRO Radiophysics developed was that source-available (free-to-modify) software was unstoppable, and in a short time would conquer everything else, particularly Microsoft. After all, if the source was available, the program could never truly become unavailable or die, like proprietary software would. Software distribution bloat was about to go away, because we would all be getting our software in source form and compiling it. The days of elegant software that did exactly what it was supposed to without cruft were just around the corner because of the massive growth in volunteer developers who would tidy up anything and everything.

Which led me to the conclusion that we wouldn't really need high-speed networks. The future was going to be everyone having these extremely reliable, high-performance desktops (32-bit Linux never crashed; the difference in stability and performance was night and day compared to 16-bit Windows 3.11). All the software we would ever want would already be on our local hard disks -- all of it free -- and there simply wouldn't be enough "stuff" to send over a network to even justify upgrading existing 9.6k modems. (I used to dial in on a 2.4k modem most of the time, myself.)

I had been working on a related geophysics project as well. It ran on Linux (tying into the future-of-operating-systems theme) and involved sending radio transmitters and receivers down boreholes in order to draw conclusions about the kinds of rocks in a region. It seemed like geophysical technologies were going to be a significant part of Australia's research future (at least I got something right!), and the need to deliver communications down into mines (where very low bandwidth would be inevitable) seemed like a worthwhile research direction.

The issues with the Collins class submarines at the time (including: how do we communicate with a submarine deep underwater?) made it seem to me like all the arrows were pointing at low-speed rather than high-speed communication.

I was so convinced that Terry and Diet (and John Deane, who was just down the corridor; and John O'Sullivan, whom I think I interacted with a couple of times) were on the wrong track that I ended up quitting CSIRO and joining a private consultancy. This probably diverted me away from academia altogether, which is where I otherwise would have gone. With the funding cuts that have hammered Australian research in the last few years, I'm kind of glad about this.

And it was fortunate for everyone else that I quit; I suspect I would have been a pain to work with if I'd stayed, and I'm sure I would have tried (probably unsuccessfully) to push the research in all the wrong directions. I suspect I might have done such a bad job that the team might never have made any progress towards what we now call 802.11b WiFi. On this basis, can I claim that I played a role in the creation of WiFi? By leaving, and letting the team hire someone who actually had a clue what they were doing?

I'd like to say that I learn from my mistakes.

Before I got the private consultancy job, I applied for a quant-like role at County NatWest, which in the end I turned down (another lucky save, given their later history). I was asked how I thought County NatWest could make use of the Internet. My answer was that since no-one in their right mind would transfer money over the internet, all it could ever be was an information portal.

A decade later (in 2007) I left Google because I was fairly convinced that it was going to fall apart within a few years: as Wikipedia became ever more trustworthy, it would become everyone's first port of call for search. It's 2015 now, and as I search using Google over my home WiFi connection from a proprietary operating system, I have to admit that Bill Gates, Eric Schmidt and Terry Percival were right, and I was wrong.

Based on this, feel free to ignore anything in this blog that you disagree with, since it's almost definitely wrong. But I still think I'm right when I say that my book of nerd-geek poetry has the best poems about nuclear physics you'll ever see. (And some fun stuff with robots, AI, first contact, and all sorts of other topics. There's even a vampire-at-the-blood-bank.) You really should go and buy it for yourself or your nearest and dearest nerd-geek friends. Here's the Amazon link: When Medusa went on Chatroulette.

Friday 27 November 2015

Buying Data Protector licenses and support contracts online

Over the next few days you will see my Data Protector online store add some new products, including HPE Training Units.

As it turns out, even though they are sold from store.data-protector.net, these Training Units can be used for just about anything (e.g. the new HP Records Manager (HP RM) / TRIM e-learning modules, face-to-face courses at HP, or VILT classes).

Greg Baker is an independent consultant who happens to do a lot of work on HP Data Protector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's Data Protector pages at http://www.ifost.org.au/dataprotector, or visit the online store for Data Protector products, licenses and renewals at http://store.data-protector.net/

How to avoid getting lumped with the tickets that take forever to resolve -- Sydney Atlassian Devops Groups

For those that were (and those that weren't) at the Sydney Atlassian Devops Meeting last night, my presentation is here: http://prezi.com/ukofmue_rgf0/

Devops poetry

I gave a talk last night at the Sydney Atlassian Devops Meetup, and was inspired to write something for it. If you like this, you'll enjoy When Medusa Went on Chatroulette -- go and pre-order it today!

Everyone I know in Devops

Perl is no worry, and I grok Ruby gems;
I've wrestled a Python or two;
I've built an app (along with a friend)
And run a VM with Haiku.

I've written Java and downloaded Go;
And Haskell, I love -- it's a joy.
Ansible, Salt and Chef, those I know:
I'll debug a Cloudfront deploy.

I've found a bug in an enterprise app,
By strace'ing through its coremem dump.
I've patched up live a signal break trap
To fix up a subroutine jump.

Storage and routers and syscalls are fine,
Also VOIP service call groups.
Mapping, reducing, data refined:
I set up the office Hadoop.

I am SRE -- I will fix anything:
Deploying it twice in an hour,
Split loads across sites based upon ping,
Google is awed at my powers!

So why am I stumped and lost more or less
With a bug that I cannot mend?
Please help if you can, because I confess
Yes, docker is crashing again.

Wednesday 25 November 2015

When Medusa went on Chatroulette

And now for something completely different: my poetry collection is now available for pre-order from Amazon.

They are all happy and cheery poems about technology, geekdom, nuclear physics, time travel, first contact and all the really important things in life. There are no soppy love poems, and no heart-felt tales of loss. These poems will make you smile and laugh and then ponder for a little while if you feel like it. At the very least, if you're a geek-nerd kind of person, they will lighten up your day.

It's a nice Christmas present for the geek in your family. Or a colleague. Or you can treat yourself.
When Medusa went on Chatroulette

Pre-order link: Amazon.

Warning: these poems are totally unsuited to liberal arts majors as they won't understand most of them. Recommended for science and sci-fi fans only.

Send me your HP Data Protector support contract renewal

I've made some arrangements with HPE about support contracts for Data Protector. In short, if you are about to renew your support contract -- or you think it is due soon -- send me an email (gregb@ifost.org.au) and I'll look at it.

In the past, I've often worked with customers to explain what it is that the support contract covers, identify whether there are any unnecessary line items (quite often there are) and suggest what to renew and what not to renew.

What's changed is that I can now negotiate slightly better prices on your behalf as well. So even if your contract is perfect, I should still be able to save you some money.


Greg Baker is an independent consultant who happens to do a lot of work on HP Data Protector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's Data Protector pages at http://www.ifost.org.au/dataprotector, or visit the online store for Data Protector products, licenses and renewals at http://store.data-protector.net/

VMware ESX 6.0 bug with CBT

VMware have announced another CBT problem. Just a reminder: this is not a problem that HPE can do anything about in Data Protector -- it's a problem with the APIs that VMware have supplied for HPE to use.

If you are doing VEAgent backups of your VMware environment (which is quite common), and you have any incrementals scheduled (also quite common), and you are running ESX 6 (as lots of people are), and you are using CBT (which you really, really would want to do normally)... then you should be aware that (yet again) VMware have announced that your backups could well be painfully broken.

Here's VMware's KB article:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2136854

There are several solutions:

  • Only do full backups. Hmm, that's a lot of data. Probably OK if you are going to a StoreOnce dedupe, but that's going to turn into a lot more tape.
  • Turn off CBT. Ouch, that's going to hurt performance.
  • Downgrade to ESX 5.5. I don't see anyone doing that.
  • Use the Data Protector disk agent and automated disaster recovery module. This is actually cheaper (no extension licenses required!) and gets you both a file-level backup and the ability to restore a virtual machine from nothing. I recommend this as a better approach generally, but particularly now, when we can't trust our VM-level backups.
  • Apply the patch that VMware has now released.
Less easy solutions, but things to think about:
  • Migrate all your virtual machines to Amazon machine images. (Or Google, or Azure. Pity it can't be HP any more). It's inevitable -- eventually -- that the economies of scale of the large cloud providers will overtake your ability to run things in your own data centre. So why not start planning for it now?
  • Use a different virtualisation solution. This is not the first time that VMware have announced "by the way, all backups are broken". I suspect it won't be the last time either. KVM is very mature now and it's also free. Xen is in good shape too. Virtualisation technology is no longer cutting edge -- it's commoditised now. So why not pay commodity prices?

Greg Baker is an independent consultant who happens to do a lot of work on HP Data Protector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's Data Protector pages at http://www.ifost.org.au/dataprotector, or visit the online store for Data Protector products, licenses and renewals at http://store.data-protector.net/


Tuesday 24 November 2015

Three rules of thumb that say "this job is going to take ages"

As you might know, I've written a robot called AEED which looks at work tickets and predicts how long each job is going to take. I designed it to solve the black-hole problem of service desks: instead of "we'll get back to you", it's "your request will take around 4 days". If you happen to run JIRA in the Atlassian cloud (here's the marketplace link) or HP Service Manager, it's definitely worth a look.

As it turns out, it's also giving surprisingly good predictions for software development and other kinds of projects too.

Anyway, some fun: now I've got some comprehensive data across a good number of organisations.

Rule of Thumb #1: Each hand-off adds about a week. 

Initially I didn't believe this: whenever I've done sysadmin work, if someone assigned a ticket to me that wasn't something I could do anything about, I would just pass it on to someone more appropriate as soon as I saw it -- a few minutes at most.

But that's not quite it, is it? The ticket sits in your queue for a while before you get to it. Then you second-guess yourself, wondering whether it really might be something you are supposed to handle, and waste a bit of time investigating, during which you will get interrupted by something higher priority, and so on. Then you might need some time to research who it is supposed to go to.

I've seen numbers as low as 4 days per re-assignment in some organisations, stretching up to two weeks for others.

What percentage of tickets that have been re-assigned in the past are correctly re-assigned this time?

Whenever an organisation gets bigger than Dunbar's Number, the mis-assignment rate starts shooting up (tickets can get re-assigned absurdly many times, as in the chart above). Which makes sense: no longer does everyone know what everyone else is doing. (I've got a long-ish presentation about this.)

Implications: 

  • Do you know exactly who is going to do this task? If not, add a week at least.
  • Is your organisation so large that it takes time just to find the right person? If so, sign up for the next beta of AEED.


Rule of Thumb #2: Meetings slow things down.

I didn't expect that just having the word "MEETING" appear in a JIRA ticket would be a signal for a long duration. But wow, does it ever make a difference: 8-10 standard deviations of difference in how long the ticket will take!

The actual effect is often only a few days extra, but it's a very, very definite effect.

Some people might interpret this as saying "don't have meetings, meetings are bad". I don't think that's what is going on here: if a customer or tech writing a work ticket mentions a meeting, it probably means the issue really can't be resolved without one. If an INTP or INTJ computer geek says that a meeting is required, it's very unlikely that they are having a meeting for the sake of having a meeting.

But, any meeting has to fit into everyone's work schedules, and that can introduce delays.

Implications:

  • Do you need to meet anyone in order to complete this job? Add a few days.
  • This is the kind of thing that AEED tells you. Install it now while we're still in beta and not charging for it.

Rule of Thumb #3: Not much gets done on Wednesday

I was digging for data to show that Friday afternoon is the worst time to raise a helpdesk ticket, because everyone would be slacking off. It turns out not to be the case at all: in all the organisations I've got data for, everyone is more-or-less conscientious. Perhaps they make a Friday-afternoon ticket a top priority for Monday, or otherwise make up for lost time.

But Wednesday? It's not true in every organisation, but it was there for quite a few.

I can only guess why. My guess is that people have less time on Wednesdays because they are called into meetings more. Nobody organises a meeting for a Friday afternoon if they can avoid it; likewise, Monday morning is often avoided. But need to call a meeting on a Wednesday? No problem!

Does anyone have any data on this? Maybe Meekan or x.ai might know, or anyone from the Google Calendar team? Or does anyone have a good explanation for the Wednesday effect?

Implications:

  • Assume you have a 4.5 day week. Pretend that Wednesday is a half day, because that's about all the work you will get done on it.


Greg Baker (gregb@ifost.org.au) is a consultant, author, developer and start-up advisor. His recent projects include a plug-in for Jira Service Desk which lets helpdesk staff tell their users how long a task will take, and a wet-weather information system for school sports.

Friday 13 November 2015

My CMDB book is inappropriate

I was contacted by Apress, who wanted to publish A Better Practices Guide for Populating a CMDB. But after their editorial team reviewed it, it was deemed "inappropriate".

So if I've offended anyone because of my inappropriate writing, please accept my apologies.

Anyway, nearly two years later it's still sitting at #70 on Amazon.com.au in its category. So even if Apress don't want to sell it, and you are interested in buying a highly inappropriate book (PDF, Kindle, etc.), have a look at http://www.ifost.org.au/books, which has links to everything I've written. Here's the spiel:
This guide is an in-depth look at what you should and should not include as configuration items in an IT Services model. Unlike many other books on this topic, this goes into deep technical detail and provides many examples. The first section covers some useful approaches for starting to populate a CMDB with high-level services.
  • What is the least amount of work you can do and still have a valid CMDB? 
  • What are some techniques that you can use to identify what business and technical services you should include? 
  • Can anything be automated to work more efficiently?
The second section covers three common in-house architectures:
  • LAMP stack applications
  • Modern enterprise web applications
  • Relational databases, with some brief notes about other forms of database
The final section details how to model applications delivered through the cloud, and what CI attributes can be useful to record. This section covers the three main types of cloud-delivered application:

  • Software as a service applications, using Google Apps as an example.
  • Platform as a service applications, using Google App Engine as an example.
  • Infrastructure as a service applications, using the Amazon and HP infrastructure.

Australians just dig stuff and grow stuff, and don't do tech. Here's a suggestion from the Future Party

Let's mandate farm-to-consumer tracking. 

That is, as consumers of food we would like to know -- as would approximately 1 billion Chinese -- that the food we're eating has come from a healthy, responsible, sustainable farm. We would like to see a QR code on any packaging that links through to the ingredients, where they were sourced from, and so on. For non-packaged goods (e.g. fruit and vegetables) there could be some other kind of signage to give that information.

That will mean that:
  • Australia will lead the world in trustworthy food supplies, which will be a very, very big deal in the future.
  • There will be plenty of fodder for startups to provide the technology to make it happen.
  • Australian food will be just a little less competitive in the world market, easing the pressure on the dollar, which will be heavily buoyed in the near future by Chinese food purchases.
  • Farmers will be encouraged to be more proactive with automation technology.
Fighting against the tide of "Australia just digs and grows stuff" is too hard: so we should aim to dig and grow stuff better than anyone else. There are fully-automated (robots only) farms that are servicing Woolworths, Coles and Aldi today. There's no reason Australia can't lead the world in FoodTech startups.

Thanks to Markus Pfister.

Wednesday 11 November 2015

An Atlassian story

A little while back I was talking with two Daves.
  • Dave #1 is the CIO of a college / micro-university
  • Dave #2 is on the board of a small airline. 
Because I tend to get involved in non-traditional projects, they often ask me what I'm working on, probably out of amusement more than anything else. At the time I was building a service catalog for Atlassian (and now I have an awesome plug-in on their marketplace).

Neither had any idea of who Atlassian is or what it does, which was no surprise to me. I don't quite understand why an Australian company with $200m+ in revenue and a market cap in the billions, which isn't a miner, telco or bank, isn't memorable.

Still, the tools Atlassian makes are mostly used by software developers, and my circle of friends and acquaintances doesn't include many coders so "Atlassian makes the software to help people make software" isn't a good way of describing what they do.

Other than a brief stint at Google, I haven't been a full-time employee or even long-term contractor in any normal company this century, so instead I talked about what's unusual and different at Atlassian based on all the other organisations I've worked with. I talked about how the very common assumptions about the technologies to co-ordinate a business are quite different here.

How each type of communication is typically handled: the corporate default (aka "what most companies do") versus what is generally done at Atlassian.

  • Individual-to-individual. Corporate default: email, or talk over coffee. At Atlassian: HipChat @ the individual in a room related to the topic.
  • Individual-to-group. Corporate default: teleconference / webinar, for all matters big or small. At Atlassian: a comment in the HipChat room, or sometimes Google Hangouts and HipChat video conference if it's something long and important.
  • Reporting (project status, financials). Corporate default: Excel document or similar. At Atlassian: Confluence status page or JIRA board.
  • Proposals. Corporate default: Word document or PowerPoint presentation. At Atlassian: Confluence page.
  • Feedback on proposals. Corporate default: private conversation with the person who proposed it, or maybe on another forum page somewhere else on their SharePoint portal. At Atlassian: the discussion in the comments section of the Confluence page.

And yeah, my secret superpower is to be able to narrate HTML tables in speakable form. There was a lot of "on the one hand... on the other hand at Atlassian..."

Note that the Atlassian choices are all public and searchable (in line with being an open company), and the corporate defaults are not. Also, with the Atlassian tools you opt in to the information source; in the traditional company, the sender of the information chooses whom to share it with.

Why is this interesting? Because the Atlassian technologies and the Atlassian way of doing things is an immune system against office politics.

In order to get really nasty office politics, you need an information asymmetry: managers need to be able to withhold information from other managers in order to get pet projects and favourite people promoted, and to drag others down by releasing information at the worst possible moment, when the other party can't prepare for it. (Experience and being middle-aged: you see far too much that you wish you hadn't.) When information is shared only with the people you choose to share it with, that's easy to do.

At Atlassian, it's not quite like that. Sure, there are still arguments and disagreements. Sometimes there is jostling for position and disagreement about direction, and there are people and projects we want to see happen. And not every bad idea dies as quickly as it should. But because other interested parties can opt in as required for their needs, it's a very level playing field and ---

And that's where Dave #1 cut me off, shook my hand and said, "Wow. Thank you. That's exactly it. That's EXACTLY it."

Dave #2 just nodded sagely. "Yes," he said, and then again more slowly: "yes."

Summary: Atlassian sell office-politics treatments.

Greg Baker (gregb@ifost.org.au) is a consultant, author, developer and start-up advisor. His recent projects include a plug-in for Jira Service Desk which lets helpdesk staff tell their users how long a task will take, and a wet-weather information system for school sports.

Monday 9 November 2015

Checking StoreOnce stores on Windows

In Data Protector 9.04, I've occasionally encountered a problem where the StoreOnce software store on Windows becomes completely unresponsive.

The error message that you will see in the session log is unpredictable, but it will often look something like this:

[Major] From: BSM@cellmgr.ifost.org.au "IFOST backup"  Time: 9/11/2015 1:18:44 PM
[61:3003]      Lost connection to B2D gateway named "DataCentrePrimary"
    on host storeonce.ifost.org.au.
    Ipc subsystem reports: "IPC Read Error
    System error: [10054] Connection reset by peer
"

One way of detecting the problem is that the command "StoreOnceSoftware --list_stores" hangs.

I created the following three batch files and scheduled CheckStoreOnceStatus.cmd to run once per hour:

CheckStoreOnceStatus.cmd

REM Scheduled to run hourly. Spawn the controller and the child in the
REM background, then pause so that both have time to finish.
start /b CheckStoreOnceStatusController.cmd
start /b CheckStoreOnceStatusChild.cmd
REM waitfor /t with a signal name that nothing ever sends acts as a sleep:
REM it simply times out after 600 seconds.
waitfor /t 600 fiveminutes
exit /b

CheckStoreOnceStatusChild.cmd

REM If StoreOnceSoftware is healthy, --list_stores returns promptly and the
REM StoreOnceOK signal is sent. If it hangs, the signal never arrives.
StoreOnceSoftware --list_stores
WAITFOR /SI StoreOnceOK

CheckStoreOnceStatusController.cmd

REM Wait up to 30 seconds for the child to signal that StoreOnceSoftware
REM responded.
WAITFOR /T 30 StoreOnceOK && (
  REM StoreOnce OK
  exit /b
)
REM No signal within 30 seconds: StoreOnceSoftware has hung. Restart it,
REM giving the service two minutes to stop cleanly.
net stop StoreOnceSoftware
waitfor /t 120 GiveItTime
net start StoreOnceSoftware
exit /b


Actually, I also added a call out to blat to send an email after the net start command.
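
For reference, scheduling the hourly run can be done with the built-in Windows task scheduler; something along these lines (the path is illustrative):

schtasks /create /tn "CheckStoreOnceStatus" /tr "C:\scripts\CheckStoreOnceStatus.cmd" /sc hourly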

So, CheckStoreOnceStatus spawns off *Controller, which waits up to 30 seconds for a signal from *Child; the child sends that signal as soon as it has been able to finish StoreOnceSoftware --list_stores.

Greg Baker is an independent consultant who happens to do a lot of work on HP Data Protector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's Data Protector pages at http://www.ifost.org.au/dataprotector

Sunday 8 November 2015

How to launch without venture capital

  • 99.8% of all startups never raise any external capital
  • Whether you raise capital or not makes absolutely no difference to whether the startup is successful.
(Both facts from the QUT CAUSEE study).

It was pointed out to me recently that, after 16 years of running businesses and having launched quite a few start-up technology products in that time -- without ever raising capital -- I should try to package up my approach for other people to use.

So if you want a 2 hour Skype or Google Hangouts session with me (for AUD300+GST), just get in contact (gregb@ifost.org.au or greg_baker on Skype) and we can arrange a time. What I promise to give you is at least two or three good ideas that might get you going without having to spend stupendous sums of money.


Thursday 5 November 2015

Pre-mortem for almost every cloud-hosted backup provider

I was talking to a vendor who wanted me to partner with them on their cloud-hosted backup solution. I looked at their pricing and their offerings (managed storage in the cloud, DR servers in the cloud) and then compared what they could do with Amazon. Since only Google and Microsoft can compete with Amazon's scale (and then, only just), the vendor's offerings were way out of line with current market rates.

I suggested that they had three options:

  • They could make their product work nicely with Amazon cloud (i.e. backup to S3, manage the migration to and from Glacier). A variation would be to do this with Google Nearline Storage, which is probably a better solution, even if it doesn't have the same name recognition. They will lose a lot of revenue because there used to be margin in online storage -- but there isn't any more.
  • They could migrate their entire customer base to an open source option (Bacula or BareOS). Since their customer base is going to be cannibalised anyway, they might as well make some money from the consulting effort migrating the customer somewhere else. Open source backup can still compete against cloud offerings in a couple of different ways.
  • They could become roadkill.
Fortunately, they do have other sources of revenue, so hopefully they will be able to carry on. But for other specialist cloud-backup companies? I'm not sure that many of them have a viable future.


Greg Baker is an independent consultant who happens to do a lot of work on HP Data Protector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's Data Protector pages at http://www.ifost.org.au/dataprotector

Thursday 29 October 2015

Ideas to kick start innovative thinking

A school friend asked me last night about how to build a unicorn-scale business from a relatively small amount of capital. Essentially it's a question of innovation -- given the worst problem in your industry, or the area that most begs for disruption, how do you kick start your brain to think of better solutions? 

So far, I've come up with six thought triggers.
  • The etherealisation of devices. For example, 20 years ago, medical devices consisted of very expensive sensors with a bit of dumb logic to give a result. Today, it's a race to the bottom on dumb sensors with more and more sophisticated logic. What's the cheapest thing you could use to take a measurement?

  • The availability of motion and position sensors. How would it help if you knew exactly where data was entered? Or if you knew how your physical product moved in real-time?

  • Explore some of the demonstrations on http://www.alchemyapi.com/products/demo (now part of IBM Watson). In particular, look at their ability to do sentiment analysis -- to give you a summary of how people are feeling about topics. What could you do with a summary of everything your customers say about you?
  • Most people don't realise how good speech recognition has become as their only experience is Siri (which is very, very good) and they don't realise how general it can be. Try out a demo of Dragon Dictate, or use "Ok, Google" on an Android phone. What would you do differently if you could run your business on spoken information?
  • Get a feel for how conversational bots are working. Look at this demo of an agent that schedules meetings: https://meekan.com/hipchat/ . They have some competitors, such as Amy from http://x.ai/ . Do the interactions you have with your customers follow some formulaic approach? How would you do things differently if the initial engagement with your customers was handled by a bot?
  • There are generational differences in the way we run businesses (here's a blog post about a very interesting company that I did some work for: http://blog.ifost.org.au/2015/04/gen-y-businesses.html). What are the assumptions you make because you expect to interact via email?
Greg Baker (gregb@ifost.org.au) has worked with numerous startups -- http://www.ifost.org.au/startups/ for a short list -- as a developer, manager and consultant. He also runs mentoring sessions helping entrepreneurs wanting to bootstrap their businesses without venture capital or angel investments.

MS-SQL server not backing up

When Data Protector tries to back up a SQL server, the SQL agent contacts the cell manager to get the details of the integration (e.g. what username to use, and whether to use SQL authentication or Windows authentication).

Today I was diagnosing an error message that I never would have expected to see on a MS-SQL server:

Cannot obtain Cell Manager host. Check the /etc/opt/omni/client/cell_server file and permissions of /etc/resolv.conf file.

I'm not exactly sure how I'm supposed to check /etc/resolv.conf on a Windows system. Maybe C:\Windows\system32\drivers\etc\resolv.conf?

Needless to say, in the agent's extreme confusion, it then followed this with:

Cannot initialize OB2BAR Services ([12:1602] Cannot access the Cell Manager system. (inet is not responding)
The Cell Manager host is not reachable or is not up and running
It's not exactly that inet is not responding -- the agent didn't even know which cell manager to connect to.
We probably would have searched for hours trying to find the cause, but I proposed upgrading the client to match the cell manager version (good practice anyway). The upgrade wouldn't proceed; we hit the following error message:
"Already part of another cell: cellmgr.ifost.org.au ." Note the extra space!
Staring very carefully at the following registry key, I confirmed that indeed, there was a space at the end of the name where it had been manually edited. I removed the space.
HKEY_LOCAL_MACHINE\SOFTWARE\Hewlett-Packard\OpenView\OmniBackII\Site\CellServer
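
If you'd rather check it from a command prompt than click through regedit, reg query will show the stored value, trailing space and all (depending on your Data Protector version the key may live under a slightly different path):

reg query "HKLM\SOFTWARE\Hewlett-Packard\OpenView\OmniBackII\Site" /v CellServer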

Normally the "already part of another cell" error message means exactly what it says (because it won't match up with your cell manager's name); that there's a short name instead of a FQDN; or some sort of problem like that.

Greg Baker is an independent consultant who happens to do a lot of work on HP Data Protector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's Data Protector pages at http://www.ifost.org.au/dataprotector

Most extreme Ticket to Ride "short journey"

Over the holidays I was playing the very enjoyable board game: Ticket to Ride Europe.

My primary-school-age son was creating the strangest combination of railways. He held dozens of cards in his hand through most of the game, and we were vaguely wondering what he was doing and whether he had understood the game.

At the culmination of the game, when everyone's required journeys are unfolded, he proudly announced that he had completed one of his short journeys (Barcelona to Brussels). When we asked him to show us, he walked us through surely the most arduous and inefficient way of doing it. First the passenger goes across Spain and south down the Italian peninsula. Then a ferry run takes them over to Athens. Then north from Athens into Kiev, and finally west from Kiev to Brussels via Denmark.

CentOS Data Protector agent unable to be installed

A customer asked me today for some help with a CentOS server on which the Data Protector agent wouldn't install properly, despite everything looking OK.

The session went like this.

# ./omnisetup.sh -server cellmgr.ifost.org.au -install da,autodr
Cannot access the Cell Manager system (inet is not responding)....

As the cell manager was known to be working, we didn't need to check connectivity to port 5555 on the cell manager. I suggested just running the installation (without the cell manager import first):

# ./omnisetup.sh -install da,autodr

This worked fine. Was the disk agent listening?

# netstat -an | grep 5555
tcp6   0   0   :::5555 :::*    LISTEN

That's odd: why IPv6? The cell manager had IPv6 disabled, so that would certainly have stopped things working.

# grep FLAGS /etc/xinetd.d/omni
FLAGS = IPV6

That's that one explained: use your favourite editor (vi, nano, emacs, gedit...) to set FLAGS = IPV4 if you happen to encounter it. (Don't forget to run service xinetd restart.)
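
If you'd rather script the change than open an editor, a one-liner along these lines should do it (a sketch: check the file first, since the spacing has to match exactly):

# sed -i 's/FLAGS = IPV6/FLAGS = IPV4/' /etc/xinetd.d/omni
# service xinetd restart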

But things still weren't working: CentOS has a host-based firewall. As we didn't have a media agent, the only relevant port is TCP 5555.

# firewall-cmd --add-port 5555/tcp --permanent
# firewall-cmd --reload
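
To confirm that the rule is active, the port should now appear in the output of:

# firewall-cmd --list-ports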

And then everything worked correctly.

Thanks to Glen Thompson for doing most of the work investigating this one!

Greg Baker is an independent consultant who happens to do a lot of work on HP Data Protector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's Data Protector pages at http://www.ifost.org.au/dataprotector

Tuesday 27 October 2015

Vote for me and the robot overlord party!

Well, actually vote for my estimation robot.

Nobody enjoys figuring out how long it's going to take to program a new feature, or close a customer support ticket, or do a step in a project -- so let's hand it over to the robots!

If you run Jira in the cloud, just click here to install it in your instance: https://marketplace.atlassian.com/plugins/au.org.ifost.aae

And if you think that it would be awesome to have better software estimates, and better predictions of how long projects and work will take, then vote!
http://devpost.com/software/automated-estimator-of-effort-and-duration

P.S. Devpost has all sorts of interesting and exciting competitions. It's worth signing up just to get a feel of where the future is headed.

Thursday 22 October 2015

HP gives up against Amazon

So the HP public cloud is no more. I suspect I might have been one of the larger users of it (for a few weeks back in 2012) so let me try to give a serious analysis of what this means. (HP announcement link)

Amazon AWS is currently supply-constrained. They could lower prices to gain more customers, but then they wouldn't be able to service those customers. This is an unusual position to be in, as almost all of us are in industries where the bottleneck to growth is in acquisition, not delivery. So they ease their prices down little-bit-by-little-bit as they resolve their supply constraints.

Eventually, AWS will start to be demand-constrained, and that's when all hell breaks loose, because then AWS can start doing some serious price cutting. I'd peg it for early 2017 at a guess, when suddenly the price cuts start accelerating until the economics for renting from AWS starts to look competitive with buying a server and putting it on a desk unsupported, un-networked and unpowered.

Google and Azure can survive Amazongeddon -- they have the money and it's a market that they definitely want to be in. Google App Engine is still a very cost-effective offering -- my total compute and storage budget leading up to launch day (and including it) for the Automated Estimator of Effort and Duration for Jira was $0.22 -- so much for big-data analysis being expensive! At that level, price comparisons are utterly meaningless, so if that's profitable now (which it probably is), they can keep doing it.

HP have presumably decided that they don't have enough time to build out a solid customer base on the HP public cloud before Amazongeddon. The HP cloud team is betting that customers will want HP software to manage their clouds, and that an HP-backed public cloud is not worth doing. Operations Orchestration makes sense in a cloudy world, for example.

But there is a problem, because for all the talk of "hybrid public-private clouds", either private is cheaper/better/more secure or public is cheaper/better/more secure.

  • If the answer is "private", then we will continue to have internal customer-owned datacentres, and HPE will continue to sell 3PARs, SureStores, Proliants and so on. 
  • If the answer is "public", then after Amazongeddon, HP won't have a hardware business that anyone cares about.

Unfortunately, I believe the answer is "public", as do many, many other people. To say that "private" clouds are cheaper, better and more secure the majority of the time means not only that there are no economies of scale in a big data centre, but that there are diseconomies of scale that are going to appear any moment now from out of nowhere.

This puts HP in the same position as Unisys was in the 1980s-1990s. Customers stopped buying Unisys mainframes, so Unisys had to turn into a services, software and support business. They had a bit of an edge in government and defence at the time, and they worked hard to keep it. I know plenty of people who have had good careers at Unisys, and presumably it's a nice place to work where there is innovation happening. But Unisys in 2015 is not the hallowed place that it was after the Burroughs / Sperry merger.

Without that core of hardware sales on which to stack software sales, Unisys struggled. So too will HP. (And so will Dell, unless Dell decides to take on Amazon... which they could and should.)

I feel sorry for Bill Hilf though, as he has had to lead teams through the collapse of high-end Itanium hardware and now through the failure of the only viable hardware future that HP had.

That said, I'm optimistic about HP Data Protector in particular. There will still be important data to back up and archive. Storing it efficiently for fast recovery will always matter. You can't discard a backup solution until the last of your 7-year-old backups has expired.

I'm hoping that HP will now do three things:

  • Convert the HP cloud object storage device to something that works with S3. Since this feature will be irrelevant in January 2016 if they don't do this, it seems like a no-brainer in order to preserve the R&D investment done so far.
  • Interface into lifecycle management of S3 -- if the "location" of a piece of media is "Glacier", then Data Protector should be able to initiate its re-activation as step 1 of a restore job. Again, this seems a no-brainer if you already are dealing with S3.
  • I'd like to see the Virtual Storage Appliance delivered as an AMI (Amazon machine image). This isn't very difficult. Maybe there could be some fiddling around with licensing where the VSA reported its usage and customers paid by capacity per month, but even that's not really necessary.
If all this happens, then I suspect we'll continue to see HP selling Data Protector for another 30 years. If Data Protector is as useful for customers post-Amazongeddon as it is pre-Amazongeddon, then there would be no particular reason it couldn't pass through this critical tipping point. In fact, since I doubt that BackupExec will handle the transition, Data Protector will probably pick up some market share.

Anyway, what immediate scenarios would this support?
  • Customer A has a small Amazon presence and a large data centre with a StoreOnce system and some tape drives. They would like to deploy a VSA in the same region as their Amazon servers and replicate their data through low-bandwidth links back to their data centre. 
  • Customer B has a somewhat larger Amazon presence. They have Data Protector in their office, and they want to backup their Amazon content to Glacier. 
  • Customer C is closing down their data centre in house and moving their servers into the cloud. They want to take backups of their servers in their data centre and use StoreOnce replication to get them into their cloud where the data is rehydrated.
So if you are customer like A, B or C, feel free to contact to your account manager, suggest that you'd really like Data Protector to support you and see how you go. (Or get in touch with me and I'll collate some answers back to the product team.)


Greg Baker is an independent consultant who happens to do a lot of work on HP Data Protector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's Data Protector pages at http://www.ifost.org.au/dataprotector


Monday 19 October 2015

Data Protector reporting with Navigator through a firewall

Data Protector has a number of built-in reports, which you can email, put on an intranet, pipe through some other command and various things like that. I wrote up the complete list of built-in reports here:
http://blog.ifost.org.au/2015/04/data-protector-built-in-reports.html
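
All of these are driven by the omnirpt command on the cell manager. As a sketch (the report name, timeframe and address here are illustrative; see omnirpt -help for what your version supports):

omnirpt -report list_sessions -timeframe 24 24 -email backup-team@example.com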

HP's strategic direction for reporting appears to be Backup Navigator. It is licensed purely on the capacity of the cells it is reporting on (in other words: how big is a full backup of all the data being backed up? That's the capacity you license).

It produces some nice reports.

I was working with a customer who had had some problems with connectivity on Navigator 9.1.

Their cell managers had an omnirc file which limited the number of ports open for connections.

OB2PORTRANGESPEC=CRS:20495-20499

Having only five ports open was enough for them, so we organised to have ports 20495-20499 opened from their Navigator server to their cell manager.

As it turns out, this is not enough: you also need connectivity open from the cell manager back to the Navigator server. This isn't documented anywhere, and there's no error report from Navigator about it.
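
A quick way to verify both paths before an upgrade window is a simple port probe from each side. A sketch using nc (the hostname is illustrative; run this from the Navigator server towards the cell manager, then repeat in the other direction against whatever ports your Navigator server listens on):

for p in $(seq 20495 20499); do nc -z -w 3 cellmgr.ifost.org.au $p && echo "port $p reachable"; done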

This problem goes away somewhat in Navigator 9.21 and 9.3 because you can do agent-based push. This is where you run a program on your cell manager which connects to Navigator on port 443 (HTTPS) and uploads the information that Navigator needs.

This solves the problem in two ways:

  • With the new agent model, there's no need to open anything from the Navigator server to the cell manager, so you can put the Navigator server in a quite isolated network.
  • It's entirely possible to run the Navigator server in the cloud (HPE offer a three month trial) and have your Data Protector reporting handled by a third party.

Greg Baker is an independent consultant who happens to do a lot of work on HP Data Protector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's Data Protector pages at http://www.ifost.org.au/dataprotector

Thursday 1 October 2015

Are you having problems with estimates for projects and tasks?

I've developed a plug-in for Jira that uses machine learning to predict how long it will take for a ticket to be closed, and how much work effort will be required. Based on my current data, I'm getting 50% of tickets predicted correctly within a factor of 2, which I think is better than most human beings can do.

I'm looking for beta testers to confirm that it is all working as it should. If you are currently using Jira in the Atlassian Cloud (Agile, Service Desk or just vanilla), this will be a one-click add-on.
Contact me (gregb@ifost.org.au) if you are interested in trying it out.

Friday 25 September 2015

The cheapest possible Data Protector support contract

Obviously, HP would like everyone to keep using Data Protector for as long as possible, and to maintain a full support contract on everything.

But let's say you've decided you don't want to keep using Data Protector. First up, I'm sure HP would love to hear from you to understand your issues. If you can't get the ear of anyone at HP, tell me and I can pass on feedback.

Now, you will end up in a situation where you have a history of backups that you might need to restore from; but without a support contract, if something goes wrong you don't have anywhere to turn.

So what's the cheapest way that you could keep a supported Data Protector environment? I think this would be with a copy of Data Protector single-server edition (B7030BAE). Last time I checked, it was around AUD550, which included a one-year support contract. The support-contract part of this is only about AUD100 per year.

This gets you access to any released patches and also an escalation path in case one of those restores doesn't work.

There are a few limitations to single-server edition, but if you can live with them, this might be the way to go.

Greg Baker is an independent consultant who happens to do a lot of work on HP DataProtector. He is the author of the only published books on HP Data Protector (http://www.ifost.org.au/books/#dp). He works with HP and HP partner companies to solve the hardest big-data problems (especially around backup). See more at IFOST's DataProtector pages at http://www.ifost.org.au/dataprotector