
Showing posts with label Atlassian. Show all posts

Wednesday, 3 May 2017

Chatbots that get smarter at #AtlassianSummit

I’m in Barcelona at the moment, at AtlasCamp giving a talk about helpdesk chatbots that get smarter.

It’s easy to write a dumb chatbot. It’s much harder to write a smart one that responds sensibly to everything you ask it. A famous example: if a human mentions Harrison Ford, they are probably not talking about a car.
There are three different kinds of chatbot, and they are each progressively harder to get right.
  1. The simplest chatbots are just a convenient command-line interface: in Slack or Hipchat, these are usually “slash” commands. Developers will set up a program that wakes up whenever “/build” is entered into a room, pulls the latest sources out of git, compiles them and shows the output of the unit tests. Since this is a very narrow domain, it’s easy to get right, and as it is for the benefit of programmers, it’s always cost-effective to spend some programmer time improving anything that isn’t any good.
  2. The next simplest are ordering bots, that control the conversation by never letting the user deviate from the approved conversational path. If you are ordering a pizza, the bot can ask you questions about toppings and sizes until it has everything it needs. Essentially this is just a replacement for a web form with some fields, but in certain markets (e.g. China) where there are near-universal chat platforms this can be quite convenient.
  3. The hardest are bots that don’t get to control the conversation, and where the user might ask just about anything.
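A minimal sketch of the first kind, in Python; the command name and handler here are illustrative, not any real chat platform's API:

```python
# Minimal sketch of a slash-command bot (the first kind). The command
# name and handler are illustrative, not any real chat platform's API.

def run_build():
    # A real bot would pull from git, compile and run the unit tests here.
    return "build started"

COMMANDS = {"/build": run_build}

def handle_message(text):
    """Dispatch a chat message to a slash-command handler, if it has one."""
    words = text.strip().split()
    handler = COMMANDS.get(words[0]) if words else None
    return handler() if handler else None
```

Anything that doesn't start with a registered command is simply ignored, which is exactly why this kind is easy: the bot never has to understand free text.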
Support bots are examples of that last kind: users could ask the helpdesk just about anything, and the support bot needs to respond intelligently.
I did a quick survey and found at least 50 startups trying to write helpdesk bots of various kinds. It’s a lucrative market, because if you can even turn 10% of helpdesk calls into a chat with a bot, that can mean huge staff cost savings. I have a customer with over 150 full-time staff on their servicedesk -- there are millions of dollars of savings to be found.
Unfortunately, nearly every startup I’ve seen has completely failed to meet their objectives, and customers who are happy with their investments in chatbots are actually quite rare.
I’ve seen three traps:
  • Several startups have lacked the courage to believe in their own developers. There’s a belief that Microsoft, Facebook, Amazon, IBM and Google have all the answers, and that if we leverage api.ai or wit.ai or Lex or Watson or whatever they’ve produced this month, there’s just a simple “helpdesk knowledge and personality” layer to put on top of it, like icing on a cake. Fundamentally, this doesn’t work: for very sound commercial reasons the big players are working on technology for bots that replace web forms, and with that bias comes a number of limiting assumptions.
  • A lot of startups (and larger companies) believe that if you just scrape enough data from the intranet -- analyse every article in Confluence for example -- that you will be able to provide exactly the right answer to the user. Others take this further and try to scrape public forums as well. This doesn’t work because firstly, users often can’t explain their problem very well, so there’s not enough information up front even to understand what the user wants; and secondly... have you actually read what IT people put into their knowledge repositories?
  • There are a lot of different things that can go wrong, and a lot of different ways to solve a problem. If you try to make your support chatbot fully autonomous, able to answer anything, you will burn through a lot of cash handling odd little corner cases that may never happen again.
The most promising approach I’ve seen was one taken by a startup that I was working with late last year. When they decided to head in another direction, I bought the source code back off them.
The key idea is this: if our support chatbot can’t answer every question -- as indeed it never will -- then there has to be a way for the chatbot to let a human being respond instead. If a human being does respond, then the chatbot should learn that that is how it should have responded. If the chatbot can learn, then we don’t need to do any up-front programming at all, we can just let the chatbot learn from past conversations. Or even have the chatbot be completely naive when it is first turned on.
The challenge is that in a support chat room, it’s often hard to disentangle what each answer from the support team is referring to. There are some techniques that I’ve implemented (e.g. disentangling based on temporal proximity, @ mentions and so on). A conservative approach is to have a separate bot-training room where only cleanly prepared conversations happen. Taking this approach means we replace expensive, highly-paid programmers writing code to handle conversations with an intern writing some text chats.
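The temporal-proximity idea can be sketched in a few lines. This is a simplified illustration rather than the code I actually shipped, and the five-minute threshold is an arbitrary assumption:

```python
# Illustrative sketch of disentangling by temporal proximity: messages
# separated by more than `gap` seconds start a new conversation. The
# five-minute threshold is an arbitrary assumption, not a tuned value.

def split_by_gap(messages, gap=300):
    """messages: (timestamp_seconds, speaker, text) tuples, sorted by time."""
    conversations, current, last_t = [], [], None
    for t, speaker, text in messages:
        if last_t is not None and t - last_t > gap:
            conversations.append(current)
            current = []
        current.append((t, speaker, text))
        last_t = t
    if current:
        conversations.append(current)
    return conversations
```

A real implementation would combine this with @-mention tracking, since two conversations can easily interleave within the same five minutes.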
It’s actually not that hard to find an intern who just wants to spend all day hanging out in chat rooms.
Whatever approach you take, you will end up with a corpus of conversations: lots of examples of users asking something, getting a response from support, clarifying what they want, and then getting an answer.
Predicting the appropriate thing to say next becomes a machine learning problem: given a new, otherwise unseen data blob, predict which category it belongs to. The data blobs are all the things that have been said so far in the dialog, and the category is whatever it is that a human support desk agent is most likely to have said as a response.
There is a rich mine of research articles and a lot of well-understood best practice about how to do machine learning with natural language text. Good solutions have been found in support vector machines, LSTM architectures for deep neural networks, and word2vec embeddings of sentences.
It turns out that techniques from the 1960s work well enough that you can code up a solution in a few hours. I used a bag-of-words model combined with logistic regression and I get quite acceptable results. (At this point, almost any serious data scientist or AI guru should rightly be snickering in the background, but bear with me.)
The bag-of-words model says that when a user asks something, you can ignore the structure and grammar of what they’ve written and just focus on key words. If a user mentions “password” you probably don’t even need to know the rest of the sentence: you know what sort of support call this is. If they mention “Windows” the likely next response is almost always “have you tried rebooting it yet?”
If you speak a language with 70,000 different words (in all their variations, including acronyms), then each message you type in a chat gets turned into an array of 70,000 elements, most of which are zeroes, with a few ones in it corresponding to the words you happen to have used in that message.
It’s rare that the first thing a support agent says is the complete and total solution to a problem. So I added a “memory” for the user and the agent. What did the user say before the last thing that they said? I implemented this by exponential decay. If your “memory” vector was x and the last thing you said was y then when you say z I’ll update the memory vector to (x/2 + y/2). Then after your next message, it will become (x/4 + y/4 + z/2). Little by little the things you said a while ago become less important in predicting what comes next.
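A minimal sketch of that memory update, assuming each message has already been turned into a bag-of-words vector:

```python
# Sketch of the exponential-decay memory: after each message, the memory
# becomes the average of the old memory and the previous message's
# bag-of-words vector, so earlier messages fade away by halves.

def update_memory(memory, last_message_vec):
    return [(m + v) / 2 for m, v in zip(memory, last_message_vec)]
```

Applying it twice reproduces the arithmetic above: starting from x and folding in y and then z leaves the memory at x/4 + y/4 + z/2.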
Combining this with logistic regression, essentially you assign a score for how strong each word is in each context as a predictor. The word “password” appearing in your last message would score highly for a response for a password reset, but the word “Windows” would be a very weak predictor for a response about a password reset. Seeing the word “Linux” even in your history would be a negative strength predictor for “have you tried rebooting it yet” because it would be very rare for a human being to have given that response.
You train the logistic regressor on your existing corpus of data, and it calculates the matrix of strengths. It’s a big matrix: 70,000 words in four different places (the last thing the user said, the last thing the support agent said, the user’s memory, and the support agent’s memory) gives you 280,000 columns, and each step of each dialog you train it on (which may be thousands of conversations) is a row.
But that’s OK, it’s a very sparse matrix and modern computers can train a logistic regressor on gigabytes of data without needing any special hardware. It’s a problem that has been well studied since at least the 1970s and there are plenty of libraries to implement it efficiently and well.
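To make the idea concrete, here is a toy pure-Python version of the training step: a logistic regressor learning per-word strengths from a tiny hand-made corpus. A real system would use a sparse-matrix library and an off-the-shelf regressor rather than this naive gradient descent:

```python
import math

# Toy version of the training step: logistic regression fitted with
# plain stochastic gradient descent. Real deployments would use a
# sparse-matrix library and a production-grade solver instead.

def train_logistic(rows, labels, epochs=200, lr=0.5):
    """Fit per-word weights ("strengths") and a bias term."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            g = p - y                        # gradient of the log-loss
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict_proba(w, b, x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy corpus: column 0 = "password" was mentioned, column 1 = "windows".
rows = [[1, 0], [0, 1], [1, 0], [0, 1]]
labels = [1, 0, 1, 0]   # 1 = the agent's reply was a password reset
weights, bias = train_logistic(rows, labels)
```

After training, the weight on "password" is strongly positive for the password-reset response and the weight on "windows" is strongly negative, which is exactly the matrix of strengths described above, in miniature.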
And that is all you have to do, to make a surprisingly successful chatbot. You can tweak how confident the chatbot needs to be before it speaks up (e.g. don’t say anything unless you are 95% confident that you will respond the way that a support agent will). You can dump out the matrix of strengths to see why the chatbot chose to give an answer when it gets it wrong. If it needs to learn something more or gets it wrong, you can just give it another example to work with.
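The confidence threshold can be as simple as a gate over the predicted probabilities; the numbers and responses here are made up, not output from a real model:

```python
# Sketch of the confidence gate described above: stay silent unless the
# best candidate response clears the threshold. The scores and responses
# are illustrative, not output from a real model.

def choose_response(scores, threshold=0.95):
    """scores: dict mapping candidate responses to predicted probabilities."""
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

Returning None means the bot says nothing and a human answers instead, which then becomes a fresh training example.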
It’s a much cheaper approach than hiring a team of developers and data scientists, it’s much safer than relying on any here-today-gone-tomorrow AI startup, and it’s easier to support than a system that calls web APIs run by a big name vendor.
If you come along to my talk on Friday you can see me put together the whole system on stage in under 45 minutes.

Wednesday, 7 December 2016

Artificial Intelligence (#AI) development in Sydney around #Atlassian and #JIRA

Well, it's boastful to say it, but we just received a nice little certificate from Craig Laundy (assistant federal minister for innovation) and Victor Dominello (NSW state minister for innovation). It says "Best Industry Application of AI/Cognitive" for the Automated Estimator of Effort and Duration.

Actually, we were just highly commended, and it was stratejos that won, but what is interesting about all this is: the whole AI/Cognitive category just went to artificial intelligence JIRA plug-ins.

Firstly, Atlassian has won the enterprise. When 100% of the top-tier startups targeting large organisations are developing for your platform exclusively, it's only a matter of time.

Secondly, AI is hot technology in Sydney with political capital. We think of Silicon Valley as being the centre of the universe for this, but I've never seen US Federal and Californian state senators getting together to express their commitment as we saw in this award.

Thirdly, this means you really should try out AEED: http://www.queckt.com/

Tuesday, 1 March 2016

Draining the meeting bogs and how not to suffer from email overload (part 4)


This is the fourth (and probably last) in my series of blog posts about how we unknowingly often let our IT determine how we communicate, and what to do about it.

Teams need to communicate at three different speeds: Tomes, Task Tracking and Information Ping-pong. When we don’t have the right IT support for all three, things go wrong. This week I'm writing about Task Tracking.

I lose my luggage a lot. Domestic, international, first world, developing nations; I’ve had travel issues in more places than most people get to in their lives. I’ve even had airlines locate my lost luggage and then lose it again in the process of delivering it back to me.

Two of the more recent lost luggage events stood out by how they were handled:

  • On one occasion, a senior member of staff apologised to me in person, and promised that he would have his staff on to it immediately; 
  • On the other, a bored probably-recent graduate took my details, mumbled the official airline apology and gave me a reference number in case I had a query about where it was up to.

I felt much more confident about the mumble from the junior than the eloquence of the senior.

Why?

Because there was a reference number. The reference number told me:

  • There was a process that was being followed. It might be a very ad-hoc process; it might often fail, but that's clearly better than the alternative.
  • If the team leader is micro-managing the tasks of their staff, it's because they are choosing to do so. The process must have been done a few times before, so the staff know what to do.
  • The team leader will probably not be the bottleneck on stuff getting done. We've all seen it, or been there: the project manager who is painfully overworked while team members are idle (often unknown to the project manager).
  • The process will therefore scale somewhat. Staff can be empowered enough that the process could scale without a single-person bottleneck.
  • It told me that there was a team of people who work on finding lost luggage, and that someone in that team would be working on it.
  • If a whole flight-load of luggage had been lost, scaling up resources to help wouldn't have been impossible.
  •  It didn’t matter to me who was working on it; if I was wondering whether any work had been done, I would have logged into their portal and looked it up, and wasted no-one’s time or effort but my own.

Having the manager’s name and assurance gave me no such confidence, but instead the sure knowledge that if I called the manager, the manager would have to get back to me (i.e. go and query the staff member he had assigned). This process would have consumed staff time and effort; which meant that if a large number of people had lost their luggage and were all asking the same question, that there would be gridlock and thrashing (so many interruptions that nothing gets completed).

In your team, who is busier? The team leader / project manager, or the technical staff doing the work?

If someone needs something from your team, how is that work tracked? Do they have someone senior that they call, or can they find out what they want to know wasting no-one's time but their own?

In a sensible, well-run organisation the staff with more seniority should have more idle time than the staff who report to them. Otherwise, they are the bottleneck holding back the organisation's efficiency.

If this is happening, then something is wrong with the way ticket tracking is being done.

The most common ticket software systems tend to be helpdesks or service desks, because the scale of such organisations usually makes them essential. In software development, it is almost impossible to live without them once the software has reached the complexity of even a minimal viable product.

But ticket-tracking can be done with almost any technical team inside or outside of IT. Here's my list of the minimum requirements for a ticket-tracking system to be useful:

DUAL FACING Ticket tracking systems convey two important pieces of information: that something is being worked on (or that it isn’t) and what tasks people are working on.

On the one hand, the system needs to be easy for end-users to see where their request is up to, which is why tracking tasks in a private spreadsheet doesn't work.

The ticketing system should automate messages to requesters, create follow-up surveys when the ticket is resolved, and so on.

On the other hand, the ticketing system needs to be fast and efficient for staff to update, or else staff will batch their updates in a large chunk at the end of the day or the end of the week. It also needs to give management reporting and a high-level overview.

ENUMERATED You want discussions to be about "LTS-203" not "the project to update the website branding" so that everyone is on the same page. The tracking system has to provide a short code that you can say over the telephone or in a conversation, and that usually means some kind of short string of letters (perhaps three or four, or a pronounceable syllable) followed by a number. If that number is 7 digits long, you have a problem, because no-one will remember it, nor be able to say it.
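The short-code scheme is trivial to implement; a sketch, with an illustrative prefix:

```python
# Sketch of the ENUMERATED requirement above: a per-project prefix plus
# a sequential number gives a short code like "LTS-203" that can be said
# aloud. The prefixes here are illustrative.

counters = {}

def next_ticket_key(prefix):
    counters[prefix] = counters.get(prefix, 0) + 1
    return f"{prefix}-{counters[prefix]}"
```

Keeping the counter per-prefix is what stops the numbers ballooning into the unmemorable 7-digit range for any one project.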

EMBEDDABLE Whatever you are using for your tomes and reporting, you want to be able to embed your ticket information into it, and have the status there automatically update. This makes project meetings smoother and more efficient, because you can embed the ticket into the minutes and quickly glance back to last week’s meeting notes to see what needs to be reviewed. If software projects are involved, then being able to embed ticket references into the revision control system is very helpful.

UNIVERSAL If the entire organisation can work off the same ticket tracking system, that is ideal. One client I worked with had numerous $100,000+ projects being blocked -- unable to be completed -- because another (unrelated) department had delayed some logistics for efficiency. It required months of investigation to uncover -- which should have been possible to identify by clicking through inter-team task dependencies.

ADAPTABLE In order to be universal, the workflow needs to be customisable per team and often per project. For some teams, To-Do, In Progress and Done are sufficient to describe where the work is up to. For others, there can be a 20-step process involving multiple review points. ITIL projects often end up clunky and barely useable because the One Universal Process for all Incidents is really only necessary for a handful of teams, and the rest are forced to follow it.

LOCK-FREE When a project is urgent you will have more than one person updating a ticket at the same time. This isn't the 1980s any more: it's silly and inefficient for one user to lock a ticket and leave someone else idle, unable to write. Time gets lost, and more often than not, the update gets lost as well.

PREDICTIVE We live in an era of deep-dive data mining. It's no longer acceptable to say to a customer or user "we have no idea how long this is going to take" any more than an advertising company could say "we don't know how much money to spend on your campaign". And yet, I still see helpdesks logging tickets and giving no indication to their user base of when to expect the ticket to be resolved. At the very least, make sure your ticketing system uses my three rules of thumb. Or better still, make sure it works with my Automated Estimator of Effort and Duration to get the best real-time predictions.




Those seem to be the seven most important criteria.

  • Spreadsheets don't work, and yet I still see them used in almost every organisation.
  • Most startups use Jira or Basecamp. Basecamp also includes capabilities for Tomes and Information Ping Pong. 
  • Best Practical's RT is the most mature of the open source tools (even though it doesn't meet some criteria). Trac is another commonly-used open source tool, particularly when it is bundled with software repository hosting.
  • Large enterprises often use HPE Service Manager (which is less popular than it was in the past and is being replaced by Service Anywhere), ServiceNow (who took a lot of HPE's marketshare) and BMC Remedy. They are generally 10x - 100x the price of Jira but are designed to work better in highly siloed organisations.

Be aware that if the nature of someone’s job is that they will work on the same problem for months or years -- for example, a scientific researcher -- there’s probably little value in ticket tracking because there would be so little information to put in there. Likewise, if someone’s job is to do hundreds of tasks per day, then any ticket tracking will have to be automated by inference from the actions or else the overhead of the tracking system might make it impractical.




I'm hoping to put this (and many other thoughts) together in a book (current working title: "Bimodal, Trimodal, Devops and Tossing it over the Fence: a better practices guide to supporting software") -- sign up for updates about the book here:  http://eepurl.com/bMYBC5

If you like this series -- and care about making organisations run better with better tools -- you'll probably find my automated estimator of effort and duration very interesting.

Greg Baker (gregb@ifost.org.au) is a consultant, author, developer and start-up advisor. His recent projects include a plug-in for Jira Service Desk which lets helpdesk staff tell their users how long a task will take and a wet-weather information system for school sports.

Tuesday, 23 February 2016

Draining the meeting bogs and how not to suffer from email overload (part 3)

This is the third in my series of blog posts about how we unknowingly often let our IT determine how we communicate, and what to do about it.

Teams need to communicate at three different speeds: Tomes, Task Tracking and Information Ping-pong. When we don't have the right IT support for all three, things go wrong.

In this post, I'll talk about team communication by Information Ping-Pong. Information Ping-Pong is that rapid communication that makes your job efficient. You ask the expert a question and you get an answer back immediately, because that's their area and they know all about it.

It's great: you can stay in the flow and get much, much more done. It's the grail of an efficient organisation: using the assembled team of experts to the full.

Unfortunately, what I see in most organisations is that they try to use email for this.

It doesn't work.

Occasionally the expert might respond to you quite quickly, but there can be long delays for no obvious reason to you. You can't see what they are doing -- are they handling a dozen other queries at the same time? And worse: it is just contributing to everyone's over-full inbox.

The only alternative in most organisations is to prepare a list of questions and call a meeting with the relevant expert. This works better than email, but it's hard to schedule a 5-minute meeting if that's all you need. Often the bottom half of the list of prepared questions doesn't make sense in the light of the answers to the first half, and the blocked-out time is simply wasted.

The solution which has worked quite well for many organisations is text-chat, but there are four very important requirements for this to work well.

GROUP FIRST Text chats should be sent to virtual rooms; messages shouldn't be directed to an individual. If you are initiating a text-chat to an individual, you are duplicating all the problems of email overload, but also expecting to have priority to interrupt the recipient.

DISTURBED ROLE There needs to be a standard alias (traditionally called "disturbed") for every room. Typically one person gets assigned the "disturbed" role for the day for each team and they will attempt to respond on behalf of the whole team. This leaves the rest of the team free to get on with their work, but still gives the instant-access-to-an-expert that so deeply helps the rest of the organisation. (Large, important teams might need two or more people acting in the disturbed role at a time.)

HISTORY The history of the room should be accessible. This lets non-team members lurk and save on asking the same question that has already been answered three times that day.

BOT-READY Make sure the robots are talking, and plan for them to be listening. If a job is completed, or some event has occurred, or any other "news" can be automatically sent to a room, get a robot integrated into your text chat tool to send it. This saves wasted time for the person performing the "disturbed" role.
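Here's a sketch of what the sending side of that might look like: formatting an automated event as a chat-room message payload. The payload shape and room name are illustrative, not any particular chat product's API:

```python
import json

# Sketch of the "bot-ready" sending side: formatting an automated event
# as a chat-room message payload. The payload shape and room name are
# illustrative, not any particular chat product's API.

def event_to_payload(room, event_type, detail):
    message = f"[{event_type}] {detail}"
    return json.dumps({"room": room, "message": message})
```

In practice this payload would be POSTed to whatever webhook URL the chat tool provides for the room.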

Most text chat tools also have "slash" commands or other ways of directing a question or instruction to a robot. These are evolving into tools that understand natural language and will be one of the most significant and disruptive changes to the way we "do business" over the next ten years.


Skype and Lotus Notes don't do a very good job on any of the requirements listed above. Consumer products (such as WhatsApp) are almost perfectly designed to do the opposite of what's required. WeChat (common in China) stands slightly above in that at least it has an API for bots.

The up-and-coming text chat tool is a program called "Slack", although Atlassian's Hipchat is a little more mature and is better integrated with the Atlassian suite of Confluence and Jira.

Unlike most of the tools I've written about in this series, the choice of text chat tool really has to be done at a company level. It is difficult for a team leader or individual contributor to drive the adoption from the grassroots up; generally it's an IT decision about which tool to use, and then a culture change (from top management) to push its usage. Fortunately, these text chat tools are extraordinarily cheap (the most expensive I've seen is $2 per month per user), and most have some kind of free plan that is quite adequate. Also, there's a good chance that a software development group will already be using Hipchat, which means that adoption can grow organically from a starting base.

Outside of a few startups, text-chat is very rare. And also outside of a few startups, everything takes far longer than you expect it to and inter-team communication is painfully slow. It's not a coincidence. We think this mess is normal, but it's just driven by the software we use to intermediate our communications.




The next post in the series will hopefully be next Tuesday.

I'm hoping to put this (and many other thoughts) together in a book (current working title: "Bimodal, Trimodal, Devops and Tossing it over the Fence: a better practices guide to supporting software") -- sign up for updates about the book here:  http://eepurl.com/bMYBC5

If you like this series -- and care about making organisations run better with better tools -- you'll probably find my automated estimator of effort and duration very interesting.


Greg Baker (gregb@ifost.org.au) is a consultant, author, developer and start-up advisor. His recent projects include a plug-in for Jira Service Desk which lets helpdesk staff tell their users how long a task will take and a wet-weather information system for school sports.

Tuesday, 16 February 2016

Draining the meeting bogs and how not to suffer from email overload (part 2)

This is the second in my series of blog posts about how we unknowingly often let our IT determine how we communicate, and what to do about it.

Teams need to communicate at three different speeds: Tomes, Task Tracking and Information Ping-pong. When we don't have the right IT support for all three, things go wrong.

In this post, I'll talk about team communication by Tomes.

Tomes are documents that answer "Who are we? What do we do? How did we get here? Why are we doing this?"

These are often important things to say, but they aren't particularly urgent. They might or might not be read by people inside the team. Quite often they are read by people outside of the team, but often weeks or months after they were written.

Examples include:
  • Project decisions
  • Strategic plans
  • Frequently asked questions
  • Status reports for long, ongoing projects 
  • Minutes of meetings
  • Discussions about any of the above

These are often tedious to write, and the task is often given to the person with the most free time using whatever tools are at hand. Generally, this means either email or MS-Word documents. There are serious flaws with both of these. Writing emails just contributes to the email overload problem. Writing a Word document and putting it into a fileshare (or on to Sharepoint) leads to more meetings because it won't get read. Emailing a Word document around (which is probably the most common approach) manages to be the worst of both worlds.

I've made an attempt at distilling the 5 requirements for a tool to support Tomes. If you can think of more, let me know.

NAME MENTIONING If someone is mentioned in a Tome they need to be notified automatically. This keeps everyone on the same page (virtually). A common miscommunication I've seen is where everyone was expecting a response or involvement from someone outside of the team -- but the individual was never aware that they were supposed to be doing anything. Automatic notification mitigates this.

TASK ASSIGNMENT AND DUE If a task is mentioned in a Tome, it should be easy to assign it to someone and to give it a due-by date. The task should automatically appear on that person's to-do list. Otherwise, it is too easy for the task to get lost.

LIVE TASK STATUS When someone marks off a task as completed from their to-do list, this should be reflected automatically in the Tome. If a task has a lifecycle (e.g. it has to be reviewed by another team before it can be marked off as "Done"), then the live status of the task should appear.

This saves time in meetings working out whether something has been done or not. One of the most frustrating meetings I can remember (and I've had a few) was a pointless nothing-to-report meeting: everyone admitted in the meeting that they hadn't had time to do anything and that there was nothing to report. This could have been trivially seen by the project manager who convened the meeting if he had been able to see the lack of progress on the task list in the minutes of the previous meeting.

IN-PAGE COMMENTS Comments about the Tome should appear with the tome. It should be easy to discuss and have a conversation about any part of the Tome.

CHANGE TRACKING AND WATCHING It should be easy to change the Tome. If it is changed, then it should be easy to see what changed even though the Tome displays the latest version by default. It should be possible to register an interest in changes to the Tome and get notified when this happens.



These seem obvious, and the list seems reasonably complete.

  • Email can do #1 (you can CC someone outside your team). If you are using Outlook you can drag the email into your todo list (and Gmail can do something roughly equivalent), so that satisfies #2. But #3 and #5 are impossible, and #4 by email is a recipe for giving everyone a full inbox that just gets dumped.
  • Word documents in a portal or fileshare do #1 very poorly; #2 is even worse. #3 is impossible, but they do OK on #4 and #5.
  • Slightly more innovative companies will use Google Docs. The collaborators list can act as #1, and it does as well on #4 and #5 as a Word document does. #2 and #3 are sort-of possible if the tasks are put into a Google Sheets document instead, but it is rather fragile, complex to set up and means that there are multiple documents for the one event or topic.
  • Startups often use Confluence. It nails all 5 requirements very well.



So what usually happens?

In large, traditional organisations, I see teams with Word documents being emailed around by a project manager, a deluge of emails (CC'ed to everyone the project manager sent the project tracking document to) and then meetings at least once a week so that everyone can work out whether or not they were supposed to have done something, and why that probably wasn't a high priority anyway.

Smaller organisations manage better, although the ones that try to do this with Google Docs tend to be very chaotic, and have someone always complaining about how they really haven't got their procedures and processes sorted out yet.

We think this mess is normal, but it's just driven by the software we use to intermediate our communications.

The next post in the series will hopefully be next Tuesday.

I'm hoping to put this (and many other thoughts) together in a book (current working title: "Bimodal, Trimodal, Devops and Tossing it over the Fence: a better practices guide to supporting software") -- sign up for updates about the book here:  http://eepurl.com/bMYBC5

If you like this series -- and care about making organisations run better with better tools -- you'll probably find my automated estimator of effort and duration very interesting.


Greg Baker (gregb@ifost.org.au) is a consultant, author, developer and start-up advisor. His recent projects include a plug-in for Jira Service Desk which lets helpdesk staff tell their users how long a task will take and a wet-weather information system for school sports.

Tuesday, 9 February 2016

Draining the meeting bogs and how not to suffer from email overload (part 1)


Some meetings are important; sometimes face-to-face is the best way to work through an issue. And email is a necessary business tool. But in many of the organisations I work with, I've seen meetings and emails used as a crutch because their staff aren't given what they need in order to work more efficiently.

I blame IT for this, perhaps too harshly, but IT should be thinking both about how individuals communicate, and also about the requirements for teams to communicate.

In general, there are three broad ways that teams communicate:
  • With Tomes that answer "Who are we? What do we do? How did we get here? Why are we doing this?"
  • Using different ways to say "We're working on it"
  • By playing Information Ping-pong


To be efficient, it's important that staff from outside the team can "lurk" (watch what is going on) without engaging the team across all three methods.

If other staff can't lurk -- they will either email you or ask for a meeting.

What I see all too often is desperate staff, who are over-worked because they are forced to use email and meetings -- tools which are very ill-suited to all three kinds of communication.

I'll discuss each of them in follow-up blog posts; I'll schedule them for Tuesday each week unless something else more interesting crops up.

I'm hoping to put this together in a book (current working title: "Bimodal, Trimodal, Devops and Tossing it over the Fence: a better practices guide to supporting software") -- sign up for updates about the book here:  http://eepurl.com/bMYBC5

If you like this series -- and care about making organisations run better with better tools -- you'll probably find my automated estimator of effort and duration very interesting.


Greg Baker (gregb@ifost.org.au) is a consultant, author, developer and start-up advisor. His recent projects include a plug-in for Jira Service Desk which lets helpdesk staff tell their users how long a task will take and a wet-weather information system for school sports.

Thursday, 10 December 2015

Celebratory 100th blog post - what people have actually been reading, and what freebies and junkets are on offer

This is the 100th blog post, and the counter of page views is about to tick over 50,000. Thank you for your readership!

According to Google's infallible stats counters, here's what most people have been reading on this blog:

POEMS - At the end of a day of being a computer nerd, you need something that will make you laugh (or at least smile) and also make you look cultured among your friends. Nobody else writes poetry about nuclear physics or time travel, so if you want to get that "I'm so hip" feel, you really should buy a copy of When Medusa went on Chatroulette for $3 (more or less, depending on your country of origin).

NAVIGATOR - If you run Data Protector then you will definitely get some value out of the cloud-hosted Navigator trial. You can get reports like "what virtual machines are not getting backed up?" and "how big will my backups be next year?" -- stuff that makes you look like the storage genius guru (which you probably are anyway, but this just makes it easier to prove it).

HOW LONG WILL IT TAKE - If you are tracking your work in Atlassian's fabulous JIRA task tracking system, then try out my free plug-in (x.ifost.org.au/aeed) which can predict how long tasks will take to complete. And if you are not using JIRA, then convince everyone to throw out whatever you are using and switch to JIRA because it's an order of magnitude cheaper, and also easier to support.

TRAINING COURSES - You can now buy training online from store.data-protector.net -- and it appears that it's 10-20% cheaper than buying from HPE directly in most countries. There are options for instructor-led, self-paced, over-the-internet and e-learning modules.

SUPPORT CONTRACTS -  Just email your support contract before its renewal to gregb@ifost.org.au and I'll look at it and figure out a way to make it cheaper for you.

BOOKS - If you are just learning Data Protector, then buy one of my books on Data Protector (available in Kindle, PDF and hardback). They are all under $10; you can hide them in an expense report and no-one will ever know.

Tuesday, 24 November 2015

Three rules of thumb that say "this job is going to take ages"

As you might know, I've written a robot called AEED which looks at work tickets and predicts how long that job is going to take. I designed it to solve the black-hole problem of service desks: instead of "we'll get back to you", it's "your request will take around 4 days". If you happen to run JIRA in the Atlassian cloud (here's the marketplace link) or HP Service Manager it's definitely worth a look.

As it turns out, it's also giving surprisingly good predictions for software development and other kinds of projects too.

Anyway, some fun: now I've got some comprehensive data across a good number of organisations.

Rule of Thumb #1: Each hand-off adds about a week. 


Initially I didn't believe this: whenever I've done sysadmin work, if someone assigned me a ticket I couldn't do anything about, I would just pass it on to someone more appropriate as soon as I saw it -- a few minutes at most.

But that's not quite it, is it? The ticket sits in your queue for a while before you get to it. Then you second-guess yourself, wondering whether it really might be something you are supposed to handle, and waste a bit of time investigating, during which you will get interrupted by something higher priority, and so on. Then you might need some time to research who it is supposed to go to.

I've seen numbers as low as 4 days per re-assignment in some organisations, stretching up to two weeks for others.

What percentage of tickets that have been re-assigned in the past are correctly re-assigned this time?


Whenever an organisation gets bigger than Dunbar's Number, the mis-assignment rate starts shooting up (tickets can get re-assigned an absurdly large number of times, as in the chart above). Which makes sense: no longer does anyone know what everyone is doing. (I've got a long-ish presentation about this.)

Implications: 

  • Do you know exactly who is going to do this task? If not, add a week at least.
  • Is your organisation so large that it takes time just to find the right person? If so, sign up for the next beta of AEED.


Rule of Thumb #2: Meetings slow things down.



I didn't expect that just having the word "MEETING" appear in a JIRA ticket would be a signal of a long duration. But wow, does it ever make a difference: 8-10 standard deviations of difference in how long the ticket will take!

The actual effect is often only a few days extra, but it's a very, very definite effect.

Some people might interpret this as saying "don't have meetings, meetings are bad". I don't think this is what is going on here; if a customer or tech is writing a work ticket and mentions a meeting, it probably means that the issue really can't be resolved without a meeting. If an INTP or INTJ computer geek says that a meeting is required, it's very unlikely that they are having a meeting for the sake of having a meeting.

But, any meeting has to fit into everyone's work schedules, and that can introduce delays.

Implications:

  • Do you need to meet anyone in order to complete this job? Add a few days.
  • This is the kind of thing that AEED tells you. Install it now while we're still in beta and not charging for it.

Rule of Thumb #3: Not much gets done on Wednesday

I was digging for data to show that Friday afternoon is the worst time to raise a helpdesk ticket, because everyone would be slacking off. It turns out not to be the case at all: in all the organisations I have data for, everyone is more-or-less conscientious. Perhaps they make a Friday-afternoon ticket a top priority for Monday, or otherwise make up for lost time.

But Wednesday? It's not true in every organisation, but it was there for quite a few.

I can only guess why. My guess is that people have less time on Wednesdays because they are called into more meetings. Nobody would organise a meeting for a Friday afternoon if they can avoid it; likewise Monday morning is often avoided. But need to call a meeting on a Wednesday? No problem!

Does anyone have any data on this? Maybe Meekan or x.ai might know, or anyone from the Google Calendar team? Or does anyone have any good explanations for the Wednesday effect?

Implications:

  • Assume you have a 4.5 day week. Pretend that Wednesday is a half day, because that's about all the work you will get done on it.
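
For what it's worth, the three rules of thumb can be combined into a back-of-envelope estimator. This is only an illustrative sketch, not the AEED model (which is a trained predictor); every constant below is an assumption drawn from the rough numbers above, not a measured value.

```python
# A back-of-envelope sketch combining the three rules of thumb.
# All constants are illustrative assumptions, not measured values.

def estimate_days(base_days, handoffs=0, mentions_meeting=False):
    """Rough ticket-duration estimate in working days."""
    days = base_days
    days += handoffs * 5           # Rule #1: each hand-off adds about a week
    if mentions_meeting:
        days += 3                  # Rule #2: "MEETING" in the ticket adds a few days
    # Rule #3: Wednesday is effectively a half day, so a 4.5-day week;
    # stretch the estimate by 5/4.5 to get actual elapsed working days.
    return days * (5 / 4.5)

# A 2-day job with one hand-off that needs a meeting:
print(round(estimate_days(2, handoffs=1, mentions_meeting=True), 1))  # → 11.1
```

In other words, a "two-day" job can easily blow out to more than two weeks once a hand-off and a meeting get involved, which matches what the ticket data shows.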


Greg Baker (gregb@ifost.org.au) is a consultant, author, developer and start-up advisor. His recent projects include a plug-in for Jira Service Desk which lets helpdesk staff tell their users how long a task will take and a wet-weather information system for school sports.

Wednesday, 11 November 2015

An Atlassian story

A little while back I was talking with two Daves.
  • Dave #1 is the CIO of a college / micro-university
  • Dave #2 is on the board of a small airline. 
Because I tend to get involved in non-traditional projects, they often ask me what I'm working on, probably out of amusement more than anything else. At the time I was building a service catalog for Atlassian (and now I have an awesome plug-in on their marketplace).

Neither had any idea of who Atlassian is or what it does, which was no surprise to me. I don't quite understand why an Australian company with $200m+ in revenue and a market cap in the billions which isn't a miner, telco or bank isn't memorable.

Still, the tools Atlassian makes are mostly used by software developers, and my circle of friends and acquaintances doesn't include many coders so "Atlassian makes the software to help people make software" isn't a good way of describing what they do.

Other than a brief stint at Google, I haven't been a full-time employee or even long-term contractor in any normal company this century, so instead I talked about what's unusual and different at Atlassian based on all the other organisations I've worked with. I talked about how the very common assumptions about the technologies to co-ordinate a business are quite different here.

For each type of communication, here is the corporate default (aka "what most companies do") versus what is generally done at Atlassian:
  • Individual-to-individual. Default: email, or talk over coffee. Atlassian: HipChat @-mention the individual in a room related to the topic.
  • Individual-to-group. Default: teleconference / webinar, for all matters big or small. Atlassian: a comment in the HipChat room; or sometimes Google Hangouts or HipChat video conference if it's something long and important.
  • Reporting (project status, financials). Default: an Excel document or similar. Atlassian: a Confluence status page or JIRA board.
  • Proposals. Default: a Word document or Powerpoint presentation. Atlassian: a Confluence page.
  • Feedback on proposals. Default: a private conversation with the person who proposed it, or maybe on another forum page somewhere else on their sharepoint portal. Atlassian: the discussion in the comments section of the Confluence page.

And yeah, my secret superpower is to be able to narrate HTML tables in speakable form. There was a lot of "on the one hand... on the other hand at Atlassian..."

Note that the Atlassian column is all public and searchable (in line with being an open company), and the "default corporate column" is not. Also, in the Atlassian column, you opt-in to the information source; in the "traditional company" column, the sender of the information chooses who to share it with.

Why is this interesting? Because the Atlassian technologies and the Atlassian way of doing things are an immune system against office politics.

In order to get really nasty office politics, you need information asymmetry: managers need to be able to withhold information from other managers in order to get pet projects and favourite people promoted, and to drag others down by releasing information at the worst possible moment, when the other party can't prepare for it. (Experience and being middle-aged: you see far too much which you wish you hadn't.) When information is shared only with the people you choose to share it with, that's easy to do.

At Atlassian, it's not quite like that. Sure, there are still arguments and disagreements. Sometimes there is jostling for position and disagreements about direction and there are people and projects we want to see happen. And not every bad idea dies as quickly as it should. But because other interested parties can opt-in as required for their needs it's a very level playing field and ---

And that's where Dave #1 cut me off, shook my hand and said, "Wow. Thank you. That's exactly it. That's EXACTLY it."

Dave #2 just nodded sagely. "Yes," he said, and then again more slowly: "yes."

Summary: Atlassian sell office-politics treatments.

Greg Baker (gregb@ifost.org.au) is a consultant, author, developer and start-up advisor. His recent projects include a plug-in for Jira Service Desk which lets helpdesk staff tell their users how long a task will take and a wet-weather information system for school sports.

Saturday, 4 April 2015

Gen-Y businesses

Consider a company established by Gen-Y founders who have hired staff mostly younger than themselves -- selling heavily to Gen-Y customers: it makes for an interesting client when you are a paunchy and greying middle-aged consultant. With the curious feeling that I had stepped into a mirror universe, or a not-quite-done-right simulation of reality, I also felt a lot like the boss character from Atlassian's latest Hipchat videos only without the ability to summon my version of normality into existence.

In a week on-site, I think I saw two desk phones. One sat idle at reception, officially shared between three staff; the other was half-buried under a pile of cables in the customer support area. I presume they were both still functional, but I never heard them ring.

I noticed the buried phone while I was in their customer contact centre. I started asking about the way they handle logging of tickets from customers, and how there were some techniques to ease the time pressure to get things in place and dispatched while on the phone. (Yes, I still think that the technology behind Queckt deserves another chance.)

The manager looked at me, a little confused, and explained that less than 0.001% of their customers call via telephone in a given week. It has to be a total emergency before a customer reverts to calling human-to-human. Their customer contact centre receives two to three calls per week. Presumably Gen-Y customers are so used to the idea that a call centre will be staffed either by a robot automaton or offshore resources that it simply never occurs to them that there would be anything gained by a phone call.

You might therefore expect that mobile-to-mobile calls or SMS were common communication methods. Nope, I didn't notice this happening either. At a guess, there is an unwritten rule of etiquette that a message to someone's mobile is directed at them personally, and so would be inappropriate for work-related information.

It was text-chat everywhere, in hundreds of topic-based chat-rooms which are kept forever as a historical record. There were some private chat messages to individuals to follow up on some minor point, but mostly it was text chat messages in rooms even when addressed to an individual. This led to surprisingly few repeats of information, and a weird multi-threaded conversation that spread across time and across people: "I scrolled back to find what she said..." and "Searched history: last month we were ..."

This led to a near-complete absence of email. I have never received so few emails on a project before. Email is predominantly used to send calendar invitations, which doesn't happen much because sit-down meetings are less common and only for more serious matters. Every meeting I was in this week had someone (sometimes several) taking copious notes on a laptop, because if everyone had taken a big chunk of time off from their work together to gather in a particular place, this was obviously an Important Event Which Had To Be Documented.

Pulling out my smartpen to record a session and write with ink on audio-linked notes gathered the same responses as a cute piece of steampunk technology in a cosplay would have. In most of my other clients, I'll have a manager or two itching to find out where to get them the moment I turn it on. Here: a polite nod of acknowledgement.

I think there were five factors driving the importance and annotation of sit-down meetings.

  • My bias. If I was present, it was a meeting with an expensive outside consultant, so no surprises there that it would be taken more seriously.
  • The idea of taking notes on a laptop is something that many Gen-Y folks have been doing since high school, so it carries through to the workplace.
  • Gen-Y workers have grown up with their whole lives documented and recorded. Every birthday, grand final and graduation was captured at least on camera, and possibly on video. Baby boomers relying on memory alone for important events presumably seems like a bizarre collective amnesia to Gen-Y. To some extent, an unrecorded meeting might not quite feel real.
  • Ritualised, informal and short stand-up meetings were fairly wide-spread. There is a reason that Agile-methodology stand-up meetings get used -- they can be very effective.
  • Debate and consensus forming were done on-line -- very, very effectively.
Let me explain the significance of that last point. Management theorists have studied the process of reaching consensus in an organisation. There are hierarchic autocracies where decisions come from on high, and the lower down in the chain you are, the less your contribution can take effect. This leads to a deeply disenchanted workforce (as witnessed at IBM at the moment) but can mobilise resources at vast scale. There are democratic processes. There are "consultative change" processes where thoughts and feedback are gathered by specialist consultants to be assembled as an integrated whole. There are ad-hoc mechanisms such as email flamewars between middle managers until someone gives in.

Ultimately, such processes often reflect the organisations' origins or the fad of the day when the culture was created -- military, professional, creative and so on.

Gen-Y staff are used to discussions on web forums. They are used to wikis. It seems perfectly natural to put up a proposed strategy on a wiki page, and let the entire organisation debate on it. Those that care deeply about the issue will put up their arguments, and if the debate gets too intense, those that care less about it will slowly drop off by de-subscribing to notifications on the page. It's civil and yet nevertheless gets issues aired. 

Consensus may not be formed, so there is still a role for management, but for every decision, the "why did we choose that path?" is ever so clearly documented.

In contrast, a typical Gen-Y employee with that sort of background might look at a sharepoint site where the content and comments are set up to be separate and think of it like an overhead projector for displaying transparencies -- something that clearly has a place and serves a definite purpose, but for which that place or purpose belongs in a museum or second-hand shop rather than being a useful part of day-to-day existence. 

Shared drives are a real curiosity where a sysadmin who had experience in setting them up on dedicated hardware gets rewarded in praise for their breadth of knowledge.

Overall, the most striking factor coming out of the use of wikis and group text-chat instead of shared-drives and private emails was the deep and pervasive honesty about everything. It's hard to obfuscate or hide when any commentator might make a reference to that dark secret you don't want to let out. Want to know which teams are making their targets and which are not? It's not hard to find out.

Lest I seem too positive, there are down-sides to youthful remastering. Lack of perspective and experience in a young company is normal, but it is still amusing to hear a senior manager explaining that keeping Fortune 500 customers happy is a really good idea and why large enterprises can be a good source of steady income and growth. Also I worry (probably unnecessarily so) about implicit sexism and ageism when the vast majority of staff have not yet started a family.


Business continuity is a mixed bag. There are only weak dependencies on particular sites -- if a building becomes unavailable, it would only be an inconvenience. Teams would re-form via text chat groups fairly fluidly. There probably isn't any crucial data living in a file server that would need to be restored in a hurry. Accounting and other functions are all in the cloud and therefore somewhat insulated from any local disasters. But that very fluidity that makes recovery so natural means that it would be very hard to tell what unexpected consequences there might be.


I'm painting with a broad brush -- not everyone is young, there are still emails being sent (occasionally), phones would be ringing somewhere, there are bound to be many unimportant meetings, and even some important meetings being left completely unrecorded, and sadly, there are probably lies being told as well. So it's not universal or absolute, but the approach to and adoption of technology does drive culture in a certain direction.

It's certainly been one of the most interesting clients I've worked with. I can't help shaking the feeling: if I found working in a predominantly Gen-Y company noticeably different, what are Gen-Y workers experiencing when they work for companies that are predominantly baby-boomer or Gen-X? And am I seeing an amusing niche event that has just happened for this time and this place or have I been seeing the future of work for everyone?

Greg Baker (gregb@ifost.org.au) is a consultant, author, developer and start-up advisor. His recent projects include a plug-in for Jira Service Desk which lets helpdesk staff tell their users how long a task will take and a wet-weather information system for school sports.