
Tuesday, 14 March 2017

Going off-line for Lent, some nostalgia and a request for software development teams

I gave up always-on connectivity for Lent.

Well, truth be told, that wasn't quite how I planned it. It just happened, and it wasn't for the whole of Lent. On my most recent trip out of town I was worried about theft and damage more than I normally am. Given that a "normal" business trip for me often involves third-world cities with 90% unemployment and an hourly murder rate, you can guess what horror I was dealing with: teenagers.

So this is what I took:



That's a Nokia 110 and a very old IBM laptop running Linux. The laptop did have wifi, but no Bluetooth. The Nokia had very basic Bluetooth, but no capability to tether a laptop for internet access.

I'm not a luddite, so I set myself up as well as I could with the hardware available. That's a modern Linux distribution -- which in itself is interesting: no one would try doing serious work on a Windows 98 system, which is probably what that laptop once ran.

Between bouts of nostalgia, I paused for reflection: roughly 15-20 years ago I was working in an environment very much like this, often programming in the lobby of some hotel after delivering a tech class in a distant city. What have we gained, and what have we lost?


What was difficult

Not being online was a moderate challenge.

  • All my accounting and business systems are SaaS, so I couldn't do anything about the backlog of bank reconciliations that I need to catch up on.
  • Navigation was a problem. It's been a long time since I've navigated without GPS and it took a while to get used to it.
  • I couldn't look up the solution to any programming problem, so I defaulted to being very conservative in what I worked with: just the stuff I knew well enough that I wouldn't need to look anything up.
  • No Slack messaging.  I came back home to a Slack constellation of red numbers on just about every channel.  I'm running a mid-sized data science / AI class at the moment and a large number of students had homework questions and submissions.

Perhaps a bit of better planning (both short-term and long-term) would have resolved these problems. Maybe if I'd chosen GnuCash or OpenERP instead of Saasu, accounting wouldn't have been a problem. I could have carried around a physical GPS device, or taken some cheap device that could run Google Maps offline. I could have cached lots of documentation (as I used to do), but a lot of websites and tutorials don't lend themselves to that very easily. I could have installed a Slack client, but with long round-trip delays it would have been functionally equivalent to email.

What surprised me is what I struggled with the most. Photography was terrible. I'm not a serious photographer, but I take quite a few pictures on my usual phone, which backs photos up automatically, after which they get turned into panoramas and stories and so on. But the Nokia camera has awful resolution, and I don't even have a way of getting the pictures off it. We have raised our expectations of acceptable photos and videos very quickly in the last decade. This doesn't seem to be slowing down, so what sort of cameras will we be using in 2025?

Then the other big problem was ergonomic. The laptop was clunky and heavy; it didn't fit in my backpack very well and the battery life was terrible. It was probably mediocre at the time, but between the decay of the lithium battery and our increased expectations of what is normal, I found it frustrating. I was chained to the wall. I couldn't move around or choose to sit outside to work for a while.

The Nokia had the opposite problem. It was so small I kept thinking I'd lost my phone, not realising that it was still in my pocket.


Where the internet has made no difference

By running a modern Linux distribution, I took advantage of Linux's heritage, steeped in a world where being off-line was normal, and internet connectivity was brief: a world where we aren't bound to monthly subscription SaaS services.
  • Instead of Google Docs, I wrote text in emacs. I could have used something more conventional (e.g. LibreOffice, or even Microsoft Office running in Wine on Linux) but why bother? Real-time collaboration would have been a problem of course, but the vast majority of documents I work on have only one author.
  • Email and backups were batch-mode: I was able to get to wifi intermittently during the trip (e.g. at the airport). I suppose pre-wifi this would have been like the times when I dialled up to get online. The IBM laptop did actually have a built-in modem, so I could have done a full nostalgia trip and listened to it squawk if I'd known of any ISP left in the country that still offered dial-up services.
  • I just ran git push less frequently than I normally would, but everything else in git works as well off-line as it does online.
  • All my data science tools were there on my laptop. The night before I left I'd run pip install jupyter ; pip install scikit-learn and it had finished by the time I packed my laptop up. We think of big data, AI and machine learning as modern things that require giant server farms, but most of the prototyping and testing can be done on sample datasets small enough to fit on even quite an ancient laptop -- there's a sketch of what I mean just after this list.
  • My wife called using a phone number instead of a WhatsApp contact. Skype redirects to my mobile number anyway, so no-one would have noticed anything if they had tried to Skype me.
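
For the curious, here's roughly what I mean: a minimal sketch that runs entirely off-line once jupyter and scikit-learn are installed, because it only uses a toy dataset that ships inside scikit-learn itself (nothing here is from my own project; it's just an illustration).

    # Everything here ships with scikit-learn, so no network access is needed.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    # Cross-validate on the training set, then check against the held-out test set.
    print(cross_val_score(model, X_train, y_train, cv=5).mean())
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))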


Hearken to a simpler time


I'm not looking at the past through rose-tinted glasses: I'm aware that there were some serious limitations. But what I found fascinating is that there were some advantages.


  • I slept better. There was no point in checking my phone for email before I went to bed. Maybe the Nokia 110 has a Snake game or something, but all I did was set an alarm. Many hours later, the alarm rang. Then I got up. That was the entirety of my interactions with the phone in bed.
  • Focus was easy: I wrote more text in one afternoon than I have done in weeks, possibly more than I've done this year. There were no interruptions. No email, no Slack, no chance to waste time browsing reddit or Hacker News.
  • There was so much spare time! At the end of the day I read, uninterrupted.
  • Breaks were real breaks. I had to put my laptop down, because it couldn't go anywhere away from AC power for very long. I would walk around and get some fresh air. I watched some children play hide-and-seek for a few minutes because I couldn't bury my face in my phone. Well, I could, but it was pointless.  There was nothing there.


What we need to resolve once and for all

This was all extremely illuminating, given that my most recent data science project was for a famous company that runs a lot of software development projects.

I analysed around 200 projects and identified some factors that can predict when a project is going to miss deadlines. I've done this before for other customers -- I've got it mostly automated for JIRA and it drives the smarts in http://www.queckt.com/ if you want to take a look at it.

In most organisations there is one extremely clear predictive factor associated with missed deadlines: the word "meet" or "meetings" appearing in a JIRA ticket. But this company runs development fully remote. You don't ever meet in person: it's all communication via Slack, teleconference or whatever else the team wants to use. So the usual "meeting" factor predicted nothing. It didn't occur often enough to be terribly useful anyway.

But what did predict a deadline in peril on a fully-remote project was the density of Slack communication. If there is intense back-and-forth within the development team -- large numbers of messages exchanged with only short gaps between them -- then don't set your hopes on the next release being on time.
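
To make those factors concrete, here's a hypothetical sketch of the kind of features involved. It isn't my actual pipeline (which is built around JIRA and Slack exports and isn't described in this post), and the function names and inputs are made up for illustration.

    import re
    from datetime import timedelta

    MEETING_WORDS = re.compile(r"\bmeet(?:ing|ings)?\b", re.IGNORECASE)

    def meeting_mentions(ticket_text):
        """How often a JIRA ticket's text talks about meetings."""
        return len(MEETING_WORDS.findall(ticket_text))

    def slack_message_density(timestamps, window=timedelta(days=7)):
        """Messages per day in the week leading up to the most recent message."""
        if not timestamps:
            return 0.0
        cutoff = max(timestamps) - window
        recent = [t for t in timestamps if t >= cutoff]
        return len(recent) / window.days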

At the time, my interpretation of this was simply that there must be a lot of confusion about what's required and that a lot of communication is going on trying to resolve it.

But now I'm wondering: what if the Slack communication is not just a proxy for confusion? What if it is causative? When I couldn't interrupt myself, nor be interrupted, I was far more productive. Programming is a cerebral activity -- even more than writing -- usually done alone: is it better to take long periods of uninterrupted thought -- even if it means reducing communication with team mates?

This is not just an academic question. One of these two things must be true and the implication is obvious:

  • Either instant messaging and always-on communication is a net benefit to software development, or at least not particularly harmful -- in which case, if mandating instant messaging gets the best out of our dev teams, should we do it?
  • Or instant messaging and always-on communication in a development team is a bad idea -- in which case, if banning instant messaging gets the best out of our dev teams, should we do that?


So let me announce here a project: I would like to measure this. I would like to create a distraction index measure, derived from corporate emails, instant messaging, and so on. Then I'd like to see how well the different distractions correlate with late projects, delays, milestone slippages, etc.
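
As a rough idea of what the measurement might look like (every definition below is a placeholder -- working out the right ones is exactly the point of the project):

    from scipy.stats import pearsonr

    def distraction_index(inbound_messages, focus_hours):
        """Placeholder definition: inbound messages per hour of nominal focus time."""
        return inbound_messages / focus_hours if focus_hours else 0.0

    def correlate_with_slippage(indices, slippage_days):
        """How strongly does the distraction index track how late each project ran?"""
        r, p_value = pearsonr(indices, slippage_days)
        return r, p_value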

If you are in a company that has a good number of software projects on the go, and it's worth a bit of money to you to know how to optimise your developers' time, I'd like to talk to you. I've got code to analyse a bunch of other useful predictors too -- such as the language used in git commit messages, the topics used in your JIRA tickets, and numerous others -- so whatever happens you'll get something worthwhile out of this.

Obviously, if you are working for Slack, or Atlassian, or another instant messaging company, and you'd like to know your impact -- for better or for worse -- on your customers, I'd really like to talk to you as well.

Get in touch with me here: [email protected]

Monday, 16 January 2017

Farewell to cold winters, and hello endless summer in Sydney

I was preparing some materials for this intro to Python workshop and wanted to have some interesting data. We're having a heatwave in Sydney at the moment, where the temperatures at night are still stiflingly hot and sleeping is difficult. So I thought something about hot nights and memories of cold days might be nice.

Sydney "feels like" winter when the maximum temperature is below 20. It means you have the heater on (or it means I need to light a fire). It feels like summer when the minimum temperature is above 20;  you have to sleep with at least a fan but you can dive into the pool or sea any time of the day or night and it isn't uncomfortable.

Here's how many winter-like days there were vs the number of summer-like days.



So is Sydney becoming the city of endless Summer? Who wants to take a bet on the first year when there are more summer-like days than winter-like days? I'm guessing around 2020.

I've saved the Jupyter notebook (including where to get the source data) here:
https://github.com/solresol/python-workshop/blob/master/curriculum/02-materials/code/farewell-winter-endless-summer.ipynb -- feel free to modify it and/or try it out for data in your city.
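
If you just want the gist without opening the notebook, the counting logic boils down to something like this. It's a sketch that assumes a CSV with one row per day and columns called date, min_temp and max_temp; the notebook has the real source-data details.

    import pandas as pd

    # Hypothetical input file: one row per day with minimum and maximum temperatures.
    df = pd.read_csv("sydney_daily_temps.csv", parse_dates=["date"])
    df["year"] = df["date"].dt.year

    yearly = df.groupby("year").agg(
        winter_like=("max_temp", lambda s: (s < 20).sum()),   # heater weather
        summer_like=("min_temp", lambda s: (s > 20).sum()),   # fan-at-night weather
    )
    print(yearly)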



Wednesday, 7 December 2016

Artificial Intelligence (#AI) development in Sydney around #Atlassian and #JIRA

Well, it's boastful to say it, but we just received a nice little certificate from Craig Laundy (assistant federal minister for innovation) and Victor Dominello (NSW state minister for innovation). It says "Best Industry Application of AI/Cognitive" for the Automated Estimator of Effort and Duration.

Actually, we were just highly commended, and it was stratejos that won, but what is interesting about all this is that the whole AI/Cognitive category went to artificial intelligence JIRA plug-ins.

Firstly, Atlassian has won the enterprise. When 100% of the top-tier startups targeting large organisations are developing for your platform exclusively, it's only a matter of time.

Secondly, AI is hot technology in Sydney with political capital. We think of Silicon Valley as being the centre of the universe for this, but I've never seen US Federal and Californian state senators getting together to express their commitment as we saw in this award.

Thirdly, this means you really should try out AEED: http://www.queckt.com/

Thursday, 29 September 2016

DataProtector technical workshop in New Zealand


Email [email protected] if you are interested. It's on next week.

Date: Wednesday, 5 October 2016
Time: 8.30am to 1.00pm

Location:  Hewlett Packard Enterprise Office
Level 4, 22 Viaduct Harbour Avenue, Auckland 1011

Brandon and Paul Carapetis will be talking about:
  • New features from the latest versions of Data Protector
  • Integration with VMware, Hyper-V, and HPE hardware: 3PAR and StoreOnce
  • Road map for the year ahead
  • Introduction to New Related Products: Backup Navigator, Connected, Storage Optimizer, and VM Explorer

Thursday, 22 September 2016

Really proud of my students -- machine learning on corporate websites

Amanda wanted to know what factors influence how people engage with the websites they visit. But along the way, her exploratory analysis found out some of the secrets of big software companies.

She used a sitemap crawler to look at the documentation of the usual names: Microsoft, Symantec, Dropbox, Intuit, Atlassian, CA, Trello, Github, Adobe, Autodesk, Oracle and some others. She looked at the number of words on each page, the number of links and various other measures and then clustered the results.
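
This isn't her actual code, but the shape of the analysis is roughly the following, assuming the crawler has already produced a table of per-page measures (the column names here are invented for the sketch).

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # Hypothetical crawler output: one row per documentation page.
    pages = pd.read_csv("page_features.csv")
    features = pages[["word_count", "link_count", "click_depth"]]

    # Put the measures on a common scale, then cluster the pages.
    scaled = StandardScaler().fit_transform(features)
    pages["cluster"] = KMeans(n_clusters=4, random_state=0).fit_predict(scaled)

    # See which companies end up grouped together.
    print(pages.groupby("cluster")["company"].value_counts())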

Like the best of all data science projects, the results are obvious, but only in retrospect.

Microsoft, Symantec and Dropbox are all companies whose primary focus is on serving non-technical end-users who aren’t particularly interested in IT or computers. They clustered into a group with similar kinds of documentation.

CA, Trello and Github primarily focus on technical end-users: programmers, sysadmins, software project managers. Their documentation clustered together. Intuit and Atlassian were similar; Adobe and Oracle clustered together.

Really interestingly, it's possible to derive measures of the structural complexity of a company. Microsoft is a large organisation with silos inside silos: it can take an enormous number of clicks to get from the default documentation landing page to a "typical" target page. Atlassian prides itself on its tight teamwork and its ability to bring people together from all parts of the organisation, and it had the shortest path of any documentation site.

But this wasn't what Amanda was really after: she wanted to know whether she could predict engagement -- whether people would read a page on a documentation site or just skim over it. She took each website that she could get data for, estimated how long each page would take to read (based on the number of words), and measured how long people were actually spending on the page, the number of in-links and a few other useful features (e.g. what sort of document it was). She created a decision tree model and was able to explain 86% of the variance in engagement.
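
Again, a sketch rather than her model: assume "engagement" is measured as time actually spent on a page relative to how long it should take to read, and that the feature columns below exist in the crawler's output.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    pages = pd.read_csv("page_features.csv")   # hypothetical, as in the sketch above
    X = pages[["estimated_read_seconds", "inbound_links", "doc_type_code"]]
    y = pages["seconds_on_page"] / pages["estimated_read_seconds"]   # crude engagement measure

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_train, y_train)
    print(model.score(X_test, y_test))   # R^2: the fraction of variance explained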

Interesting result: there was little relationship between the number of hyperlinks that linked to a site and how much traffic it received. Since the number of links strongly influences a site's PageRank in Google's search algorithms, this is deeply surprising.

There was more to her project (some of which can’t be shared because it is company confidential), but just taking what I’ve described above, there are numerous useful applications:
  • Do you need to analyse your competition’s internal organisational structure? Or see how much has changed in your organisation in the months after an internal reorg?
  • Is your company’s website odd compared to other websites in your industry?
  • We can use Google Analytics and see what pages people spend time on, and which links they click on, but do you want to know why they are doing that? You know the search terms your visitors used, but what is it that they are interested in finding?

Wednesday, 14 September 2016

Really proud of my students - AI analysis of reviews

Sam wants to know what movies are worth watching, so he analysed 25,000 movie reviews. This is a tough natural language processing problem, because each movie only has a small number of reviews (fewer than 30). That's nowhere near enough for a deep learning approach to work, so he had to identify and synthesise features himself.

He used BeautifulSoup to pull out some of the HTML structure from the reviews, and then made extensive use of the Python NLTK library.

The bag-of-words model (ignoring grammar, structure and position) worked reasonably well. A naive Bayesian model performed quite well -- as would be expected -- as did a decision tree model, but there was enough noise that a logistic regression won out, getting the review sentiment right 85% of the time. He evaluated all of his models with F1, AUC and precision-recall, and used these to tweak the model and nudge its accuracy a little higher.

A logistic regression over a bag-of-words essentially means that we assign a score to each word in the English language (which might be a positive number, a negative number or even zero), and then add up the scores for each word that appears in a review. If the total is positive, we count the review as positive; if negative, the reviewer didn't like the movie.

He used the Python scikit-learn library (as do most of my students) to calculate the optimal score to assign to each English-language word. Since the vocabulary he was working with was around 75,000 words (he didn't do any stemming or synonym-based simplification) this ran for around 2 days on his laptop before coming up with an answer.
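
For readers who want to see the general shape of that, here's a sketch (not Sam's code), assuming the review texts and their positive/negative labels are already loaded into two Python lists.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    def train_sentiment_model(texts, labels):
        """Bag-of-words + logistic regression: learn one score per word."""
        vectoriser = CountVectorizer()            # word counts, ignoring grammar and order
        X = vectoriser.fit_transform(texts)
        model = LogisticRegression(max_iter=1000).fit(X, labels)
        # Pair each vocabulary word with its learned score (positive, negative or near zero).
        word_scores = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
        return model, vectoriser, word_scores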

Interestingly, the word “good” is useless as a predictor of whether a movie was good or not! It probably needs more investigation, but perhaps a smarter word grouping that picked up “not good” would help. Or maybe it fails to predict much because of reviews that say things like “while the acting was good, the plot was terrible”.

Sam found plenty of other words that weren’t very good predictors: movie, film, like, just and really. So he turned these into stopwords.

There are other natural language processing techniques that often produce good results, like simply measuring the length of the review, or measuring the lexical dispersion (the richness of vocabulary used). However, these were also ineffective.

What Sam found was a selection of words that, if they are present in a review, indicate that the movie was good. These are “excellent”, “perfect”, “superb”, “funniest” and interestingly: “refreshing”. And conversely, give a movie a miss if people are talking about “worst”, “waste”, “disappointment” and “disappointing”.

What else could this kind of analysis be applied to?
  • Do you want to know whether customers will return happier, or go elsewhere looking for something better? This is the kind of analysis you can apply to your communications from customers (email, phone conversations, Twitter comments) if you have sales information in your database.
  • Do you want to know what aspects of your products your customers value? If you can get them to write reviews of your products, you can do this kind of natural language processing on them and you will see what your customers talk about when they like your products.