Tuesday 1 November 2016

Getting the BAs to Step off the Ladder

We were in a planning meeting and we seemed to have a lot of inventory. The BAs seem very keen to talk about things in the medium horizon, but we as developers are crying out for the stories to be curated to a standard that makes them truly ready for development. Increasingly they are concerned with maintaining the medium and long term inventory of the backlog to satisfy some high level business questions about what the program may deliver before the end of this year and in the first quarter of next year. In talking this over with my colleague, he drew a picture describing the problem (he just sighed and said "anyway, rant over", by the way). This is the picture we came up with:



On the top left we see the backlog depicted as a series of things with BAs looking after it. At the bottom we see a group of developers (us) with a pipeline of work that needs feeding. There are a load of stories that are (nearly) ready for dev on the left, but somebody needs to put them on that pipeline to us. We need a way to make the BAs do this, but they are up there in the top left. So this got us thinking: "why are they up there?" And, more importantly for us now, "how do we get them off the ladder?" It looks like a ladder up there, that backlog!

We think this is a symptom of the fact that our program has some aggressive delivery promises in place and that the program management is being handed over to a different business unit. The BAs and PMs are trying to answer the question of "what can we do in the next 3 months?" There may be other things in play as well; in particular, we think that some of the BAs are more comfortable "on the ladder" and may want some of the other BAs to occupy the space down below near the slide.

So how do we get the BAs to come down off the ladder and help the stories into the development pipeline? It feels like the answer is to own verticals in the analysis space: to own stories from the ladder (the backlog and the medium-term horizon) right down to their being properly ready for development. They need to be willing to get more down and dirty with what the implementation pitfalls may be and what information we will need to start the development. Too many of them seem to be comfortable up there on the ladder. This may be a hangover from waterfall days; it may be that previously, before we taught them Agile, they could throw things over the wall and somebody else (a PM maybe?) would pick it up and complete the analysis.

So I guess the conclusion is that devs / tech leads need to climb up the ladder and BAs need to descend toward the slide. We all need to be willing to overlap our function as much as possible so that we don't throw things over the wall, however small that wall may be. The key is that we should be willing to overlap our influence so that the pipeline can continue to run smoothly without any blockages.

Finally, here is another diagram we drew while I was typing this. We have the "ladder of backlog" above the "slide of development". Devs and QAs live on the slide; Product Owners and BAs live on the ladder. Tech leads and devs need to live on both and be prepared to push upwards. Product Owners live near the top of the ladder, while BAs need to occupy the whole ladder and some of the slide.


With thanks to my colleague Matt Belcher for the diagrams and the discussion that we were having as I typed this.

Wednesday 26 October 2016

Convincing a client to invest in the build pipeline with DAFF

The Problem

Recently we were working for a client whose IT function was displaying several common pathologies. They had (still have, but it is improving as we are still there!) the classic inverted test pyramid, about 10 test teams, massive product backlogs that only ever grow, mistrust of their IT function in the wider company, no real understanding of continuous delivery... All of this adds up to a state of development stasis. Releases can take months and rarely add new value. Even when new features are added, the uptake around the company is low to non-existent.

The most disturbing aspect of this general dysfunction, to me, was the realisation that there are several different test teams, comprising dozens of people, who add little to no value to the company. Management at some point created these layers of testers in response to poorly managed releases, in an attempt to restore confidence in the release process. This has had the exact opposite effect: release times have stretched out, more and more things go into each release, features and fixes are rushed in for fear of missing the release train, and the situation only gets worse. Nobody has confidence in success, so the test teams spend their working lives not adding value to the process but ensuring that they and their immediate colleagues don't get blamed when it goes wrong.

They brought us in on an initially ill-defined brief of "digital transformation". They had no clear idea of what that meant to them, and it was pretty obvious to us from an early stage that a big part of our work was going to be not the software that we deliver but how we deliver it, and whether we could use that example to help them change their processes and culture.

We wanted to run a session in the inception that would make them realise that they need to embrace a devops culture with regular releases, effective pipelines and rapid response to issues. We didn't have any experience of running such a session because none of us had ever needed to. Most businesses accept as fact that CI and build pipelines are a good thing.

We started with the notion that we wanted them to come to the "correct" conclusions. The client's staff had at various times expressed various emotions such as exasperation, scepticism and doYouNotThinkWeveTriedThisIsm. So we were acutely aware that whatever conclusions were reached had to be reached by them, not imposed by us.

The DAFF Loop

As ever, we deployed magic whiteboards on the walls. We labelled 5 of them with our 5 chosen bad events:

  • Run-time exception (this had to be explained to some. I would name this differently if I did this again)
  • Bad check-in
  • Performance bottleneck
  • Site unavailable
  • Customer dissatisfaction with digital product

We surrounded the scenario label with a circle (borrowing heavily from OODA) and labelled this with Discover - Analyse - Fix - Finalise (which I'll call the DAFF loop).

Discover-Analyse-Fix-Finalise for Bad Check In
We then armed our workshop attendees with orange stickies and asked them to describe parts of the current process by attaching stickies near the appropriate part of the DAFF loop. After a suitable discussion of the current processes (I can't remember if we rotated groups or left them where they were), we asked them to add problems with the current process on red stickies. Finally, the punchline was to use yellow stickies to come up with suggestions on how to improve the current process.

How did it End?

We were hoping that the discussion we had after each stage, and the suggestions we arrived at, would coalesce into some or all of: continuous integration pipelines, build monitors and production monitors, a "build it, run it" devops-style mentality, customer focus and all of the other goodies that we expect in a well-run organisation. Happily, we found that the exercise worked out almost exactly as we'd wanted it to. With minimal prompting and poking from our team of consultants, our client came, independently of us, to the conclusion that all of the above, and other great stuff besides, are things for their IT function to aspire to.

Conclusion

Obviously I don't have a massive sample size from which to draw experience but we did get a 100% success rate on guiding our client to the desirable conclusions. A very satisfying afternoon at the office. If I'm ever involved in a similar exercise I'll update this post.



Technical Debt Quadrant

I was recently reading Martin Fowler's article on the Technical Debt Quadrant, first published in 2009. In it he talks about how technical debt as a term is powerful because it uses a metaphor, money, that all businesses understand.

There are, as ever, two dimensions in the quadrant: deliberate v inadvertent (accidental), reckless v prudent. It is easy as a software professional to see how each of the 4 types of debt in the quadrant can come about. The only section of the quadrant that requires a little thought is "accidental prudent" debt. This is explained as debt which arises because of the nature of development. We can't always know what the best design decisions are at the time that we are forced to make them. We can defer decisions until the last responsible moment, but we often still will not have all of the information that we will later have. Thus, prudent decisions can turn out to be sub-optimal. So at some point in the evolution of a project we can have technical debt that was prudently accumulated (because we made the best design decision given the information available at the time we were forced to make it) but accidental (because we didn't think we were taking on debt at the time).

Now, my reason for this post is the way that Martin's article ends. Essentially he says that this type of debt is difficult to explain to the business because the analogy between technical debt and monetary debt breaks down. I thought about this for a little while and put forward the following analogy. In this day and age (and I accept that Martin's article was originally written in 2009, before bail-ins were a thing) it is possible to finance your company in a sensible fashion only for a financial institution to demand a bail-in from its depositors (treating them as investors). Maybe I'm labouring the point a bit, but would this not be a case where you incur a financial debt, or at least a liability, as a result of a prudent earlier decision?

Friday 5 August 2016

Finding a Project for the language

My biggest reason for joining ThoughtWorks was to learn new stuff. I wanted to do something different after 15 years of Microsoft tools. As well as being forced to learn new stuff in order to deliver for our clients, it has been great to be in an atmosphere where everybody is learning, and keen to learn, all the time. The culture of continual improvement is great to be a part of, and I've never purchased, and read, so many non-fiction books as I have in the last year. Many of them have been business-focused, in keeping with my goal to understand our clients and our potential clients better, but many more have had a technical focus.

My last project was almost entirely JavaScript based, using React and Flux. That gave me a great grounding (though I am far from expert) in those technologies. I certainly understand, and respect, JavaScript much more than I did a year ago. I've moved on from the impression I had at my previous employer that JavaScript exists only for HTML/UI developers to add functionality to web pages in a way that completely breaks any semblance of code quality and testability in the solution.

My current project, a much bigger undertaking than the previous one, uses Java for the back ends, a UI framework based on Backbone, various different scripting techniques in the pipeline (which we are trying to rationalise into a set of Ruby scripts) and one or two other delightful bits and pieces such as Dropwizard and Red Hat's OpenShift. So here I'm learning about what banks and big business like to call "enterprise" languages and tools.

What I've really wanted to get my teeth into from the start is something really new. Java is pretty similar to C# in looks and capability. JavaScript is functional at heart, but it seems to want to follow rules of syntax that make it look like C, C# and everything else. Certainly for me, the visual similarity of JavaScript to those other languages didn't help at all in understanding that it is at heart a functional language and hence very, very different from Java and the C families. So what should I be looking at? Something that is functional, (hopefully) cool and looks very different from those other familiar things. So, ever since I spent time waiting to be on a project, I've been wanting to try Clojure seriously.

So I've bought the books, I've worked through the online exercises and I've tried to get something going to enable me to learn Clojure seriously. Unfortunately, as with many other things, I find it very hard to carry on with the learning beyond the initial excitement if I don't have some kind of project to work on. Particularly if, as is now the case, the geography of the gig is such that I have very little time in the evenings to do anything other than have dinner, put the girls to bed and fall asleep.

Like the fabled London buses, you wait a year for one to come and then two (should it be three, should I be expecting another?) come along at the same time.

First, the more business oriented idea. On my current gig I'm working for a financial institution. One thing that financial institutions do is lend money to people. When you lend money to somebody you need to make a decision on how risky that person is. This helps you to decide firstly whether you want to lend them money at all, and secondly what level of interest you should charge that person. In the dim and distant past such decisions were taken on the basis of face-to-face interviews, recommendations and other very subjective decision points. Nowadays there are a number of rules, collectively called a strategy, that execute against relevant data. Such rules are created, stored and executed by a decisioning system.

Given that strategies and how they are executed are so important to financial institutions, you would think that the software in the market to support this process would be of very high quality. It isn't. It is nothing short of dreadful. I won't mention the product, or the manufacturer, because I don't want to be sued, but suffice it to say it would not be hard to do a much better job. This seems like a perfect candidate for an open source project using Clojure. I wrote a simple decision system a few years ago in C# with a SQL database holding the metadata for the rules. It strikes me that a functional language with a non-relational data store would be a much better fit for these requirements. I haven't kicked this project off yet but watch this space.
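To make the idea a little more concrete, here is the sort of shape I imagine the Clojure version taking: rules as plain data that could live in a document store, a strategy as a collection of rules, a tiny interpreter and a pure scoring function. This is only a sketch to illustrate the thought; the rule names, penalties and the acceptance threshold are all invented for the example and have nothing to do with any real product.

  ;; A sketch only: rules are data, a strategy is a vector of rules,
  ;; and evaluation is a pure function over an applicant map.
  (def strategy
    [{:name "minimum income"   :rule [:>= :income 20000] :penalty 50}
     {:name "existing arrears" :rule [:= :arrears 0]     :penalty 100}
     {:name "age of applicant" :rule [:>= :age 21]       :penalty 30}])

  (defn passes?
    "Interpret a rule of the form [op field value] against an applicant map."
    [[op field value] applicant]
    (let [ops {:>= >=, := =, :<= <=}]
      ((ops op) (get applicant field) value)))

  (defn score
    "Start from 100 and subtract the penalty for every rule the applicant fails."
    [strategy applicant]
    (reduce (fn [total {:keys [rule penalty]}]
              (if (passes? rule applicant) total (- total penalty)))
            100
            strategy))

  (defn decision [strategy applicant]
    (if (>= (score strategy applicant) 50) :accept :refer))

  ;; (decision strategy {:income 25000 :arrears 0 :age 34}) ;; => :accept

The appeal, to me, is that the strategy itself is just data, which is why a non-relational store for holding and versioning strategies feels like such a natural fit.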

And so to the more fun-filled project. I took up running a little over two years ago now, and a big part of modern running is GPS watches for recording all your training sessions. My first one was a basic Garmin unit and I replaced that with a more sophisticated TomTom last year. The functions and features of the watches themselves are out of scope here (and both have their unique frustrations). The scope of this idea is the website that each uses to display the workouts.

I have so far used Garmin's website, TomTom's website and Strava (independent of any manufacturer) for my analysis. Most serious runners and cyclists have a presence on Strava; it is an analysis tool as well as a social network for people who run and cycle. The big problem with it is that all the best features are premium. The problem with the watch manufacturer sites is that they aren't all that good. They already have your money, obviously. So after a conversation with a running club mate, in which he told me he had considered parsing the gpx files himself, I thought, why not? I looked at the file, a very simple XML format, and realised it is very possible. So just a little open source thing to build a front end for analysing running data in the form of gpx files. Not so obviously a Clojure application, but more fun than the other one.
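For what it's worth, the parsing side really is as simple as the file suggests. Here is a rough Clojure sketch using only clojure.xml from the standard distribution; it assumes a normal gpx file whose trkpt elements carry lat/lon attributes and ele/time children, and the file name at the end is made up.

  (require '[clojure.xml :as xml])

  (defn- child-text
    "Text content of the first child of node with the given tag, or nil."
    [node tag]
    (->> (:content node) (filter #(= tag (:tag %))) first :content first))

  (defn trackpoints
    "Pull every <trkpt> out of a gpx file as maps of lat, lon, elevation and time."
    [path]
    (for [node (xml-seq (xml/parse (java.io.File. path)))
          :when (= :trkpt (:tag node))]
      {:lat  (Double/parseDouble (get-in node [:attrs :lat]))
       :lon  (Double/parseDouble (get-in node [:attrs :lon]))
       :ele  (some-> (child-text node :ele) Double/parseDouble)
       :time (child-text node :time)}))

  ;; e.g. (count (trackpoints "morning-run.gpx"))
  ;;      (apply max (keep :ele (trackpoints "morning-run.gpx")))

Once the trackpoints are a seq of maps like that, the pace and elevation analysis becomes ordinary sequence manipulation, which feels like exactly the kind of problem Clojure is good at.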

We'll see which one gets usable first. I'm not holding my breath but at least now, with tangible goals beyond learning for learning's sake, I think I have a chance.