Thursday, 4 June 2020

Estimates v Priorities

Why Do You NEED That Estimate?

I have previously written, and talked, about my discomfort whenever I, or a team I am part of, am asked to provide estimates. My reasons for discomfort have chiefly been:
  • How will this estimate be used? In my experience, no matter how much you emphasise that this is the best guess you can come up with today and that the guess will become steadily more accurate, the date you guessed right at the start ends up becoming a pseudo-contract, to be used in a later blame storm about why we (a) under-delivered or (b) padded our estimate. Whilst I accept that the culture of the organisation will dictate how an estimate is used, my point is that it is important to understand whether your "estimate" might (or is likely to) evolve into a "commitment". If you think this is likely, push back on being asked to estimate.
  • Why does the business think it needs an estimate in the first place? I have rarely seen a powerful argument as to why an accurate estimate is more valuable than breaking the backlog into its smallest valuable units and then doing them in order of (potentially constantly adjusting) priority.
I think there is a good additional point to be made here (thanks to my colleagues Chris Bimson and Matt Belcher for pointing this out when reading my draft). Before even considering the above questions, it may well pay to ask "what is the question that the estimate might help to answer?" This is entirely consistent with one of my mantras: "tell me the problem to which this is the proposed solution". An estimate is, of course, part of some solution domain, so it follows that the person requesting it must have some higher-level problem which they think an estimate will help to solve. It could be that there is a better answer to that problem, whatever it is, which would make the request for an estimate go away. In a healthy culture I would at least expect the person requesting the estimate to engage in this conversation.

The Iron Triangle

In project management we often talk about the Iron Triangle. In the original version, quality was assumed to be constant, and any change to one of the three constraints (time, cost and scope) necessitates a change in the others. In other words, you have a fixed "budget" across the related constraints.

The version I usually refer to for software delivery says that given a fixed throughput (a constant team capacity) and an unvarying level of quality, you can fix either the scope or the required time for the work (assuming some kind of accurate capacity planning technique), but you cannot fix both. This is, of course, the problem that many deliveries encounter when they fix scope in advance and attempt to force a development team to "commit" to a delivery date. The Iron Triangle tells us that this isn't possible.
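
As a tiny illustration (all numbers invented), here is the arithmetic the Iron Triangle forces on you once capacity and quality are held constant, in a minimal Python sketch:

# A minimal sketch of the software Iron Triangle: with capacity and quality
# held constant, fixing the scope determines the earliest possible date, so
# a separately demanded deadline is not a free variable. Numbers are invented.

def weeks_needed(scope_points: int, capacity_per_week: int) -> float:
    """With fixed throughput, delivery time follows directly from scope."""
    return scope_points / capacity_per_week

scope = 240        # story points fixed up front (hypothetical)
capacity = 20      # points the team reliably completes per week (hypothetical)

print(f"Earliest delivery: {weeks_needed(scope, capacity):.0f} weeks")
# Fixing BOTH this scope and, say, an 8 week deadline demands 240 points
# from a team that can only produce 8 * 20 = 160 points in that time.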

The CAP Theorem

An equally well known triangular constraint based theorem is the CAP theorem. This theorem states that it is impossible for a distributed data store to provide more than two of these three guarantees:
  • Consistency (every read receives the most recent write or an error)
  • Availability (every request receives a non-error response, though not necessarily the most up-to-date data)
  • Partition Tolerance (the system continues to operate despite an arbitrary number of dropped or delayed messages between nodes)
Given that every modern database must guarantee partition tolerance (because today's cloud infrastructure cannot guarantee that partitions won't happen), the CAP theorem in practice reduces to an acceptance that any data store can guarantee consistency or availability, but not both.
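
A toy sketch of that reduced choice (not any real database's API; the class and names are mine) might look like this:

class PartitionError(Exception):
    """Raised when a consistent answer is impossible during a partition."""

class Replica:
    """A toy replica that must choose between the C and A guarantees."""

    def __init__(self, prefer_consistency: bool):
        self.prefer_consistency = prefer_consistency
        self.local_value = "possibly-stale-value"

    def read(self, partitioned: bool) -> str:
        if not partitioned:
            return self.local_value  # peers reachable: fresh data is possible
        if self.prefer_consistency:
            # CP choice: return an error rather than risk serving stale data
            raise PartitionError("cannot confirm the most recent write")
        # AP choice: always answer, accepting the data may be out of date
        return self.local_value

cp, ap = Replica(prefer_consistency=True), Replica(prefer_consistency=False)
print(ap.read(partitioned=True))    # answers, possibly with stale data
try:
    cp.read(partitioned=True)       # refuses rather than serve stale data
except PartitionError as err:
    print(f"error: {err}")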

Unplanned Work

Unplanned work happens all the time, to all sorts of people and teams, and it is the enemy of accurate forecasting. Some roles have a reasonably well-understood and repeatable method for forecasting a sensible allowance for unplanned work, in which case they may call it "contingency". Building projects, for example, will often add a fixed percentage for unplanned work and call it exactly that.

Of course, no amount of contingency or forecasting of unplanned work can substitute for the real solution to unplanned work, which is to make your environment more predictable so that you do much less of it.

Why Might you Plan Ahead?

I prefer this question to "why do you need that estimate?" If I have to ask a business owner "why do you need that estimate?", it means that somebody has asked me for an estimate. Often the reply comes back that "we need certainty over planning" or something similar. Leaving aside the obvious temptation to shoot this argument down using 5 Whys ("why do you need certainty over planning?" is usually far enough for anybody to stumble over their own dogma), I will assume that there is a legitimate reason to want certainty over planning.

A desire to have some kind of certainty over planning implies that the backlog represents a relatively small number of large things that are considered to return large chunks of value only when they are complete. Such items are often called "features" or "epics".

Why Might you Always do the Most Important Thing Next?

In some circumstances it is legitimate to ask, when you have capacity to take on more work, what is the most important thing for me to do now? If the most important thing to do can change, and every item on your list of work can be assumed to be independent of every other piece of work and deliver some kind of independent value, then it makes little sense to plan things beyond asking "what is the most important thing for me to do now?"

A Conjecture

Drawing inspiration from the Iron Triangle and the CAP theorem, and noting that each describes a system of three constraints in which fixing one forces a choice between the other two, I have constructed the following conjecture:

Assuming that the capacity of a delivery team responsible for delivering items in a single backlog remains unchanged, you can either manage your backlog to optimise for doing the most important thing, or you can optimise it for accuracy of prediction of delivery dates. Any given backlog being serviced by a single team of unvarying capacity cannot be optimised for both the most important thing and accuracy of prediction.

Optimising for the Most Important Thing

You must accept that unplanned work at any point could change the definition of the most important thing, potentially at very short notice. 

If you optimise for doing the most important thing at any given point, you can only give a sensible delivery date for something that is already in progress. Anything else, even if it is at the front of the queue, is at risk of being deprioritised and relegated down the queue. So the best you can ever say is, "this is currently at the front of the queue; if nothing changes priority it will be live in X days". For anything further down the queue you will have to make a judgement, based on your experience of the level of unplanned work your team typically encounters, and give the asker a confidence interval of delivery dates.
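
One way to produce such a confidence interval is a simple Monte Carlo simulation over historical weekly throughput, which already embeds the team's typical level of unplanned work. A minimal sketch (all numbers invented):

import random

# Items finished per week, taken from the team's history (invented here).
weekly_throughput = [3, 5, 2, 4, 4, 1, 6]
queue_position = 10   # items ahead of, and including, the one asked about

def simulate_weeks() -> int:
    """Play out one possible future by resampling historical weeks."""
    done, weeks = 0, 0
    while done < queue_position:
        done += random.choice(weekly_throughput)
        weeks += 1
    return weeks

runs = sorted(simulate_weeks() for _ in range(10_000))
p50, p85 = runs[len(runs) // 2], runs[int(len(runs) * 0.85)]
print(f"50% confidence: {p50} weeks; 85% confidence: {p85} weeks")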

Optimising for Accuracy of Prediction

How can I optimise for accuracy of prediction? Scrum, in its purest (and arguably most annoying) form, seeks to optimise for predictive accuracy, at least within a "sprint". The idea is that you have a good idea of your team's throughput for a given period (often 2 weeks) called a "sprint", and you "commit" to delivering that amount of work in the current period. But herein lies the catch. After declaring your sprint "commitment", Scrum says you cannot change it. Thus, you must not allow changes in priority, and you cannot allow your team to carry out unplanned (and by its nature unpredictable) work.

Of course, Scrum prescribes some fixed interval for planning a sprint, which seems to me to be linked to old-fashioned notions of fixed releases at predictable intervals. I would suggest that if you are following Scrum, you should consider shortening your sprints continually until a sprint contains a single item. At that point you have moved from Scrum to Kanban with no loss of predictability, provided that no unplanned work is taken on.

Conclusion

You must make a choice between speed of delivery of the most important item in your backlog and predictability of longer term delivery dates.

As a corollary to the above, if your team is expected to undertake unplanned work, you can never accurately estimate delivery dates.


Wednesday, 4 March 2020

Running a Futurespective

What is a Futurespective?

Most people will nowadays understand, at least broadly, what a retrospective meeting (usually abbreviated to retro) is all about. There are many different flavours of retro, and having worked for ThoughtWorks for 5 years I met a fair few, but essentially you are looking back at a period of work and working out what you did well, what you could have done better and what you might do to improve your work in the future. But what is a futurespective?

The Goal

The goal of a futurespective is to imagine a future state, one would hope a desirable one, in order to then discuss what can be done to help us get there. So instead of asking "what did we get wrong?" about what got us to where we are (which is usually where retros end up), we are asking "in order to get to where we want to be, what do we need to do?" In this way a futurespective is more goal oriented than a retro can ever be.

Time Frame

It is also worth noting that the time frame under discussion is usually much bigger for a futurespective than for a retro. A retro will generally focus on a period of weeks (sometimes we even talk about a sprint retro, which narrows down to the previous sprint, usually two weeks) or possibly months. At ThoughtWorks we sometimes did things like "milestone retros" during long running engagements, or "project retros" if the project in question was a much shorter thing. In a futurespective the focus tends to be much more strategic, and therefore the question is likely to be along the lines of "imagine a year from now... what would we need to do...?"

Different Techniques

I've been involved in a few different types of futurespective. Depending on the audience and the motivation for the workshop, you may choose a different technique. I've been involved in discussions framed around "what would good look like a year from now for ThoughtWorks at this company?", and I've also been involved in "imagine a year from now, the program we are kicking off is a success; what does that look like?" Right now, I'm talking about the second type of question. I try not to think about the first question in isolation, preferring a well aligned partnership with our clients.

News Headline

Our client has engaged us to talk about software modernisation of a particular system that is causing problems. My concern all week has been that we anchor any modernisation goals within a framework of business value that is understood, articulated and shared. All too often I've seen technology-led change fail because the business value is not well understood outside of the technology group.

The technique that I facilitated today was the "news headline" technique. I asked the group to imagine that it is a year from now and the program has been delivering on its aims. What might some newspaper (or industry organ) have on its front page to report the remarkable success of our client over the previous year? I asked them, in groups, to collaborate to produce a front page with a headline, 3 or 4 bullet points that expand on the headline, perhaps a quote from "an industry insider" or an executive of this company, and maybe a picture. We left them at it for a timeboxed period of 20 minutes. It was important to ensure that each group reflected a cross section of the competences within the room. For example, the two business stakeholders, the two architects and the two product owners were split across the groups.

The Discussion

At the end of the 20 minutes we had two nice front pages (on A3 paper) reporting on this ideal future state. I'd love to share the photos here, but they all have the client's name on them, so I can't do so without breaking client confidentiality. The discussion we had enabled us to pull out 6 bullet points describing the aspirational future state from a high level, which we used to inform the subsequent discussions about the work we should do and how we should go about doing it.

Outputs

Essentially, (and with some redaction to preserve confidentiality) we learned the following things from this exercise:
  • (our client wants to be) responsive to change and therefore improve its time to market
  • (our client wants to be able to) deal directly with its end users rather than through intermediaries
    • So they need to make it easier to buy their stuff
    • The clients will save intermediary fees
    • (our client) will need to somehow provide directly the capability that the intermediary organisations have been offering to their customers
  • (our client wants to be able to) approach more partners and therefore needs to make its internal functions more scalable to achieve the capacity to make this possible
  • (our client wants to) compete with some massive players in its business space; this is currently not possible for several reasons which I can't go into here
  • (our client wants) to have a ubiquitous internal language to describe its product function so that it is possible to share understanding better both internally and externally

Next Steps

The outputs are useful to us on a few dimensions. Firstly this is helping us to frame what we may do in terms that mean something to the business. I need to be able to go to an executive stakeholder in a few weeks' time with a vision and a strategy that says "we should do this technology stuff in order to enable this business goal that has been identified".

Secondly, this exercise narrowed the scope of the subsequent discussion over what areas of improvement might be relevant. It helps us say "why would we want to improve that area of your estate when changing it won't contribute to these higher level goals?"

Thirdly, the workshop was fun, it aligned people in the room, and it gives us some cool photographs to use in our presentation back to the client next week when we talk about what we learnt, what we recommend and how we can help them to achieve it.

Monday, 24 February 2020

The Selfish Meme - A Conjecture

The Selfish Gene

The Selfish Gene was first published in 1976. Written by Richard Dawkins, probably now more famous, or at least more controversial, for his 2006 work The God Delusion, its central theme is the notion that animals and plants are no more than "survival machines" manipulated by genes to behave in a way that maximises the genes' chance of persisting into subsequent generations. Clearly, individual animals and plants cannot be immortal, but genes can be. Certainly, genes persist far longer than individual survival machines.

Gene Alleles

Gene alleles are alternative versions of a gene that compete with each other to persist into the next generation. They drive behaviours that are somehow mutually exclusive of one another. The example given in The Selfish Gene is that of a gene that causes aggressive behaviour in animals that fight their own species for resources versus one that causes passive behaviour in the same confrontations.

Dawkins and Memes

In The Selfish Gene, Richard Dawkins coined a new word: "meme". His original definition is (I'm paraphrasing) "an idea, behaviour or style that spreads from person to person within a culture". Chapter 11 is entitled "Memes - the new replicators". In this chapter Dawkins describes how memes can be thought of as defining culture to an extent; he is using culture to mean the culture of a society. Different cultures around the world and throughout human history have evolved ways to pass knowledge from one generation to the next such that the culture persists even though the individuals within it clearly do not.

Modern Memes

Most people probably now think of a meme as a thing, designed to be amusing in some way, that circulates, possibly "going viral", around the Internet. Some of my favourites would be Disaster Girl, XZibit - Yo Dawg (see one I made below) and the classic scene from Downfall. This last one is a sub-genre of meme where you put subtitles on some scene which are hopefully amusing in some way but are clearly not the originally intended dialogue.

My XZibit - Yo Dawg Effort

Apparently XZibit is some kind of musician, a rapper I believe. He was also the presenter on a TV program, which I never watched, called "Pimp My Ride". It is from that program that I understand the Yo Dawg meme originates (I'm happy to stand corrected if I'm wrong; Know Your Meme doesn't talk about the origins of the meme, just the recursive structure). What I love about this meme is that "correct" use of it demands a recursive usage. I didn't immediately appreciate this and was told by a colleague that my use was incorrect and it had to include some kind of recursion. At the time, we were working on a Clojure implementation for our client. Imagine my joy then, when a few weeks later, I found out that the Clojure defmacro, which I had assumed was a keyword, was in fact itself a macro. I saw my opportunity to correctly use the Yo Dawg template and came up with something like this (my original is lost in the ether somewhere, so this is my best effort at a reproduction):


The Beginning of Infinity

David Deutsch's "The Beginning of Infinity" was first published in 2011. It is a study of epistemology. This isn't what I expected when I bought the book (I was expecting something about quantum physics or quantum computing, and I never read the blurb), but it is still one of the best books I've ever read. Deutsch puts across an interesting conjecture. Running with the Dawkins idea that memes can define culture, he argues that the rigid rules of the Spartan culture, passed from generation to generation as memes, eventually placed too many constraints on its ability to innovate. Thus the Athenian culture, with its memes around learning and progress, was finally able to conquer and all but destroy the Spartan culture.

Transformation and Culture

So taking the two ideas together, Dawkins's idea that memes can define culture and Deutsch's idea that these memes can eventually become counterproductive or damaging, I began to wonder whether the organisations in which I have consulted can have their culture classified or defined by their memes. If so, then perhaps by introducing competing memes (alleles) and somehow altering the value that those memes confer on their vectors, I could have a model for driving positive change.

Conclusion

I don't have a conclusion yet. I've been working on this idea and experimenting with clients that I've worked with. I don't yet have enough data points to draw strong conclusions but it has been fun and I've written a talk about the subject which I'll be presenting for the first time at Aginext in March this year. I was also asked to write an article for InfoQ on the subject. As soon as that goes live I will post a link to it from here.

Sunday, 29 September 2019

Quantum Supremacy and Cryptography

The story broke around September 20th that Google was claiming quantum supremacy. It merited not much more than small footnotes in the popular press, but it was enthusiastically received in the technology press, which makes me think it is time to start thinking about a world after RSA is dead.

In 1994 Peter Shor published his paper “Algorithms for quantum computation: discrete logarithms and factoring”. At the time quantum computers were nothing but a theoretical figment of many fertile imaginations. Fast forward to 2019, with Google claiming quantum supremacy, and we should be taking quantum computers very seriously indeed.

“Quantum supremacy” means (if verified) that Google has a real quantum computer that can solve a real world problem more efficiently than any classical (digital) computer. If the claim turns out to be true, the chances are that the cost of using this computer is astronomical, and certainly beyond the means of any individual or probably any corporation. But so were IBM’s machines in their early days.

What we experienced back in the early digital age was what we should expect to happen now. As soon as practical uses exist, money will pour in and improvements in the technology will be rapid, probably exponential. Given that most research (outside of secret government research) has been funded by financial institutions, it is a fairly safe bet that those same institutions will be racing one another to translate quantum supremacy into financial market supremacy.

So why should we be concerned about this and what does it have to do with encryption? Well, Shor’s algorithm factorises numbers. If you can factorise the product of two large prime numbers in reasonable time, you break RSA and related cyphers, which protect pretty much all of the world’s encrypted messages today. And that is exactly what Shor’s algorithm promises.
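
To see the link concretely, here is a toy RSA example in Python with deliberately tiny primes (real keys use primes hundreds of digits long; the modular inverse via pow needs Python 3.8+):

from math import gcd

p, q = 61, 53            # the secret primes (absurdly small for illustration)
n = p * q                # 3233, the public modulus anyone may see
e = 17                   # the public exponent
phi = (p - 1) * (q - 1)  # computable only if you know the factors of n
assert gcd(e, phi) == 1  # e must be invertible modulo phi
d = pow(e, -1, phi)      # the private exponent follows from the factorisation

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
recovered = pow(ciphertext, d, n)  # only a factoriser of n can derive d
assert recovered == message

# Shor's algorithm finds p and q from n in polynomial time, so a large
# enough quantum computer recovers d, and the cypher is broken.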

We are some years away from a quantum computer powerful enough to break current RSA keys, but if quantum supremacy is proved, you can be sure that time is closer than you think. And don’t forget, your messages are probably being stored by several government agencies the world over right now. Your messages are secure now, but if you care about them still being secure 5 or 10 years from now, you should be asking why we aren’t using quantum safe cryptography already.

Wednesday, 14 August 2019

Why Codurance?

What Kind of Role Would you Like?

As I've said many times before, we are very lucky to work in the industry that we do. Even if we occasionally moan about what we get paid (and I certainly have done), it is very much a relative moan and not an absolute one. My point is that most of us working in the technology sector who are any good at what we do can generally command a salary that is way above average and therefore generally "enough", whatever that means. So when I was looking for a new role this spring and early summer, I knew that I could afford to select on more than just pay. As I touched on in this post a couple of weeks ago, the fact that you can Google me and find interesting stuff made it even easier for me to get interviews for interesting things.

To be or not to be a Consultant

Very early on in my search I realised that I had a decision to make. Should I continue to be a consultant (and if so, in what type of consultancy), or should I put a stake in the ground as some kind of in-house development leadership person? The pros and cons of the job, as I saw them after 4 years of consultancy, were pretty clear:

In Favour of Consultancy:

  • The variety of work is interesting and you get to learn about far more different technologies.
  • If you hate the gig you are on (i.e. your job) you can change to a different gig relatively easily. The ease with which you can change will be determined by the specific consultancy you are in, I guess. Effectively though, you can get a new job without the pain and uncertainty of having to look for a new job.
  • There is a certain freedom in consultancy, knowing that you can't be sacked or "managed out" for being too daring or asking searching questions. I certainly played on that a few times, where the conversation with a worried client might go "don't worry, if anybody asks, send them to me, I'll take responsibility, they can't sack me".
  • You get to do some travel to some places you may not have been to before.
  • You get to look at really interesting problems and when you solve them (or at least relieve some of the pains) you can move on to another equally interesting problem.
  • It never gets comfortable and therefore boring, and if it does, you can ask to move on.

Against Consultancy

  • You learn a little about a lot of things but don't always get the chance to learn a lot about any specific thing.
  • You start a lot of things and you may finish the odd thing but it is very rare to both start and finish the same thing.
  • The constant change of scenery can be unsettling and there are many periods of onboarding which can be wearing.
  • Sometimes you get sent away from home and don't get to spend time with your family.
  • Travelling on aeroplanes definitely stops being exciting.
  • Positions of influence are common but positions of true responsibility are rare.
I could go on, but the point I'm making is that it was a tough decision for me. I really enjoyed working as a consultant and I learnt loads in four years.

What was on the Table?

I interviewed for 4 or 5 roles outside consultancy. These were all either "head of", "VP of", "director of" Engineering, Development, Software type things or, in one case, a CTO role. I told the recruiters that I wasn't interested in anything that had a scope of less than several teams. So basically, in any small to medium sized organisation this means "head of" or CTO, in a larger organisation it could mean "architect" or "head of" or whatever they use internally. Based on what I enjoyed doing back in my Viagogo days and what I enjoyed doing during my 4 years at ThoughtWorks I think I had a reasonable idea of the type of company that I would like to work for (if I was to go to an in-house role) and, more importantly, the type of organisation I didn't want to work for.

Things I'd Like to work for

  • Somewhere with significant organisational problems that recognises it has such problems and has a good appetite to fix them.
  • A place that has been around for a while as a bricks-and-mortar business, has loads of legacy systems, but realises the need to move on and has the courage to appoint the right people to do that.

Things I wouldn't like to work for

  • Brilliantly run startups or scale-ups that have great technology, great engineering capability and practice and no significant organisational problems.
  • Companies that need to improve but either don't realise that they need to change or don't have the courage to do what needs to be done.
So what I really wanted was something with issues but a mandate to fix them. This is, for the most part, the profile of the organisations that I consulted into for ThoughtWorks. I did have experience of one place where the CEO was blissfully unaware that the company was heading (still is, as far as I can tell) for a massive car crash at some point in the next few years, and was thus unwilling to change what he thought was a successful course. That was an example of a company doing well on its bottom line despite its practices and capabilities, not because of them, and it was certainly a lesson for me.

Well funded Startup

One company I interviewed with, up to the final interview stage, was an extremely well funded startup. Its parent is a well known multinational group. This new company was created to implement a brand loyalty scheme across the whole of the group. So it is a greenfield thing, but there would be lots of potential pain in integrating the new systems with all the legacy of the companies within the group. The role here was "VP Engineering", reporting to the CTO. The initial responsibility would be building a team and working on the MVP of the new solution.

Well Run Tech Company

The other company I went quite deep with was a fairly well known company with a well known web presence. This company is renowned for good engineering practice and is often cited as a "centre of excellence" (I've never quite understood exactly what this means; it seems to be a bit of a self nominated, self selecting thing). In any case, I started to lose interest somewhere between the third and fourth stage, as I wondered exactly what type of deep problems they might have that would keep me interested.

Consultancies and Organisational Takeover

I spoke to a few consultancies, some bigger (and uglier) than others. The more I spoke to, the more I realised how lucky I had been in working for ThoughtWorks. It seemed that once a consultancy gets beyond a certain size, whatever principles it started out with get thrown away and replaced with a simple goal: rinse as much money from the clients as possible. I had heard this of really big players in the past ("organisational takeover" is apparently a real business model for some) but I, somewhat naively, assumed it was the preserve of only the biggest few.

Fake Agile

The most disappointing thing about consultancies in general is the promise of being an "Agile Consultant". There are many of these around, a lot of them of the smaller variety. I can't speak for all of them, but I spoke to people in one or two and specifically asked them how they go about selling Agile to their customers. Sadly, I didn't get any kind of good answer. I was left, in every case, with the overwhelming impression that these consultancies were not only selling snake oil to unsuspecting clients but that they really didn't even get it themselves.

Worrying Conclusion

So after speaking to a few consultancies and going quite deeply into some interview processes for non-consultancy work, I was left with the worrying conclusion that I would find it hard to find a company to work for that was sufficiently broken to need me (or keep me interested) but that also had sufficient executive support for the work that I knew would need to be done. On the other hand, I was struggling to find a consultancy that I could feel comfortable working with.

Meeting Codurance

I spoke to a recruitment consultant some time in May about a company called Codurance. I was starting to get disillusioned with my search and was resigned to a long summer of frustration. The consultant made all the right noises about this fairly small company and their ethos, largely based around software craftsmanship. He also spoke about how Codurance has no internal hierarchy, an interesting approach to innovation and a progressive set of policies on salaries. So I agreed to find out more.

Software Craftsmanship

The <title> tag of the Codurance website includes the phrase "Software craftsmanship and Agile Delivery". It is clear that Sandro feels quite strongly about the importance of software craftsmanship. Indeed, in the early chapters of his book, The Software Craftsman, he argues that software craftsmanship should be a core part of Agile delivery but that this is often overlooked in the belief that if you follow the right process everything will work out. I can't argue with this assertion at all. For years I have worked under the assumption that code quality is essential to the long term health of a system. I love the emphasis on craftsmanship (although I wonder if we can invent a term that is a bit more gender agnostic) and I do agree that it can be a forgotten element of effective delivery.

London Software Craftsmanship Community

I was even more encouraged when I discovered that Sandro founded the London Software Craftsmanship Community, a meetup group. After my experiences of the previous few years I realise how important it is for a company not just to make the right noises about culture and community but to actually do the right things, and believe in the right things, as well. In addition to the meetup group, Codurance also runs the Software Craftsmanship London conference every year at Skillsmatter in the autumn. So it was immediately apparent to me that Codurance does not just talk the talk but quite clearly it walks the walk too.

Agile Delivery

The other part of the title tag is Agile Delivery. In 2019 I would expect anybody doing software delivery to at least claim that they are Agile (or agile). I know from my travels that fake agile is everywhere, and I was keen to understand what Agile means to Codurance. So when I spoke to Steve Lydford, head of PS, before being invited in for a face to face interview, it was very pleasing to hear that he had watched a video of my Agile is a Dirty Word talk, in which I vent (and despair a bit) about the prevalence of fake Agile practitioners, fake Agile methodologies and fake Agile consultants. That formed the basis of a good discussion between us, which convinced me that Codurance as an organisation properly understands what Agile (and agile) means and that my experience of Agile delivery would be appreciated.

The Opportunity

Codurance is a much smaller company than ThoughtWorks, which is itself pretty small compared to the global players that most people will have heard of. We are starting to grow organically, and the current challenge, or opportunity as I like to call it, is to win more work that we would consider partnership type work rather than "pair of hands" staff augmentation. We haven't got much experience of creating fully cross functional teams, so how do we win and retain this type of work? That is the challenge we are facing now, and my experience before Codurance seems to have been what most interested them in me.

Self Managing "Teal" Organisations

Steve recommended a book to me, Reinventing Organisations, which describes a relatively new type of organisation that has evolved in recent decades. The author calls these "Teal Organisations": essentially post hierarchical organisations based on management through self organisation. As I remember it, he used colours to categorise different types of organisation merely to avoid the names implying any kind of meaning. Having worked at ThoughtWorks, I know all about what it means to work in an organisation without hierarchy, and I would absolutely not be able to work in an organisation that had anything other than a flat structure, so this was all encouraging.

But there is a difference between having a flat structure and truly buying in to self organisation. Certainly I have encountered organisations that claim to have a flat structure but really have a hidden hierarchy, with command and control everywhere. My early conversations with Steve suggested that Codurance was genuinely bought in to self organisation.

Innovation Circles

Through reading Reinventing Organisations and talking to people who have been involved in organisations that are trying, to a greater or lesser extent, to be self organising, the problem of how to change policy and ways of working comes up again and again. There are many different solutions, and I would recommend anybody read the book to find out how some organisations deal with this. In Codurance, if you have an idea to change something you start an innovation circle. This has to be open to anybody who is interested, and the discussions must be public. There is a rule on what constitutes a quorate group, but the innovation circle is empowered to make changes and to implement them, provided that the proposal passes the culture and financial tests. There is no need for approval from any "higher" power.

Open Salary Policy

Codurance has an open salary policy. I was told this by the recruiter when I first spoke to him, and it was mentioned in every conversation before I started. Essentially it means that we have a spreadsheet, which we can all look at, listing everybody in the company and their salary. At first this seemed a little alien and maybe scary, but the more I considered it, the more I thought it was a great idea. I ended up reasoning to myself about the difference between a company with open salaries and a company (i.e. every one I've ever worked for previously) without them.

Why Not have an Open Salary Policy?

In almost every place I have ever worked there are suspicions and rumours around salaries. Why not just publish everybody's salary so that everybody knows? In almost every company it is perceived that the barriers to promotions and pay rises are lower for people coming in from outside than for internal people. I don't know why this is, but it leads to people changing jobs frequently, as this is perceived to be the only way to get a decent pay rise or a promotion. If this is true, then I can see why you wouldn't publish: if somebody came in from outside to do the same job as you, with no domain knowledge, and was getting paid more than you, you would likely be very upset, and rightly so.

I think perhaps the bigger point is that to move to an open salaries policy would mean that fairness in salaries across the organisation would inevitably have to follow and that fairness would cost a lot of money to implement properly. Either lots of people would have to be given pay rises to bring them in line or many people would leave as they perceived that their pay was still unfair even after some adjustments. Certainly nobody would be volunteering to take a pay cut. The cost of these adjustments would just be too great for most organisations to contemplate.

How This Worked Immediately

At my final interview I was told that I would be offered a role; the only question was how much the offer would be. I asked for a number and was told that this was quite a bit more than the only other person at the proposed grade was paid. I pointed out that they had already told me that I would carry additional expectations because of my previous experience. This was accepted, and they then told me that any offer had to be agreed by all the people who had interviewed me. As one of those was the person in question, who might be upset by my proposed salary, this seemed perfect to me. Not only would this other person know the outcome of any offer, they would actually be involved in the decision.

Conclusion

I took the offer at Codurance because I was excited by the challenge of helping to grow and mature our offerings, I was enthused by what I see as a genuinely flat, self organising structure, and I love the culture of genuine openness that is obviously real and not just a veneer. So far I am very happy with my decision, and very happy to be a Principal at Codurance.

Wednesday, 31 July 2019

Quantum Computing and Me

Why Quantum?

If you are one of my colleagues or close (technology) friends you'll know that I've been researching quantum computing for a while now. It all started back at Devoxx Vienna in March 2018 when I saw a great talk by Alasdair Collinson (with whom I've since become good friends) entitled "The Quantum Computers are Coming". At the time of seeing Alasdair's talk my knowledge of quantum computers was close to zero. I had vague recollections of reading something about quantum key exchange in a book about codes and cyphers years ago, but I've since learnt that isn't really to do with quantum computers anyway. I was lost pretty early in the talk, and I wanted to ask him some questions and also give feedback: I thought he should give a little more basic material at the start to help people along.



Alasdair's was the last talk of the day and if you watch the video, you'll hear Alasdair mention "[releasing the audience] to the beer" at the end. Immediately after that talk was the free beer party. I didn't manage to talk to him during the party but I knew the speakers were being taken for dinner that evening so I thought I would talk to him later.

Unfortunately, by the time we got to the speakers' dinner either I was a little tipsy, or Alasdair was, or more likely we both were, and we didn't manage to have a very useful conversation. I remember thinking to myself "how hard can it be?" I would do some research, learn the subject myself, and maybe make a more accessible version of this quantum computing talk malarky.

When the conference was finishing up I had a chat with one of the organisers. He told me that he was involved with a couple of conferences in Krakow and asked if I had submitted to them. I hadn't, so he asked if I wouldn't mind submitting. The CFPs were both closing a day or two later, so he urged me to submit as soon as possible. When I got home that evening I decided to submit the same talk that had got me into the Vienna conference and, on the spur of the moment, I knocked up a synopsis of a talk about quantum computers and submitted that as well. I fully expected that the Polish conferences would pick my well known talk on microservices if they picked anything. As it turned out, I was wrong. A few weeks later, having not done much about learning the subject, I suddenly had around 6 weeks to prepare a talk for Devoxx Poland on a subject I knew barely anything about. The result of my learning can be viewed here.

Learning Quantum and Duncan Mortimer

A few weeks after Vienna I was lucky enough to roll off my project and have a couple of weeks on the beach. This was my opportunity to learn about quantum things. I was chatting to somebody in the kitchen in our office when one of my colleagues, Duncan Mortimer, overheard the conversation and divulged that he was interested in quantum computers. It turned out that he was an enthusiast, having studied the field at university fairly recently (he's a lot younger than me), and we agreed to pair on a talk at the ThoughtWorks Away Day. This chance encounter made all the difference, as Duncan was able to help me understand enough to cobble together a coherent story for Poland with, crucially, a very basic demonstration of quantum code using Q#. If you watch that first effort of mine at talking quantum, you'll see that I used Duncan as a humorous element to essentially excuse my lack of understanding at certain points in the presentation.


A ThoughtWorks Quantum Strategy

At the Away Day I chatted with the ThoughtWorks global head of technology, and in the brief chat we had he mentioned that nobody in the UK (or anywhere, as far as he knew) was taking the reins of quantum and moving it forward. We need a global strategy on quantum, he told me. Would I like to take that on? I told him I had no idea what that meant, so we agreed to have a chat afterwards. I therefore took on the responsibility of trying to make ThoughtWorks "Quantum Ready". My pitch to anybody I could engage on the subject was that at some point in the next 5 to 10 years, quantum computers will be a commercial reality in some form. When that happens, we (ThoughtWorks) need to be in a position to take advantage of the new opportunities this will create.

Meetups and Conferences

Obviously I needed to know more about the subject. I had managed to go to just one talk in London about quantum computing before the first conference in Krakow. It was a great talk at Microsoft by Dr Julie Love about the Majorana topological qubit. They believe that this is the technology that will lead to a stable, less error-prone qubit than can currently be realised by other techniques (of which there are many) and will therefore ultimately give Microsoft an advantage. This was the first time I had gone to a meetup (other than those held at the ThoughtWorks offices) for many years, and I realised a while later that the ThoughtWorks culture, coupled with this new exciting field, had finally combined to reawaken my interest in and love for technology that had been stifled and crushed by the toxic culture and terrible working conditions at my previous job.



Throughout the remainder of 2018 I went to many meetups organised by the London Quantum Meetup group and got to know its organisers. Every time I learnt new things I was able to remove some of the cruft from my learning journey, correct some of the stuff in my original presentation that I had got wrong, or modify the way I talked about parts of it to reflect my increasing knowledge. Thus my presentation became a living record of how my learning moved along, and I have preserved it as it was when I presented it at various points to various conferences (something I do with all my decks).

Quantum Hackathon

I started going regularly to meetups organised by the London Quantum Computing group. If you live or work in London and you are even vaguely interested in quantum computing, I thoroughly recommend going to some of these meetups; they are great. One of the organisers happened to tell me that they were trying to organise a quantum hackathon. The idea was that we would get a group of people together to work on an organic chemistry problem (a solved problem, I might add; quantum computers aren't yet powerful enough to tackle the stuff that classical computers can't solve). Two companies from Cambridge, Dividiti and Riverlane, provided some open source software support, and IBM were on hand to give us priority access to their IBM-Q computer. The event was a great success.


Manchester Workshop

After the success of the quantum hack day I was asked by our Manchester office if I could organise something similar in Manchester. Unfortunately this wouldn't be possible, because I didn't really organise the London event; I just (through ThoughtWorks) provided the venue and the food. I did suggest that I could give a presentation and perhaps an evening workshop on how to program quantum computers. This was enthusiastically agreed to, and I fixed a date with the Manchester community coordinator.

So one lunchtime, we got together in the Manchester office kitchen and I gave a long version of my conference talk (much evolved from the original) which was very well received. The audience was mainly ThoughtWorkers with a few outsiders (it was advertised as a public event) thrown in. 

The evening was a lot more nerve-wracking. The event space was full (I was told 60 people), and the audience was mainly external visitors. Even though we had advertised the event as "bring your own computer and play along with the presenter", nobody seemed to have read that, or at least nobody had installed the software (the Microsoft Quantum Development Kit) as we had asked them to. So the result was an hour of me talking people through how to use IBM-Q, a five minute break, and then an hour and a half of me explaining Q# and showing demonstrations.

Shor's Algorithm

When I gave a talk at a conference in Poland in September it bombed badly. Amongst the torrent of dire feedback were some really useful comments that I determined to act upon in future. One such comment was that the demonstrations I gave were trivial and didn't really demonstrate anything that a quantum computer could do that a classical computer couldn't. This was fair and I resolved to address it.

So in the week leading up to my trip to Ukraine I found myself implementing Shor's algorithm from first principles. The Q# samples provided by Microsoft actually include a version of Shor, but I couldn't really understand it properly and, further, I felt that it was a sub-optimal implementation because the quantum computer was doing all of the work, whereas it should only be used for step 4 (the quantum period finding routine). In my mind, as well as demonstrating the Quantum Fourier Transform (QFT), implementing Shor is a great way to showcase how you should selectively pass control between your classical computer and your quantum computer, only using the (very expensive) quantum compute power for the parts of the algorithm that can't be done classically.
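
The shape I was aiming for looks roughly like this Python sketch of the classical driver (the function names are mine, not Microsoft's; the quantum subroutine is stubbed with a classical brute force, since the point is where the hand-off happens):

import math
import random

def find_period(a: int, n: int) -> int:
    # Step 4: the ONLY step that needs the quantum computer (QFT-based
    # period finding). Stubbed here with a classical brute force.
    r = 1
    while pow(a, r, n) != 1:
        r += 1
    return r

def shor(n: int) -> int:
    """Classical driver returning a non-trivial factor of n."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                 # lucky guess: factored with no quantum work
        r = find_period(a, n)        # hand off to the quantum computer
        if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
            continue                 # unusable period: pick another a
        return math.gcd(pow(a, r // 2, n) - 1, n)

print(shor(15))  # prints 3 or 5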

On the day of the talk I had a wonderfully written, test driven example of the whole of Shor's algorithm, but using a C# routine to find the period. All that remained was for me to write the quantum period finding routine and plug it in. Sadly, this was much easier said than done. In the end I had to compromise: I effectively lifted and shifted that part of the Microsoft implementation into my own code, which included an implementation of the "fast QFT" that I didn't fully understand. You can look at my implementation (which has barely changed since I originally made it) on my Github.

I ran it successfully in the speakers' room about an hour before my talk and sat back in satisfaction. Then I ran it again and it failed. And again. And again. Each time, the failure took ages and appeared to involve some kind of .NET kernel memory overflow. Not good. When I was close to despair it decided to work again, so I took a screenshot of the results in case, as seemed likely, it failed during the subsequent talk.

Here is the slide that made it into my deck:

Andrew Bryer

In early 2019 I found myself working in the ThoughtWorks Manchester office a couple of days a week. One of the Manchester people who had reached out to me about running something quantum based in Manchester was Andrew. When I started working in Manchester he was on the beach looking for something to do. I got talking to him and asked if he fancied doing some research in Q# and helping me to understand how better to implement the QFT. He was only too pleased to help. So I lent him a book I have, Minds, Machines and the Multiverse, instructed him to read the chapter about the QFT, and asked him to help me implement it in Q# without using any library functions. It didn't take him long.

Then I asked Andrew to look at my implementation of Shor on my Github and concentrate on implementing the quantum piece. The first thing he did, after about half an hour if memory serves me correctly, was to tell me that he had found and fixed the issue that was causing my implementation to crash. He then proceeded to implement the quantum period finding routine from first principles using his own QFT implementation. I thought that would take him a while, but once we'd established how Q# can be used to link any operation to a control qubit (a great feature, but I have no idea how it would be supported in a real quantum computer) it didn't take him long. When I was on the train home from Manchester that evening, I got a message from him.

So there are a couple of morals to this story. Firstly, ThoughtWorks grads are brilliant: give them a pure software problem to solve, even in a paradigm they had never heard of until a few hours previously, and they will solve it. Secondly, quantum computers are a long way off being actually, practically useful!

Using Many Worlds...

After doing all the research up until the back end of last year and trying to understand the "real" quantum algorithms (such as Deutsch-Jozsa, Shor and Grover), I realised that I couldn't reason out why they were so powerful without understanding things in more depth. I started to read more books about the fundamentals of quantum physics, because so many of those "basic" concepts need to be understood to grok what is going on inside a quantum computer. This helped me up to a point, and certainly helped me to understand the quantum state, which is usually expressed as state vectors, often using bra and ket notation (which I'd never come across before). This work was also, for me, an excellent refresher course in some of my long forgotten university mathematics.

But something was still missing. I remember in the early days often hearing phrases such as "the qubit is in superposition" or "...holding both the value 0 and 1", and I think I had used similar phrases myself whilst internally holding a picture of the Bloch Sphere and imagining a thing whose state is entirely represented by the quantum state arrow on that sphere. There are a few problems with holding that view in your mind (the state vectors sketched after this list show why):

  • Firstly, as alluded to above, the Bloch Sphere is really only valid for understanding a single qubit.
  • Secondly, the probability of returning a 1 or a 0 is not the full picture; we need to convey that any computation the quantum algorithm carries out will use ALL of the possible values of ALL of the qubits that take part in it.
  • Thirdly, qubits interact with one another and generate interference patterns which can be analysed and exploited.
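
As a small worked illustration of the first two points (standard textbook notation, nothing specific to my talks): a single qubit fits the Bloch Sphere picture because it has just two amplitudes, but two qubits already need four, and n qubits need 2^n.

% One qubit: two complex amplitudes, picturable as a point on the Bloch Sphere
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

% Two qubits: four amplitudes, all of which take part in a computation;
% for n qubits there are 2^n, which is why the Bloch Sphere picture breaks down
|\psi\rangle = \alpha_{00}|00\rangle + \alpha_{01}|01\rangle + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle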

Many Worlds Interpretation

One of my breakthroughs in understanding came while I was reading (I think) The Fabric of Reality, possibly at the same time as reading Introducing Quantum Theory: A Graphic Guide. I had heard, and read in his books, that David Deutsch is a believer in the Many Worlds interpretation (MWI) of quantum physics. In fact it could be said, if I understood correctly, that he believes the very fact that an algorithm such as Shor's works proves that the MWI is correct. I like his assertion that whilst the Copenhagen interpretation correctly predicts the observations associated with quantum mechanics, it doesn't explain them. The MWI, on the other hand, both predicts the observations and explains them.

Young's Double Slit Experiment

Many people will remember carrying out a version of the double slit experiment at school. It was first demonstrated by Thomas Young in the early 1800s. He devised it as a way of "proving" that light is propagated by waves and not, as Newton had suggested (the prevailing theory of the time), by particles, which Newton had called "corpuscles". By showing that light can be made to create an interference pattern, he showed that light must travel as a wave.

But in the early part of the 20th century, physicists were questioning whether this view was correct. Indeed, much of the early work in the new quantum mechanics indicated that light was, in fact, propagated by particles, called photons. This led to the concept of "wave-particle duality". The explanation was that, as light travelled, the photons would interfere with one another, which is why the interference pattern and the particulate nature of light are not inconsistent.

So far, so consistent. But then, in 1909, Geoffrey Ingram Taylor performed the experiment with light of such low intensity that photons were mostly emitted and absorbed singly, so that their flight paths from light source to detector did not overlap. Subsequent experiments have guaranteed that only one photon is in transit at a time. The remarkable result is that if you run the experiment long enough, recording the position of each photon incident on the detector, the interference pattern still builds up over time. So interference is happening even though there is apparently nothing else in flight to interfere with. The inference drawn, if you believe in MWI, is that the photon, although apparently alone, is interfering with the "ghost photons" taking the other possible paths in all the other universes. This remains the best explanation of what is going on in the single photon version of Young's double slit experiment; the Copenhagen interpretation has no explanation for its interference patterns.

Making Videos

In the interests of making my talks more interesting, I revamped the "Is Quantum a Thing?" title into "Using Many Worlds to Solve the Unsolvable". At the same time I made a video with some ThoughtWorks colleagues in which we reproduced the double slit experiment using some crude materials and a laser pointer. That video debuted in my talk at Codemotion Amsterdam in April 2019 (I can't find a link to that), and I used it again in Minsk in May 2019.

Adding a video seemed to go down really well, and by summer 2019 I had done a couple of lightning talks that included just the section on Shor's algorithm, with a suitable level of abstraction and simplification around the QFT. They were really popular. So much so that I persuaded Devoxx Poland to give me a 40 minute slot to expand the idea and do a bit of work around the post-quantum cryptographic landscape. For that talk I wanted to make another video, this time with my daughters helping me at home. We used polarised light filters to demonstrate how photons are polarised, leading in to a discussion of BB84 quantum key distribution. This longer talk has so far only aired at Devoxx Poland and to private audiences.

Right up to Date

In July 2019 I left ThoughtWorks, which was very sad (as I've noted in several places). One of my sadnesses whilst still at ThoughtWorks was that I found it very difficult to get any kind of management interested in quantum computing, because it just isn't commercially interesting yet, apparently. When I came to Codurance, a much smaller and more nimble consultancy, the response was very different: the small commercial possibilities that did not interest my previous employer are very interesting here. So hopefully I will get the support to continue my passion for learning new things in the quantum space (and giving talks about it) and, hope against hope, maybe we'll manage to find some real work to do in this arena before too long. No doubt I will write about it if it ever happens!

Tuesday, 30 July 2019

Tech Debt Discovery and Cost-Value Quadrants

Introduction

There are various reasons why technical debt is allowed to build up and various reasons why it isn't paid down. I have previously written about this and I won't go into it again. The point of this post is to briefly discuss a method for understanding what technical debt exists, making it visible and stopping it from getting any worse. So for the purposes of this post I'm going to assume that tech debt is a thing, that there are items on your product that you can call tech debt and that you want to tackle them.

Cost-Value Quadrants

The idea of a cost v value quadrant is far from new. I can't remember when I was first introduced to it, but I've certainly used it many times to help prioritise stuff in many different contexts. The basic idea is that you have a few things that you want to compare. Each of these comes with a cost and each of them has some kind of value. Draw two axes, one representing value (I usually go low value on the left to high value on the right), the other representing cost (I usually go high cost at the bottom up to low cost at the top, or "hard to do" at the bottom up to "easy to do" at the top). You have now divided the area into "hard to do, low value" (bottom left), "easy to do, low value" (top left), "hard to do, high value" (bottom right) and "easy to do, high value" (top right). This whole idea is not dissimilar to the Eisenhower Decision Matrix.
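As an aside, if you ever want to mock up one of these quadrants digitally rather than on a whiteboard, a minimal sketch along these lines does the job. To be clear, this is just my illustration of the layout described above: the items, the 0 to 10 scores and the choice of Python with matplotlib are mine, not part of the method.

import matplotlib.pyplot as plt

# Hypothetical tech debt items: (name, value score, ease score).
# "Ease" runs from 0 (hard / high cost) to 10 (easy / low cost),
# so easy items sit at the top, matching the whiteboard layout.
items = [
    ("Add contract tests", 8, 3),
    ("Upgrade framework version", 9, 2),
    ("Rename confusing module", 4, 9),
    ("Delete dead feature flag", 3, 8),
]

fig, ax = plt.subplots()
for name, value, ease in items:
    ax.scatter(value, ease)
    ax.annotate(name, (value, ease), xytext=(5, 5), textcoords="offset points")

# Mid-lines divide the space into the four quadrants
ax.axvline(5, linestyle="--", color="grey")
ax.axhline(5, linestyle="--", color="grey")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_xlabel("Value (low to high)")
ax.set_ylabel("Cost (high at the bottom to low at the top)")
ax.set_title("Tech debt cost-value quadrant")
plt.show()

Note that the numeric scores are only there to put each stickie somewhere on the picture; in the workshop itself, as described below, placement is relative ("is this harder than that?") rather than absolute.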



It is important to note here that cost and value are the most commonly used dimensions for the two axes of one of these quadrants. However, I have used other dimensions for such exercises. For example, if you have a sense that there are multiple dimensions that need to be considered when comparing a load of things, it might be helpful to first decide (maybe by using sliders) what the two most important dimensions are and then place all the stuff in a quadrant that only considers those two. This can help to whittle a large number of possible things down to a smaller number before applying other criteria in turn.

Why Go from High Cost to Low Cost on the Vertical Axis?

Many people have asked me why I put high cost at the bottom of the vertical axis and low cost at the top. The answer is simple: I want to focus on stuff in the top right of the resulting quadrant. For some reason I have always found it pleasing to get to a state where we concentrate on things in the top right. That is just me though; if you want to go low cost to high cost, feel free. You'll then concentrate on the bottom right quadrant in a cost v value comparison.

Tech Debt Discovery Workshop

I have seen many organisations handle tech debt badly. Some places are all too ready to create it: "we don't have time for gold plating, get the thing done and we'll fix it later" is a common attitude. Others create tech debt because their engineering practices are not mature enough, or their quality engineering function isn't strong enough, to understand or prevent tech debt accumulating ("what is a contract test?"). The purpose of a tech debt discovery workshop is to socialise what tech debt is, in all the forms we know it takes in this organisation, to surface all of the items that currently exist and to get a basic idea of how to prioritise them.

Setting up the Workshop

The setup of the workshop is very simple. You need a whiteboard with a quadrant on it (as above), some people who know what tech debt items need addressing, some stickies and some sharpies. Once you've done the setup, your whiteboard should look something like this (the worried looking client tech lead running his first ever workshop and the extraneous stuff on the whiteboard are optional):

If you zoom in closely you'll see that we drew two axes on the whiteboard behind the gentleman in the pink t-shirt. Whoever is facilitating the workshop needs to explain clearly to everybody what the goal of the workshop is (in this case, discover how much tech debt we have and get a feel for what we might tackle first), and explain what the quadrant on the whiteboard is.

Stage 1 - Crowdsource the Items

As with a lot of workshops, one of the goals is to get everybody to feel that they contributed to the outcome and thus get buy-in from all of the people in the group for whatever outcomes you arrive at. Crowdsourcing is a great way to do this. So give everybody a load of stickies and a sharpie and ask them to write on each stickie something that they'd like to fix that they think is tech debt. They then stick all of their stickies at an appropriate point on the quadrant. It is important at this point to stress that they shouldn't take too much time getting each thing in exactly the right place. There will be time later for group discussion and adjustments. In my example, after this first stage the board looked like this:


Note that it is not at all uncommon in such a workshop for stuff to be grouped to the right of the vertical axis. It is pretty natural for people to only think about stuff that is valuable, and hence to only stick things on the "high value" side of the picture.

Stage 2 - Discuss, Group and Place Items

After everybody has thought of everything that they are going to think of, the facilitator needs to go through every item in turn, make sure everybody understands what it means, group it with any duplicates or similar items and position it somewhere sensible. Note that this has to be an iterative process. We are not putting an absolute value or implementation cost on each item; rather we are interested in saying "is this more valuable than that?" or "is this harder to fix than that?". Also at this stage, the discussion may well prompt more things to be surfaced or some items to be consolidated into larger ones. If this happens, fine, add them as you go. At the end of this stage, you should have your final session output, which should look something like this:


Again, the happy looking tech lead who has just successfully facilitated his first workshop is optional.

Next Steps

The next steps will depend on the situation you are in. In the case of the workshop we ran, in which I took the photographs in this post, we had not yet put anything live; we were delivering an internal PaaS offering. Our priority in the early part of the project was to get things ready for the internal teams to use, but not necessarily production hardened (unless doing the two things separately would cause undue rework), and thus we were counting a lot of as-yet-undone work as technical debt for the purposes of the workshop. As we knew that we weren't prepared to go to production without many of these items, we made stories out of all of them and prioritised the ones that were necessary for production hardening or that were otherwise high value according to the quadrant.

Keeping the Quadrant As a Project Artefact

As I alluded to at the start of this post, and as I have written about before, some organisations have a hard time getting to their technical debt items. An anti-pattern I have often seen is when teams resolve, sometimes at the behest of a CTO or senior manager, to allocate a percentage of their sprint time to debt items. This is problematic in my experience for at least three reasons. Firstly, to put something in a sprint plan means it must exist in your card management system, so you are paying an inventory management tax. Secondly, your devs will likely (though not always) want to work on something new rather than something seen as a boring maintenance task. Thirdly, as soon as your team comes under pressure to get all the things done in the sprint that they "committed" to, the debt items are the scope that gets cut. The last of the three is the most dangerous, and that type of attitude is probably one of the major reasons why you have a load of debt to deal with in the first place. I covered this in the technical debt cycle in an earlier post.

So my favourite way to keep on top of technical debt is to keep the quadrant as part of the product wall. Every time you knowingly create more tech debt, or you discover some tech debt, write out a new debt card and stick it on the wall. Then every day at stand up or tech huddle or whatever, discuss this new thing, get agreement from the whole team that it is a thing and also where it should live in the quadrant. Adding things to the wall should come to be something that people just don't want to do or at least do reluctantly.

Tackling the Debt Wall

Hopefully the stuff on the wall is small stuff, like adding one test or refactoring one thing. I have had great success in various teams by doing "debt days". Either one day a week, or maybe one afternoon, everybody drops the thing they are working on and picks something off the wall. They might pair with another developer to learn something new or just to share context, or they might do something really simple on their own because they think it will be fun or because it is causing them some personal pain. It doesn't matter. The outcome will either be that the thing is fixed or, if it turns out to be too complex, maybe there is a story to play, a spike to consider or a discussion to have. In any case, everybody has a fun time, hopefully, and the debt pile is reduced.