Posted on May 7, 2012 in Uncategorized by adam

I have come to realize in the last two weeks that there are two words in the Agile lexicon that really bug me. Not because they are bad per se, but because they are so commonly misused and abused. Not to mention being insanely context sensitive.

I’m doing what might be considered an agility assessment for a company and am entering the report generation phase after a number of days of observation and data gathering. There is nothing too ‘OMG! MY PANTS ARE ON FIRE!!!’ but one thing that sticks out is their working definition of ‘Done’.

I suspect everyone who has played the role of Agile coach or consultant has, at some point, written a blog or paper on ‘Done vs Done Done vs Done Done Done’.

This company has four different teams which are pretty autonomous and effective, and one of the things each has done is define their own understanding of Done. While this seems like a good idea on the surface, it is really quite dangerous. With each team having its own definition of Done there can be no common understanding of when something is Done, and no way for that information to be rolled up, alongside other non-contextual information, to management.

Having the development teams generate the definition of Done also means it is limited to their scope of influence, care and interaction. The net result is a very naive scope of Done. Stories that are ‘done’ will frequently go out into production without adequate analytics hooks or a means of monitoring the health of the component being created.

This nicely dovetails with the other most dangerous term: The Whole Team. I was at an automation workshop this past weekend and the people there spent almost 45 minutes talking around this subject. In retrospect, no one asked what everyone meant by ‘the whole team’. This is another area I suspect Agile teams need to improve on. I would bet that most people’s working definition of the whole team includes the developers, the testers and someone in some sort of product ownership role (either directly or by proxy). But what about the analytics team, or the sales folks who use the output from the analytics team? Or the Ops team, including the DBAs that need to push the code out onto the iron? How about the customer support people who bear the brunt of the code team’s mistakes?

The list is not endless, but it is large. One thing I like from the Lean Startup [Cult|Movement] is that the story isn’t done until it is in production getting validated by real users. The user is on the team whether they realize it or not. We’ll just gloss over the lack of humans in the release process…

I think there is actually some sort of circular feedback loop at play between Done and Whole Team, so you cannot address just one in isolation. Instead, really figure out who your team’s stakeholders are and what skin they have in the game. Only then do you have a chance, and only a chance, of coming up with an appropriate, inclusive, accurate definition of Done.

Posted on May 1, 2012 in Uncategorized by adam

I’m most of the way through a client on-site engagement to do an ‘assessment’ of where their testing processes are and to suggest tweaks to how they do things. I have some broad recommendations that will morph into blog posts over the next month or so, but right now I am struggling to come up with something that doesn’t sound too much like a consultant-y dodge around ‘When will our [functional] automation remove the need for our extensive manual regression testing?’.

This is of course a trap.

There is no way to determine this number with any degree of certainty.

But what if you really needed to provide an argument for why this cannot be predicted? My current best argument is that we, as humans, are really bad at predicting the future.

  • Just because you are having a certain degree of automation success now does not mean that the rate of success will continue at all, let alone along the same trajectory.
  • Is the competency of the developers going to change? For better? For worse?
  • Is the competency of the testers going to change? For better? For worse?
  • Will the market fundamentally change?
  • Will the types of stories that come into the development groups remain such that there is a high degree of [perceived] value from automation?

All of these are completely outside the control of, well, everyone. Yes, even the ones about the competency of staff. Sure, you can change the way you hire and the way you train, but you can’t change the fact they are people, and people are messy. There are, however, things that we can at least fool ourselves into thinking we have some degree of control over. Though not with any accuracy…

  • This particular client is migrating two older application stacks to a new one, so the source of some of the regression risk (the older stacks) will be removed from the equation (replaced, of course, with a new set of risks, but the plan is that they will be smaller).
  • With [better] exploratory testing will regressions be caught during ‘feature’ testing?
  • If the developers adopt a TDD approach to code generation rather than ad-hoc post-coding security blanket unit testing, will that decrease regressions?
  • Will the internal focus on a single browser allow for more focused testing (with breadth covered through a testing partner)?
  • Will proper management of environments lead to fewer false positives, less re-testing and/or fewer idle testers?

Because we cannot predict the output of any of these things, let alone all of them, and because they are all inputs into the equation that pops out a date for that magic point where we could be a half-step away from Continuous Deployment (no humans), ‘tomorrow’, ‘July 23’ and ‘Q3 2013’ are all correct. And all wrong.

But if we think of each of these as part of the overall system that results in a regression duration deemed unacceptable and/or too long, then we can address them each individually, and there is likely to be improvement in the overall system. To what degree, and by which contributing measures, is however anyone’s guess. And only a guess.