I’m most of the way through a client on-site engagement to do an ‘assessment’ of where their testing processes are and to suggest tweaks to how they do things. I have some broad recommendations that will morph into blog posts over the next month or so, but right now I am struggling to come up with an answer that doesn’t sound too much like a consultant-y dodge to ‘When will our [functional] automation remove the need for our extensive manual regression testing?’.
This is of course a trap.
There is no way to determine this number with any degree of certainty.
But what if you really needed to provide an argument around the inability to predict this? My current, best argument is that we, as humans, are really bad at predicting the future.
- Just because you are having a certain degree of automation success now does not mean that the rate of success will continue at all, let alone along the same trajectory.
- Is the competency of the developers going to change? For better? For worse?
- Is the competency of the testers going to change? For better? For worse?
- Will the market fundamentally change?
- Will the types of stories that come into the development groups remain such that there is a high degree of [perceived] value from automation?
All of these are completely out of the control of, well, everyone. Yes, even the ones about the competency of staff. Sure, you can change the way you hire and the way you train, but you can’t change the fact that they are people, and people are messy. There are, however, things that we can at least fool ourselves into thinking we have some degree of control over. Though not accuracy…
- This particular client is migrating two older application stacks to a new one, so the source of some of the regression risk (the older stacks) will be removed from the equation (replaced, of course, with a new set of risks, but the plan is that they will be fewer).
- With [better] exploratory testing, will regressions be caught during ‘feature’ testing?
- If the developers adopt a TDD approach to writing code, rather than ad-hoc, post-coding, security-blanket unit testing, will that decrease regressions?
- Will the internal focus on a single browser allow for more focused testing (with breadth covered through a testing partner)?
- Will proper management of environments lead to fewer false positives, less re-testing and/or fewer idle testers?
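To make the TDD point above concrete, here is a minimal, hypothetical sketch (the function and its rules are invented for illustration, not taken from the client’s codebase). The distinction is ordering: in TDD the tests below are written first, fail, and then drive the production code into existence, so a regression check exists from the moment the behaviour does.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical production code, written only after the tests
    below were written and seen to fail (red), then made to pass (green)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    # In TDD these tests come first; in post-coding unit testing they
    # are written afterwards and tend to mirror what the code already does.
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Whether that ordering actually reduces regressions for this client is, of course, exactly the kind of thing we cannot predict; the sketch only shows what the practice looks like.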
Because we cannot predict the output of any of these things, let alone all of them, and they are all inputs into the equation that pops out a date for that magic point where we are a half-step away from Continuous Deployment (no humans), then ‘tomorrow’, ‘July 23’ and ‘Q3 2013’ are all correct. And all wrong.
But if we think of each of these as part of the overall system that results in a regression duration deemed unacceptable and/or too long, then by addressing each of them individually there is likely to be improvement in the overall system. To what degree, and by which contributing measures, is however anyone’s guess. And only a guess.