Posted on July 31, 2008 in Quality by adam

On Monday I sat in on a webinar held by Martin Fowler and Jez Humble, both of ThoughtWorks. Like most of these it was supposed to be a thinly veiled advertisement for Cruise, which is their new release management application built on top of CruiseControl; however, they had so many slides (34 for an hour-long talk) that they only got through the technical stuff and missed most of the marketing stuff.

Here are my notes

  • Release Management is all about making deployment a non-event
  • Cruise, and RM tools in general are a continuing evolution of better people interaction processes
  • Two main principles of Lean are JIT (just in time) and autonomation (Stop the line!)
  • Implementing Lean Software Development by Mary and Tom Poppendieck
  • Lean is all about finding constraints in your system and removing them
  • Seven principles of Lean
    1. Eliminate waste
    2. Build quality in
    3. Create knowledge
    4. Defer commitment
    5. Deliver fast
    6. Respect people
    7. Optimise the whole
  • It is not done until it is delivering value
  • How long does it take to deploy a single line of code? And is that deployment a repeatable process?
  • Make irreversible decisions at the last responsible moment
  • Make decisions easy to reverse
  • If something hurts, do it more
  • Build your binaries only once
  • Separate configurations from your binaries (a minimal sketch of this follows the list)
  • Automation (of processes) has diminishing returns
  • During polls of the audience there was a large trend towards already having things automated (a self-selecting audience confirming the content, perhaps?)
  • Efferent and afferent code couplings
  • The numbers produced by (performance) metrics might not be useful, but the trends tend to be
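
To make two of those bullets concrete, building your binaries only once and separating configurations from your binaries, here is a minimal sketch; the setting names and defaults are mine, not from the talk. The same artifact gets promoted unchanged from environment to environment, and anything environment-specific is read from outside the binary at run time.

#!/usr/bin/python
# Minimal sketch (illustrative names): the artifact is built once and
# promoted unchanged through test, staging and production. Anything
# environment-specific lives outside the binary and is read at run time.
import os

# Deployment settings come from the environment (or a config file shipped
# separately from the build), never baked into the binary itself.
DB_HOST = os.environ.get("APP_DB_HOST", "localhost")
DB_NAME = os.environ.get("APP_DB_NAME", "dev")

def connection_string():
    """Assemble a connection string from the externally supplied settings."""
    return "host=%s dbname=%s" % (DB_HOST, DB_NAME)

if __name__ == "__main__":
    print(connection_string())

Because the binary never changes between environments, the thing you tested is, byte for byte, the thing you deploy; only the configuration around it varies.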

And here are the slides

Posted on July 29, 2008 in Quality by adam

Here are Day Two’s notes from TWST4 – Deception and Self-Deception. If you missed them yesterday, here are Day One’s notes.

  • Testing is a skill of the mind, not a software tool that glows red or green with news of the trivial – James Bach
  • The narcotic comfort of the green bar. Note that I didn’t say ‘addictive’; I said ‘narcotic’ – Michael Bolton
  • Giving something a name does not make it that something. Just because it is called a unit test does not mean that it is a unit test
  • Is it really a walkthrough or is it a walkaround? Or a walkover?
  • How does one stop playing deception games with ‘smart’ builds?
  • Are people trying to build good code, or keep their job?
  • We shape our tools; thereafter our tools shape us. – Marshall McLuhan
  • We are tools of our tools – Henry David Thoreau
  • It is not the map that counts, it is the creation of the map
  • Even with a map, following the territory is often more important than following the actual map
  • Know the audience for any sort of communication
  • A strategy cannot be complicated; if it is, it cannot be called a strategy
  • Are the documented facts correct or are the incorrect facts documented correctly?
  • A plan is only as good as its assumptions and the ability to replan when those change
  • Malicious obedience (or compliance): obeying a boss’s order even when you know for a fact that it is wrong
  • A good process helps hide poor testers
  • Is a large process designed to seek success or prevent failure?
  • Could it do both?
  • Null Hypothesis
  • Asch experiments
  • They who know the truth are not equal to those who love it, and they who love it are not equal to those who delight in it – Confucius
Posted on July 28, 2008 in Quality by adam

The 4th annual TWST (Toronto Workshop on Software Testing) was held last weekend. This year’s focus topic was deception and self-deception in software testing. Rather than break down the presentations person by person like I did last year, I have broken my notes down by day.

Here are my notes from Day One

  • Self-deception is more common than deliberate deception
  • There is often a difference between the corporate ‘standard’ and the corporate ‘practice’
  • A primary form of deception is placating the target audience
  • An intentional deception: the intentional withholding of material information
  • Deception awareness dependent on project perspective
  • Arguments that are confident and full of emotion are easier to deceive with than those with details and accuracy
  • Deceptive environments can manifest physically. I can’t stomach this
  • Deception, when used as a buffer, could be considered a useful thing
  • However, the biggest lie of them all is ‘I know what is best for you’
  • Testers are not the cops, they are the reporters. Of course, in this metaphor the code is the corpse
  • Not seeing the elephant in the room is different than not acknowledging it
  • Lying is more complicated than telling the truth
  • Visionaries see things differently. Though they may appear to be engaged in self deception they view the world through different filters and so might not be.
  • Things do not have intrinsic purposes. Purpose only occurs when people are involved
  • Self-deception is often just a different take on the observable facts
  • When you exactly hit your numbers week after week, then look for ‘dry labbing’ – Doug Hoffman
  • Are you measuring to control, or measuring to observe?
  • Some things you cannot count. That’s fine, but can you observe it?
  • A great response to ‘I do not know’ is ‘Well, if you did know?’
  • Economic pressure is an interesting factor on whether or not to participate in deception (to keep one’s job for instance)
  • For every degree you purchase, you get an optional free education
Posted on July 26, 2008 in Quality by adam

Last week at RubyFringe Luke Francl gave a talk called Testing is Overrated. I wasn’t there, but Joey was and blogged about it.

Some highlights of Luke’s position are:

  • Good QA testers are worth their weight in gold.
  • Unit testing finds certain errors; manual testing others; usability testing and code reviews still others.
  • the peer pressure of knowing your code will be scrutinized helps ensure higher quality right off the bat
  • Another huge problem with developer tests is that they won’t tell you if your software sucks
  • no single technique is effective at detecting all defects

All in all, it’s a pretty good primer on why just relying on your build system to tell you whether or not you can ship is a bad idea. I suspect I’ll be using some of this material next month for my talk at Agile.

Posted on July 24, 2008 in Python, Quality, subversion by adam

I’m going to be implementing the buddy system for changes to only part of our subversion repository. Essentially, anything in trunk needs to be buddied, which meant rewriting the buddy pre-commit script in something more robust than bash. This version will let developers work in their own private branches and make lots of little commits without needing a buddy session, but as soon as they try to merge into trunk they will need to be buddied.

#!/usr/bin/python
"""Pre-commit hook helper: make sure the log message contains a buddy
message for any commit that touches trunk of one of our projects."""

import sys, os

REPOS = sys.argv[1]    # repository path, supplied by subversion
TXN = sys.argv[2]      # transaction id of the commit being examined
SVNLOOK = sys.argv[3]  # path to the svnlook binary

PROJECTS = ["our_project_1", "our_project_2", "our_project_3", "our_project_4"]

# Does this commit touch trunk of a project we care about?
care = False
log_stream = os.popen('%s dirs-changed -t "%s" "%s"' % (SVNLOOK, TXN, REPOS))
for line in log_stream:
    for project in PROJECTS:
        if line.lower().find("software_projects/%s/source/trunk" % project) != -1:
            care = True
            break
    if care:
        break
log_stream.close()

# Commits to private branches (or anything else outside trunk) sail through.
if not care:
    sys.exit(0)

# Trunk commits must carry a 'buddy:' line somewhere in the log message.
found = False
log_stream = os.popen('%s log -t "%s" "%s"' % (SVNLOOK, TXN, REPOS))
for line in log_stream:
    if line.lower().startswith("buddy:"):
        found = True
        break
log_stream.close()

if not found:
    sys.exit("All commits need to have been buddied. Syntax:\nbuddy: buddy name")

It is certainly not an airtight policing solution and can be easily gamed, but it is not supposed to be. It is just a nice little nudge in the direction we want to be going.
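
One usage note: subversion itself invokes hooks/pre-commit with only the repository path and the transaction id, so the script above needs a tiny wrapper to supply the svnlook path as its third argument. Here is a sketch of what that wrapper might look like; the install paths are hypothetical, not our actual layout.

#!/usr/bin/python
# hooks/pre-commit (illustrative paths): subversion calls this with the
# repository path and transaction id; exec the buddy script with the
# svnlook location appended as the third argument it expects.
import os, sys

BUDDY_SCRIPT = "/usr/local/bin/buddy_check.py"  # hypothetical install path

os.execv(BUDDY_SCRIPT,
         [BUDDY_SCRIPT, sys.argv[1], sys.argv[2], "/usr/bin/svnlook"])

Since exec replaces the wrapper process outright, the buddy script’s exit status and its message on stderr flow straight back to the committing client unchanged.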

Posted on July 2, 2008 in Quality by adam

I had a bit of an epiphany yesterday, if perhaps a bit late. I’ve been concentrating on the standard places and techniques when approaching the task of improving the overall quality of our product(s). Things like mining data logs, checking unit test coverage, fleshing out quality-driven processes, etc.

But what occurred to me was that our calculators are really just red herrings in terms of priorities. The value that we create for our customers and consumers is not the calculator interface (any dev team could recreate that; some have in fact). The value is in the questions, and their underlying research and equations. Not the software.

Think about it for a second.

  • Google is nothing without the PageRank and AdSense algorithms
  • Idée’s value is far more than a huge database of photos. Their value is in the algorithms they use to do their visual search magic
  • Points.com’s real linchpin in the loyalty program exchange market is not their website or technology stack; it is their deep integration with the program back-end systems, which gives them the competitive advantage
  • None of these are what people think of as the ‘core’ product, but each is absolutely necessary to it.

I often use the analogy of an onion (or tree) when describing how software is built. There is often a core set of classes/code to which features and functionality are added on, layer by layer. (Unlike onions or trees, the core of software tends to also grow over time, but that breaks the analogy so let’s ignore it.) Testing things at the center of the onion generally produces the greatest bang-for-buck, as they are used by the most things. The trick then is identifying the center of the onion.

So the question I have been rolling around in my head now is: have I been testing the right thing? Well… no, I don’t think so (or at least not with an appropriate weighting). And its follow-up: how do I test it? No idea, but I’m about to ask a tonne of questions when I get into the office. And questioning stuff is what testers do best.

Posted on July 1, 2008 in Books by adam

(This was originally written as a review for DDJ, but I keep getting caught in a spam filter there, so reviews will now show up here.)

Since I started reviewing books, I’ve been sent at least one or two a month to read. I have various strategies for managing the queue, but once in a while a book arrives whose title and concept make me push it to the top of the pile immediately. Dorota Huizinga and Adam Kolawa’s new book, Automated Defect Prevention, is one such book. It is unfortunate, then, that I cannot write a glowing review of it.

Rarely is anything as positive or negative as it seems, and this holds true for this book. I thought the size of it was appropriate, as was the choice to publish it as a hardcover. I also liked the layout, with each chapter following a similar structure, which would make looking up information easier if it is being used as a reference. I was also impressed by the design section, where they laid out their Critical Aspects of Architectural Design; this could easily be turned into a checklist to drive the product’s testing efforts. It should also be noted that Dr. Kolawa is the CEO of Parasoft, a large vendor of testing tools. I am always skeptical when someone in that sort of position writes a book relating to their market, for fear of book-scale advertising. Automated Defect Prevention does a remarkable job of being vendor- and technology-neutral throughout; I think Parasoft is mentioned in only one or two places, and those mentions were relevant to the topic being discussed.

Those positives aside, there are three things that prevent me from recommending Automated Defect Prevention.

Automated Defect Prevention is a complete software development methodology that leverages automation for, amongst other things, performing repetitive tasks, organizing project activities, and tracking project status. The problem with their methodology is that it feels like a philosophical mash-up of Deming, CMMi, and Six Sigma. The Six Sigma heritage shows through in every practice having a measurement section, even where it sometimes feels unnatural, when discussing requirements and design for instance. CMMi is brought into the mix through the customization of the best practices, which is the equivalent of tailoring a CMMi practice area.

While Automated Defect Prevention does have moments of agility, the methodology appears to advocate the classic waterfall style of project management, with its inherent problems. Iterative design is mentioned, as is unit testing, but those are undone by the large upfront test design that occurs for both the unit and functional tests. Unit tests tend to work best when they are evolutionary, which is one of the primary benefits the Agile community has given to the development world. They also recommend that modules be owned by specific developers, which removes the notion of shared code ownership/responsibility and creates knowledge silos and clusters within the organization; that can be problematic, in my experience.

The main problem I had with this book, however, is its use of the term Best Practices and its implication that one particular methodology is appropriate for any organization. In software development the existence of Best Practices is a myth. There are certainly plenty of practices that ‘Have Worked Well For Others And May Or May Not Work Well For You’, and Automated Defect Prevention even presents a number that I often recommend to people. The problem is that the target audience (principally project managers and graduate and post-graduate students) is not likely to have the experience to recognize this, and will attempt to implement the presented ideas based solely on their inclusion in the book, without questioning why they are doing it or whether it is appropriate in their context. The number of companies that adopted Six Sigma (it too was a Best Practice) in the late 90s only to very publicly reject it as a mistake (3M most recently) shows the danger in this.

An excellent title is certainly part of the process of making a book a success, but it needs to be backed up by equally excellent content. If you are looking to improve your deliverables in either quality or delivery date through a formal methodology, you would be better served getting a well-reviewed book on each of CMMi, Six Sigma, and Agile and creating a custom methodology designed to work for you, rather than trying to implement someone else’s and ending up customizing all their Best Practices to fit your context. And that is a shame; nothing appeals to the tester portion of me more than the utopian ideal of Automated Defect Prevention.