Posted on August 31, 2008 in Quality by adam

I just realized I had a couple of issues of trade rags sitting around. Here are the things that caught my eye in the Software Test & Performance ones. That is, aside from the really bad author photos that seem to be used, and the fact that I could write some of these articles myself (and I don’t work for a tool vendor, nor am I selling consulting services).

STP – June 2008

  • Slipping Into Scrum by Rob Sabourin
  • Covered in Java by Mirko Raner is a nice introduction to coverage (by a Parasoft employee)

STP – July 2008

  • Coverity’s thread analyzer tool
  • 5 Fatal Flaws of Estimating Tests (by L.R.V Ramana)
    • Asking the Wrong Questions
    • Arranging Activities Illogically
    • Failure to Understand the Multiplier Effect
    • Inconsistencies in Measurement Criteria
    • Not Compensating for Risks and Dependencies

STP – August 2008

  • Talend Open Profiler looks seriously cool. Too bad I’m currently in Rails-land and, thanks to ActiveRecord, can’t really use it as well as I otherwise might.
  • There is a decent article on Maven and its role in testing, but I would point you here, here, here, and here instead.
  • Jason Cohen is promoting his book with an article called ‘Think You Have A Team? You Don’t.’ I haven’t read the book yet (it is in the pile) but the article is pretty good. Including choice tidbits like ‘Peer code review not only finds bugs, it creates an environment where developers work together instead of in parallel.’

STP – September 2008

This issue is Windows focused, which is something I haven’t really been involved in for 3 years, so it should be no surprise that there is not a lot I find interesting. If you are a .NET shop, your mileage will be much better.

  • tagline from a Gomez ad: ‘Just because your infrastructure survived the load test doesn’t mean the customer experience did’
  • The ‘Construct A Data Framework For Seamless Testing’ article by Vladimir Belorusets is pretty good for the toolsmiths out there, or anyone else building their own custom framework.
Posted on August 29, 2008 in Quality by adam

I play recreational lacrosse, which is not that unique an activity where I live. What is somewhat unique is that I did not play as a child. Growing up I played baseball from t-ball all the way until I was 16. This means that I do not have the thousands of hours of muscle memory and skill that the other players on my team have.

One such player this fall is the captain of the Brooklin Redmen, the local Major Series team. He is good. Scary good, actually.

I mentioned this to a coworker, and he pointed out that sports hold all sorts of these skill gaps:

  • Never done it before
  • Purely recreational
  • Competitive recreational
  • Semi-professional
  • Professional

Between each of these categories is a significant jump in the skill of the participants and in the level of work needed to reach the next level. In lacrosse I would fall into the ‘Purely recreational’ category. In order to play at the ‘Competitive recreational’ level I would have to start eating (far) healthier, start a running regimen designed for both bursts and longer-duration stamina, and spend a lot more time with a stick in my hand.

I’ve been doing QA / testing for 10 years now and consider myself in the ‘Professional’ category. What strategies do I think worked in my favor in making the jumps between levels?

  • start writing – writing requires thought and thinking is a key skill of testing
  • join a community – being part of a broader community gives you a venue to experiment with thoughts and provides a sounding board for questions
  • start teaching – there is a big difference between ‘knowing’ a concept and having internalized it enough to be able to communicate it effectively to someone who does not have it figured out yet
  • bite off way too much – I learned Unix by formatting my Windows partition, which meant that if I wanted a working computer I needed to get it working (took a bit, but I did). I’ve done the same thing in testing over and over, from my first day at my first job (‘Sure, I can write you a database that can do that…’).

As always: where are you in the progression, and are you willing to do what is necessary to move up? If, of course, you want to, that is.

Posted on August 27, 2008 in Quality by adam

The Danish philosopher Soren Kierkegaard was quoted on the newest Hugh and the Rabbi, and I immediately thought of the unceasing battle against ‘best practices’. Here is the Kierkegaard quote:

Life can only be understood backwards, but it must be lived forward.

While the quote itself is good, the commentary surrounding it (I think from Johnnie Moore; I don’t know their voices) frames the problem even better:

… so many efforts where we see what happened in the past, turn it into a model and project it merrily into the future and don’t pay any attention to the fact that things change, people change, people are different, every context is different.

Seems like a pretty good framing of why Best Practices are myths.

Posted on August 25, 2008 in Quality by adam

JR has a recent posting about assistants which got me thinking about the concept. In a software shop, I have seen three different ‘types’ of assistant.

  1. The office mom/dad – Makes sure the developers have coffee, birthdays are recognized, etc. Very important in start-ups, where there is not a lot of structure / policy, which forces the organization to flow on its own inertia, and where there is a lot of work keeping people (too) busy
  2. The classic assistant – Makes appointments / juggles the lives of the people they are assisting. Our CEO and one of our Directors share one and she is constantly herding them from one speaking engagement to a sales call to a whatever
  3. The unofficial assistant – This one is a bit of a stretch, but much like we are all customers we are also all assistants. For instance, even though there is a direct reporting line between myself and my boss (Hi Mike) I take a number of tasks on that don’t technically fall under my job description (well, they might as I have a pretty vague one) to assist him with his queue. Similarly he will take some things from my queue since he is better connected politically within the organization. Another example would be when I change our product’s administration layout for another department. I don’t work for that department, but I’ll assist them where I can.

If you have ever witnessed an effective type 1 or 2 assistant in action you can appreciate their value. I would argue, though, that the third is of exponentially more import. The trick is to identify who you assist and who assists you so those relationships can be leveraged to your and their benefit.

Posted on August 24, 2008 in BITW, Quality by adam

Different companies react differently (which makes sense) to Bug In The Wild reports. Too often you hear about reports in a security context where companies have responded with a pack of lawyers. EA, however, has recently illustrated how to capitalize brilliantly on a BITW.

I originally found this on Neat-o-rama, where someone in the comments suggested that the original report might have been faked to set up the response, and I can certainly see that happening. But it is too good a concept to let that overshadow it.

Posted on August 21, 2008 in Quality by adam

Scott Barber has posted a recent column about avoiding the center of the universe syndrome. The crux of the article is towards the end:

When all is said and done, in many organizations the test team is no more at the center of the universe than the Earth’s moon. Think about it. In your team, does the test team (the moon) orbit the development team (the Earth), which is guided by the gravity of the business (the sun), which in turn is weaving a path through the universe of business, finance, and competitive pressures?

So what celestial body are testers, then? I would argue that a comet fits the metaphor well. Why?

  • They are controlled by the sun (the business)
  • They have different orbiting periods (some longer than others)
  • They originate from different places (development, tech support, other)
  • They leave stuff (debris technically) behind (hopefully it isn’t bad debris)
  • They can have unexpected results on things they are near (again, hopefully in a positive way)

Wikipedia has, unsurprisingly, a great article on comets for the amateur astronomy buffs.

Posted on August 15, 2008 in Housekeeping by adam

On July 15, 1997 I registered the domain ‘’, which I used regularly for about 10 years. I’ve since migrated everything to ‘’, which is a little more professional and which I think I will use longer-term. I was going to just let the old one lapse, but was approached by someone who wanted to buy it. I figure if Brian Marick can part ways with, which is a much better domain, I can do so as well. It will live on, but I won’t be the one covering it with ads (which I suspect is what is going to happen).

This then is the official ‘change your bookmarks’ for anyone who has the old domain stored somewhere.

Posted on August 12, 2008 in CAST2008, Quality by adam

I didn’t attend too many sessions at this year’s CAST; in fact, I only have notes on two. (I was in a meeting at the office during another, presented in two others, and was busy being ridiculously nervous during a third.) I know enough people in the community now that conferences like CAST are not really about the sessions anymore, but about the conversations about testing in the hallways and at meals.

One session I did attend was about testing software written by, and for, scientists (something Greg has me keeping an eye on), by Diane Kelly and Rebecca Sander.

  • Scientists don’t usually know the software engineering terminology, and software experts don’t (usually) have Ph.D.s
  • One medical device programmer would actually get scrubbed in for surgeries to see how their device was used. (Yet another reason why I won’t test medical software)
  • Knowledge exchange is critical for successful testing
  • Oracles used by scientists
    • Professional judgement
    • Data based
    • Benchmarks (relative to output of other algorithms)
  • The code is considered irrelevant to the correctness of the model
  • Which leads to scientists being unconcerned with the quality of the code
  • Usability is addressed through documentation rather than through testing aimed at improving it
  • Theory Risks > Usability > Code Risks
  • Testing strategies in one [scientific] domain do not necessarily apply to others
  • Scientists cannot separate the model from the code that produced it. Soooooo, if you test the model you have tested the code.
  • Scientists and developers tend to not trust each other anymore

Kinda makes you wonder about all the science we rely on. You can read the full paper for all their research and findings.

Posted on August 11, 2008 in Quality by adam

I never got around to making an accompanying slide deck for the presentation that never was, so here are the notes I would have created it from. This is what I originally submitted to the Agile selection committee.

Since the rise of the *Unit frameworks, the number of tools which can be incorporated into a product’s build has increased at a rapid rate. All these tools, be they for style analysis, source code idiom detection or bytecode security patterns, serve one purpose: to answer a question about the build.

But are you asking it to answer the right question? And does that answer raise a different question? This presentation will look at the common types of build analysis tools and discuss which questions they should be used to answer. This is often different from the questions they are actually used to answer.

Here are the notes for the individual sections.

  • Cult of the Green Bar
    • Too often, when people see that the IDE or CI server is ‘running green’, it is interpreted as meaning ‘good to test’. But does it really mean that? Really? It was a trick question…
    • The green-means-go idea is now so ingrained in the market that it was specifically mentioned in a video the folks at PushToTest did at Google.
    • Belief in the green bar can fail you when:
      • the tests do not test the conditions that are failing in production
      • the tests that really need to be run cannot be run in such a simplistic manner
      • tests are just plain missing
    • So what does the green bar tell you? That the existing tests all ran in the expected manner
  • Coverage
    • Coverage measurement is often used as justification for the Cult of the Green Bar
    • But, but, but, the bar is green AND we have 97.635% coverage. It MUST be ready to ship
    • The term coverage is vague at best; Wikipedia lists five different types of coverage:
      • function
      • statement
      • branch
      • path
      • entry/exit
    • Coverage provided by tests that lack context is just a number
    • The only thing coverage tells you is where you are at this point in time. This in turn lets you ask whether or not any change (or lack of change) is OK
  • Static Analysis
    • Static analysis tools try to add context to your tests
    • but…
      • there might be a bug in the tools
      • the tool(s) might not know your programming paradigms
      • high false positives or a lack of results tuning might cause people to ignore errors they should actually be paying attention to (the wheat-to-chaff ratio)
      • you might not care about the errors that the tool returns
      • the tools of course only know what they have been told to know about
      • some tools are pickier than others; are they more or less picky than you?
  • Complexity
    • defined as the number of linearly independent paths through the code
    • common practice is to target functions/methods with high complexity first
    • there is (of course) some debate on whether or not this practice makes any sense or is just another useless number to confuse us
    • start to look at code that is giving a complexity of > 35
  • Dependency – Which is better? 1 bad dependency or 3 good ones?
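The gap between the coverage types above is easy to show concretely. Here is a minimal sketch in Python (the `discount` function and its test are invented for illustration): one test executes every statement of the function, so statement coverage is 100%, yet only one of the two branches is ever taken.

```python
def discount(price, is_member):
    """Apply a 10% member discount; non-members pay full price."""
    total = price
    if is_member:
        total = price * 0.9
    return total

# This single test executes every line of discount():
# 100% statement coverage, and the bar is green.
assert discount(100, True) == 90.0

# Yet the path where is_member is False was never taken, so branch
# coverage is incomplete. If that path had a bug (say, accidentally
# returning None for non-members), this suite would stay green.
```

Run under a branch-aware tool such as coverage.py in `--branch` mode, the missed else-path shows up as a partial branch even though every statement was hit, which is exactly why a coverage number without context is just a number.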
Posted on August 9, 2008 in Quality by adam

In addition to marketing, I think the other area testers could learn a lot from is coaching and to some degree sports theory / psychology in general. Going through my recent notes, I have marked these as interesting.

On the July 23 edition of Prime Time Sports there was a discussion around how to fix Basketball Canada. Two quotes stood out.

  • Doug Smith – We need people to go out and teach the coaches how to coach
  • Jack Armstrong – On a day-to-day basis, the people who are coaching these young people around the country are Canadian high-school coaches, Canadian university coaches, Canadian grade-school coaches and those are the people who have to be developed, that have to be brought along in terms of their teaching skills, their expectations, what they expect, what they demand and how they go about it. …
    What are you doing to develop the coaches at every level across the country?

If you substitute ‘coach’ with ‘instructor’ or ‘teacher’ or ‘professor’ and change the context to teaching testing then I think you have a fair assessment of the main challenges facing the craft right now.

The other thing I have marked is a quote from Roz Savage, who is currently about 2/3 of the way through rowing from San Francisco to Hawaii and doing podcasts via satellite phone with TWiT. In Roz Rows episode 11 she talks about how she doesn’t focus on the larger goal.

Don’t always focus on the goal, because sometimes the goals can seem so far away, and in fact the only way you will get to the goal is by focusing on what you have to do in the present moment. Just focus on the process.

Having been overwhelmed by ‘insurmountable’ testing problems in the past, I think this is pretty good advice. Break the larger problem into smaller ones and then ignore the big one. Concentrate on the small things that are actually achievable. Got a day and a half to test a release? Overwhelming. Got a day and a half to test 6 bug fixes? Achievable.
