Posted on August 21, 2008 in Quality by adam


Scott Barber has recently posted a column about avoiding the center of the universe syndrome. The crux of the article comes towards the end:

When all is said and done, in many organizations the test team is no more at the center of the universe than the Earth’s moon. Think about it. In your team, does the test team (the moon) orbit the development team (the Earth), which is guided by the gravity of the business (the sun), which in turn is weaving a path through the universe of business, finance, and competitive pressures?

So what celestial body are testers, then? I would argue that a comet fits the metaphor well. Why?

  • They are controlled by the sun (the business)
  • They have different orbital periods (some longer than others)
  • They originate from different places (development, tech support, other)
  • They leave stuff (debris technically) behind (hopefully it isn’t bad debris)
  • They can have unexpected effects on things they are near (again, hopefully in a positive way)

Wikipedia has, to no great surprise, a great article on comets for the amateur astronomy buffs.

Posted on August 12, 2008 in CAST2008, Quality by adam

I didn’t attend too many sessions at this year’s CAST; in fact I only have notes on two. (I was in a meeting at the office for another, presented in two others, and was busy being ridiculously nervous during a third.) I know enough people in the community now that conferences like CAST are not really about the sessions anymore, but about the conversations about testing in the hallways and at meals.

One session I did attend, presented by Diane Kelly and Rebecca Sander, was about testing software written by, and for, scientists (something Greg has me keeping an eye on).

  • Scientists don’t usually know the software engineering terminology, and software experts don’t (usually) have Ph.D.s
  • One medical device programmer would actually get scrubbed in for surgeries to see how their device was used. (Yet another reason why I won’t test medical software)
  • Knowledge exchange is critical for successful testing
  • Oracles used by scientists
    • Professional judgement
    • Data based
    • Benchmarks (relative to output of other algorithms)
  • The code is seen as irrelevant to correctness; only the model matters
  • Which leads to scientists being unconcerned with the quality of the code
  • Usability is addressed through documentation rather than through testing aimed at improvement
  • Theory Risks > Usability > Code Risks
  • Testing strategies in one [scientific] domain do not necessarily apply to others
  • Scientists cannot separate the model from the code that produced it. Soooooo, if you test the model you have tested the code.
  • Scientists and developers tend to not trust each other anymore

Kinda makes you wonder about all the science we rely on. You can read the full paper for all their research and findings.

Posted on August 11, 2008 in Quality by adam

I never got around to making an accompanying slide deck for the presentation that never was, so here are the notes that I would have created it from. This is what I originally submitted to the Agile selection committee.

Since the rise of the *Unit frameworks, the number of tools that can be incorporated into a product’s build has increased at a rapid rate. All these tools, be they for style analysis, source-code idiom detection, or bytecode security patterns, serve one purpose: to answer a question about the build.

But are you asking it to answer the right question? And does that answer raise a different question? This presentation will look at the common types of build analysis tools and discuss which questions they should be used to answer. These are often different from the questions they are actually used to answer.

Here are the notes for the individual sections.

  • Cult of the Green Bar
    • Too often, when people see that the IDE or CI server is ‘running green’, it is interpreted as being ‘good to test’. But does it really mean that? Really? It was a trick question…
    • The green-means-go idea is now so engrained in the market it was specifically mentioned in a video the folks at PushToTest did at Google.
    • Belief in the green bar can fail you:
      • when the tests do not cover the conditions that are failing in production
      • when the tests that really need to be run cannot be written in such a simplistic manner
      • when tests are just plain missing
    • So what does the green bar tell you? That all the existing tests ran in the expected manner
  • Coverage
    • Coverage measurement is often used as justification for the Cult of the Green Bar
    • But, but, but, the bar is green AND we have 97.635% coverage. It MUST be ready to ship
    • The term coverage is vague at best. Wikipedia lists five different types of coverage:
      • function
      • statement
      • branch
      • path
      • entry/exit
    • Coverage provided by tests that lack context is just a number
    • The only thing coverage tells you is where you are at this point in time. This in turn lets you ask whether or not any change (or lack of change) is OK
  • Static Analysis
    • Static analysis tools try to add context to your tests
    • but…
      • there might be a bug in the tools
      • the tool(s) might not know your programming paradigms
      • high false positives or a lack of results tuning might cause people to ignore errors they should actually be paying attention to (wheat-to-chaff ratio)
      • you might not care about the errors that the tool returns
      • the tools of course only know what they have been told to know about
      • some tools are pickier than others; are they more or less picky than you?
  • Complexity
    • defined as the number of linearly independent paths through the code
    • common practice is to target functions/methods with high complexity first
    • there is (of course) some debate on whether or not this practice makes any sense or is just another useless number to confuse us
    • start to look at code that has a complexity greater than 35
  • Dependency – Which is better? 1 bad dependency or 3 good ones?
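To make the coverage caveat above concrete, here is a minimal Python sketch (the function is hypothetical): a single test can reach 100% statement coverage while leaving a branch, and a potential bug, completely untested.

```python
def safe_divide(a, b):
    # Hypothetical example: returning 0 for b == 0 may or may not be a bug.
    result = 0
    if b != 0:
        result = a / b
    return result

# This one call executes every statement (the branch is taken), so a
# statement-coverage tool reports 100%:
assert safe_divide(10, 2) == 5

# Yet the b == 0 path was never exercised; branch coverage would sit at
# 50%, and the green bar says nothing about whether 0 is the right answer.
assert safe_divide(10, 0) == 0
```

The same number (“100%”) means different things depending on which of the five coverage types is being measured, which is exactly why coverage without context is just a number.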
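The complexity heuristic can likewise be sketched in Python. This is a rough, illustrative McCabe count (real tools such as radon weigh more constructs): one plus the number of branching nodes in the abstract syntax tree.

```python
import ast

def cyclomatic_complexity(source):
    """Rough McCabe count: 1 + the number of branching constructs.
    A simplification for illustration -- real tools also count boolean
    operators, comprehension conditions, and exception handlers."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.For, ast.While))
                    for node in ast.walk(tree))
    return decisions + 1

code = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "no small factors"
"""
# Two ifs and one for loop: complexity 4, well under a review threshold of 35.
print(cyclomatic_complexity(code))
```

A function scoring above the suggested threshold of 35 has at least 34 independent paths, which is a reasonable first place to point your testing effort.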

Posted on August 9, 2008 in Quality by adam

In addition to marketing, I think the other area testers could learn a lot from is coaching and, to some degree, sports theory and psychology in general. Going through my recent notes, I have marked these items as interesting.

On the July 23 edition of Prime Time Sports there was a discussion around how to fix Basketball Canada. Two quotes stood out.

  • Doug Smith – We need people to go out and teach the coaches how to coach
  • Jack Armstrong – On a day-to-day basis, the people who are coaching these young people around the country are Canadian high-school coaches, Canadian university coaches, Canadian grade-school coaches and those are the people who have to be developed, that have to be brought along in terms of their teaching skills, their expectations, what they expect, what they demand and how they go about it. …
    What are you doing to develop the coaches at every level across the country?

If you substitute ‘instructor’ or ‘teacher’ or ‘professor’ for ‘coach’ and change the context to teaching testing, then I think you have a fair assessment of the main challenges facing the craft right now.

The other thing I have marked is a quote from Roz Savage, who is currently about two-thirds of the way through rowing from San Francisco to Hawaii and doing podcasts via satellite phone with TWiT. In Roz Rows episode 11 she talks about how she doesn’t focus on the larger goal.

Don’t always focus on the goal because sometimes the goals can seem so far away and in fact the only way you will get to the goal is by focusing on what you have to do in the present moment. Just focus on the process.

Having been overwhelmed by ‘insurmountable’ testing problems in the past, I think this is pretty good advice. Break the larger problem into smaller ones and then ignore the big one. Concentrate on the small things that are actually achievable. Got a day and a half to test a release? Overwhelming. Got a day and a half to test six bug fixes? Achievable.

Posted on August 4, 2008 in Quality by adam

To continue the thread that test triage and bug reports could learn from marketing and selling, here is a link I found somewhere (I can’t remember where exactly) about the Theories about Persuasion. For those too lazy to click the original link, here are the guts of the content through the magic of copy-and-paste.

Posted on August 3, 2008 in Podcasts, Quality by adam

I have been interested in how marketing relates to testing before, and this morning I listened to another podcast which was about creating advertising that sticks (using power lines) but could also apply when you are writing bug reports or arguing in a bug meeting.

It is no fluke that Steve Cone was on IT Conversations, as it appears that is a required step these days when pushing a book. The one Steve is promoting is Powerlines: Words That Sell Brands, Grip Fans, and Sometimes Change History.

  • Sound is the strongest sense in terms of memory
  • A visual representation is not as strong as hearing something
  • It’s not the words themselves, it is how you deliver / pause / repeat something for added retention
  • Rhyme, cadence and inflection are important
  • In visual media, the tagline should be the headline
  • Don’t change your taglines as it creates brand confusion
  • The best lines are pleasing, upbeat, true
  • Slogans that come alive have some personality and some attitude
  • ‘I’ is stronger than ‘we’
  • A slogan is a political expression
  • A tagline is a trademarked expression for commercial purposes
  • A motto is a description of an organization or belief
  • A jingle is a slogan or tagline put to music (and is more effective when it is unique rather than licensed work)
  • The best ones are also a product of individual inspiration, not a committee or group
  • Rules for creating powerlines:
    • Say how you are different
    • Personality and attitude, again.
    • Be everywhere with the line / unique selling proposition
    • Claim or promise not easily duplicated (ultimate-er driving machine?)

Posted on July 31, 2008 in Quality by adam

On Monday I sat in on a webinar held by Martin Fowler and Jez Humble, both of ThoughtWorks. Like most of these, it was supposed to be a thinly veiled advertisement for Cruise, their new release management application built on top of CruiseControl; however, they had so many slides (34 for an hour-long talk) that they only got through the technical stuff and missed most of the marketing stuff.

Here are my notes:

  • Release Management is all about making deployment a non-event
  • Cruise, and RM tools in general are a continuing evolution of better people interaction processes
  • Two main principles of Lean are JIT (just in time) and autonomation (Stop the line!)
  • Implementing Lean Software Development by Mary Poppendieck
  • Lean is all about finding constraints in your system and removing them
  • Seven principles of Lean
    1. Eliminate waste
    2. Build quality in
    3. Amplify knowledge
    4. Defer commitment
    5. Deliver fast
    6. Respect people
    7. Optimise the whole
  • It is not done until it is delivering value
  • How long does it take to deploy a single line of code? And is that deployment a repeatable process?
  • Make irreversible decisions at the last responsible moment
  • Make decisions easy to reverse
  • If something hurts, do it more
  • Build your binaries only once
  • Separate configurations from your binaries
  • Automation (of processes) has diminishing returns
  • During polls of the audience there was a large trend towards already having things automated (confirmation bias of the content?)
  • Efferent and afferent code couplings
  • The numbers produced by (performance) metrics might not be useful, but the trends tend to be
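The “build your binaries only once” and “separate configurations from your binaries” notes can be sketched together (all names here are hypothetical): the same artifact, identified by its hash, is promoted through every environment, and only external configuration varies.

```python
import hashlib

def build(source: bytes) -> bytes:
    # The build happens exactly once; the artifact's hash is its identity.
    # (A stand-in for a real compile/package step.)
    return source

def deploy(artifact: bytes, config: dict) -> dict:
    # Deployment never rebuilds: behavior differs between environments
    # only through the externally supplied configuration.
    return {"sha": hashlib.sha256(artifact).hexdigest(),
            "db_url": config["db_url"]}

artifact = build(b"app-v1")
staging = deploy(artifact, {"db_url": "postgres://staging/app"})
production = deploy(artifact, {"db_url": "postgres://prod/app"})

# Identical bits everywhere; only the config differs.
assert staging["sha"] == production["sha"]
assert staging["db_url"] != production["db_url"]
```

Promoting one artifact rather than rebuilding per environment is what makes deployment the repeatable non-event the webinar described.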

And here are the slides

Posted on July 29, 2008 in Quality by adam

Here are Day Two’s notes from TWST4 – Deception and Self-Deception. If you missed them yesterday, here are Day One’s notes.

  • Testing is a skill of the mind, not a software tool that glows red or green with news of the trivial – James Bach
  • The narcotic comfort of the green bar. Note that I didn’t say ‘addictive’; I said ‘narcotic’ – Michael Bolton
  • Giving something a name does not make it that something. Just because it is called a unit test does not mean that it is a unit test
  • Is it really a walkthrough or is it a walkaround? Or a walkover?
  • How does one stop playing deception games with ‘smart’ builds?
  • Are people trying to build good code, or keep their job?
  • We shape our tools; thereafter our tools shape us. – Marshall McLuhan
  • We are tools of our tools – Henry David Thoreau
  • It is not the map that counts, it is the creation of the map
  • Even with a map, following the territory is often more important than following the actual map
  • Know the audience for any sort of communication
  • A strategy cannot be complicated, or else it cannot be called a strategy
  • Are the documented facts correct or are the incorrect facts documented correctly?
  • A plan is only as good as its assumptions and the ability to replan when those change
  • Malicious obedience (or compliance): obeying a boss’s order when they know for a fact it is wrong
  • A good process helps hide poor testers
  • Is a large process designed to seek success or prevent failure?
  • Could it do both?
  • Null Hypothesis
  • Asch experiments
  • They who know the truth are not equal to those who love it, and they who love it are not equal to those who delight in it – Confucius

Posted on July 28, 2008 in Quality by adam

The 4th annual TWST (Toronto Workshop in Software Testing) was held last weekend. This year’s focusing topic was deception and self-deception in software testing. Rather than break down the presentations on a person-by-person basis like I did last year, I have broken down my notes by day.

Here are my notes from Day One

  • Self deception is more common than deliberate deception
  • There is often a difference between the corporate ‘standard’ and the corporate ‘practice’
  • A primary form of deception is placating the target audience
  • An intentional deception; intentional withholding of material information
  • Deception awareness is dependent on project perspective
  • Arguments that are confident and full of emotion are easier to deceive with than those with details and accuracy
  • Deceptive environments can manifest physically. I can’t stomach this
  • Deception, when used as a buffer, could be considered a useful thing
  • However, the biggest lie of them all is ‘I know what is best for you’
  • Testers are not the cops, they are the reporters. Of course, in this metaphor the code is the corpse
  • Not seeing the elephant in the room is different than not acknowledging it
  • Lying is more complicated than telling the truth
  • Visionaries see things differently. Though they may appear to be engaged in self deception they view the world through different filters and so might not be.
  • Things do not have intrinsic purposes. Purpose only occurs when people are involved
  • Self-deception is often just a different take on the observable facts
  • When you exactly hit your numbers week after week, then look for ‘dry labbing’ – Doug Hoffman
  • Are you measuring to control, or measuring to observe?
  • Some things you cannot count. That’s fine, but can you observe it?
  • A great response to ‘I do not know’ is ‘Well, if you did know?’
  • Economic pressure is an interesting factor on whether or not to participate in deception (to keep one’s job for instance)
  • For every degree you purchase, you get an optionally free education

Posted on July 26, 2008 in Quality by adam

Last week at RubyFringe Luke Francl gave a talk called Testing is Overrated. I wasn’t there, but Joey was and blogged about it.

Some highlights of Luke’s position are:

  • Good QA testers are worth their weight in gold.
  • Unit testing finds certain errors; manual testing others; usability testing and code reviews still others.
  • the peer pressure of knowing your code will be scrutinized helps ensure higher quality right off the bat
  • Another huge problem with developer tests is that they won’t tell you if your software sucks
  • no single technique is effective at detecting all defects

All in all, it’s a pretty good primer on why just relying on your build system to tell you whether you can ship or not is a bad idea. I suspect I’ll be using some of this material next month for my talk at Agile.
