Posted on February 26, 2007 in Video by adam

Guido did a test run of his keynote for this year’s PyCon for an audience at his employer. Videotaped, of course. So if you could not attend PyCon but want to see his keynote (sorta) about what is in Python 3.0, this video is for you. I must say, I’m disappointed the GIL isn’t going away, and surprised it wasn’t mentioned, as it comes up on comp.lang.python all the time.
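
As a refresher on what the GIL actually costs, here is a minimal sketch (mine, not anything from the talk) showing that CPU-bound threads in CPython do not run in parallel: two threads take roughly as long as running both loops back to back.

    import threading, time

    # CPU-bound work; under the GIL only one thread executes Python
    # bytecode at a time, so threading buys no speedup here.
    def burn():
        n = 0
        for _ in range(10000000):
            n += 1

    start = time.time()
    threads = [threading.Thread(target=burn) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("two threads: %.2fs (about the same as running them serially)"
          % (time.time() - start))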

Towards the end, they show the audience and there are under two dozen people in attendance. I wonder if that is because he went over time, the Python folks at Google were busy that day, or if that was how many people on the entire campus cared about this topic. Hmmm

Direct link here.

Posted on February 25, 2007 in Quality by adam

I just found out that The Toronto College of Technology has picked up my QA102 – Scripting for Software Testing course and rebadged it ‘Scripting for Test Engineers’*. My original proposal was for a 5 week course, but they have it listed as 4, so I’ll have to tweak the content a bit (COM and parallelism are being axed, which is fine as these are pretty advanced). Basically it will break out to 3 weeks of Python and 1 week of Perl/unix/windows.

There may still be room to sign up. It took me teaching QA101 – Software Testing and Quality Assurance to get bumped to the ‘big’ class. Hopefully it takes less time with this one.

All the course materials will be available here as they are completed. Of course, this means I have to have something coherent ready in 6 days. 🙂

* Yes, I know that we cannot really call ourselves ‘engineers’ in Canada, but I didn’t name the course

Posted on February 22, 2007 in Quality by adam

I was cleaning up my laptop when I stumbled on a presentation from 2004 called Front Line Internet Analytics at Amazon.com. Not exactly sure where I found it originally, but I suspect it was probably from Greg Linden’s blog. There are 31 slides in all, but I would direct your attention to slides 11, 12 and 13. They discuss Amazon’s A/B testing strategy, the problems they had to work around, and their findings.
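
One mechanical detail any A/B setup has to get right (my illustration here, not Amazon’s actual implementation, and all names are made up) is assigning users to variants deterministically, so a returning user always sees the same treatment:

    import hashlib

    def variant(user_id, experiment="checkout_button"):
        # Hash the experiment/user pair so the assignment is stable
        # across visits and independent across experiments.
        digest = hashlib.md5(("%s:%s" % (experiment, user_id)).encode()).hexdigest()
        return "B" if int(digest, 16) % 2 else "A"

    print(variant("user-42"))  # the same user always lands in the same bucket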

For those of you who dislike PDFs as much as I do, here are the slide contents.

Posted on February 18, 2007 in Books, Quality by adam

The book PMD Applied got my attention via a Slashdot review. While I have not read it (though I happily would if someone sent me a copy), its arrival is well timed, since I mentioned PMD here as part of generating metrics about your code. It might be worth picking up if you are getting into metrics with your own codebase.

Posted on February 15, 2007 in Quality, Video by adam

James Lyndsay spoke at the Google Conference on Test Automation on his forecast of the future of testing. It’s a really good video, and a short one (coming in at just over half an hour), worth watching not only for the content but also for the slides. Watch for the Schwarzenegger vs. Bush ones. Here were my takeaways.

  • Testing should be considered a strategic activity
  • There is a split between the highly scripted camp and those in the exploratory one. And the rift is getting bigger
  • The difference between what was expected and what was delivered is about finding the missing value; the difference between what was delivered and what was expected is about identifying risk
  • Testers deal with facts, not opinions
  • Types of bugs in a TDD environment
    • Emergent behaviour
    • Integration issues
  • In the near future, a tester without tools is not going to get far. I’ve been saying this to people in my QA class for years; gotta love when opinions are confirmed. Or at least held by someone else.
  • Direct link here.

Posted on February 14, 2007 in Quality, Video by adam

Rik Farrow gave his Security is Broken talk (slides, references) at Google, and they naturally taped it and put it online. I’m not sure what I was hoping it was about, but it ended up being a discussion of how the security model at the OS level is basically fundamentally wrong. Well worth the view if you are interested in ACLs, Capabilities, etc., but not so worth it from a testing perspective. There were a couple things of note:

  • You cannot trust users to do the right thing, you have to do it for them – so no prompting users with a dialog that asks them to choose the secure or insecure route; just take the secure one
  • Build your applications as small modules that do specific things with the least amount of privilege – use the Postfix philosophy and not the Vista philosophy (installers run with the highest level of privilege — how long until an installer 0wns someone’s system?). A minimal sketch of privilege dropping follows this list
  • Browsers are a big problem because they are designed to run external code
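
To make the least-privilege point concrete, here is a minimal sketch (mine, not Farrow’s) of how a Unix process can permanently drop root after doing the one thing that needed it; the uid/gid values are hypothetical.

    import os

    def drop_privileges(uid=65534, gid=65534):  # e.g. the 'nobody' user
        os.setgid(gid)  # drop the group first, while we still may
        os.setuid(uid)  # then the user; after this, root is gone for good

    # ... bind a privileged port, open a protected file, etc. ...
    drop_privileges()
    # Everything from here on runs with the least privilege it needs.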

Direct link here.

Posted on February 13, 2007 in Books, Quality by adam

The Upside of Down: Catastrophe, Creativity and the Renewal of Civilization by Thomas Homer-Dixon was mentioned this weekend in a newspaper article in the context of how global warming is the new religion (with Al Gore and Bono as its figureheads, no less). Ignoring the whole Florida-is-going-to-be-under-water thing for a second, there is a concept explored in it that is very relevant to testing, especially to those on the fringes of the Context-Driven school of thought. It is this:

The Prospective Mind: an intellectual stance that is proactive, anticipatory, comfortable with change, and not surprised by surprise.

To me, this nicely encapsulates both exploratory testing and a decent chunk of Agile methodologies.

Posted on February 12, 2007 in Quality by adam

I can’t remember where, but I recently stumbled on a paper by James Bach and Patrick J. Schroeder called Pairwise Testing: A Best Practice That Isn’t. It has a really nice breakdown of the problems inherent in combinatorial testing and the “best practices” for tackling them. The quotes are of course necessary because this is by James, who is one of the leaders of the No Best Practices fight. I say that only slightly facetiously, as I have not really bought into that notion completely. At least not yet. Anyways, back to the paper. James and Patrick nicely rip apart pairwise testing as a best practice and quite convincingly put it in the ‘one tool in a toolbox of combinatorial testing techniques’ category.
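
For anyone who has not bumped into pairwise testing before: the idea is to cover every pair of parameter values rather than every full combination. Here is a minimal greedy sketch (my own illustration, not from the paper, with made-up parameters):

    from itertools import combinations, product

    # Hypothetical parameter model: exhaustive testing needs 3 * 2 * 2 = 12 cases.
    params = {
        "os": ["windows", "linux", "mac"],
        "browser": ["ie", "firefox"],
        "locale": ["en", "fr"],
    }
    names = list(params)

    # Every pair of parameter values that must appear together in some test.
    required = set()
    for n1, n2 in combinations(names, 2):
        for v1, v2 in product(params[n1], params[n2]):
            required.add(((n1, v1), (n2, v2)))

    def pairs_in(case):
        # All parameter-value pairs exercised by a single test case.
        return set(combinations(list(zip(names, case)), 2))

    # Greedy: keep taking the candidate case covering the most uncovered pairs.
    suite, candidates = [], list(product(*params.values()))
    while required:
        best = max(candidates, key=lambda c: len(pairs_in(c) & required))
        suite.append(best)
        required -= pairs_in(best)

    print("%d exhaustive cases reduced to %d pairwise cases"
          % (len(candidates), len(suite)))
    for case in suite:
        print(dict(zip(names, case)))

Even on this toy model the suite roughly halves; with real products (dozens of parameters) the reduction is dramatic, which is exactly why the technique is so seductive as a “best practice”.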

What I liked best about the paper however is neither the discussion on pairwise testing nor the best practices debunking. Instead it was how they ended the paper, which I reproduce here.

We recommend that you refuse to follow any technique that you don’t yet understand, except as an experiment to further your education. You cannot be a responsible tester and at the same time do a technique just because some perceived authority says you should, unless that authority is personally taking responsibility for the quality of your work. Instead, relentlessly question the justification for using each and any test technique, and try to imagine how that technique can fail. This is what we have done with pairwise testing.

Don’t follow techniques, own them.

Achieving excellence in testing is therefore a long-term learning process. It’s learning for each of us, personally, and it’s also learning on the scale of the entire craft of testing. We the authors consider pairwise testing to be a valuable tool, but the important part of our story is how we came to be wiser in our understanding of the pairs technique, and came to see its popularity in a new light.

We believe testers have a professional responsibility to subject every technique they use to this sort of critical thinking. Only in that way can our craft take its place among other respectable engineering disciplines.

I especially like the first sentence. Maybe this is why I enjoy getting sucked down technical rabbit holes (much to the annoyance of my boss sometimes…).

Posted on February 12, 2007 in Books, Quality, Video by adam

Esther Derby and Diana Larsen recently released the book Agile Retrospectives: Making Good Teams Great. They subsequently took their show on the road to Google as one of the Tech Talks, and that talk is the subject of today’s video.

The video is pretty good, and you can tell that they are used to doing similar presentations, which is a refreshing change from people who are clearly uncomfortable being in front of people / cameras. From a content perspective, I tuned out somewhat at the 37-minute mark when they started trading stories about the favorite and least favorite retrospectives they had participated in. The gist of the presentation seems to be that the Agile kids have all sorts of checks and balances in place to rapidly detect and correct errors in the code, so why not have the same sort of thing for the methods, processes and teams which implement the code. Seems pretty logical.

Anyhow, the framework they propose for running a retrospective consists of five phases:

  1. Set the stage
  2. Gather data
  3. Generate insights
  4. Decide what to do
  5. Recap and close

Some things from the presentation that are extraneous to the core content:

  • They pair-wrote the book, meaning they only wrote something when in the same room and using one keyboard
  • Google employs sign-language interpreters for their presentations (she is sitting to the right of the podium in some shots). I have yet to see that listed as one of Google’s elaborate perks, but it is pretty cool.

Direct link here.

Oh, and my review of Esther’s other books is here.

Posted on February 9, 2007 in Quality by adam

Michael Bolton recently gave a list, on the Agile Testing list, of things developers could do to make their code more testable. I can usually remember only 2 or 3 items like these when asked, so this is a much better list than I have come up with so far.

  • scriptable interfaces to the product, so that we can drive it more easily with automation
  • logging of activities within the program
  • monitoring of the internals of the application via another window or output over the network
  • simpler setup of the application
  • the ability to change settings or configuration of the application on the fly
  • clearer error/exception messages, including unique identifiers for specific points in the code, or WHICH damn file was not found
  • availability of modules separately for earlier integration-level testing
  • information about how the system is intended to work (ideally in the form of conversation or “live oracles” when that’s the most efficient mode of knowledge transfer)
  • information about what has already been tested (so we don’t repeat someone else’s efforts)
  • access to source code for those of us who can read and interpret it
  • improved readability of the code (thanks to pairing and refactoring)
  • overall simplicity and modularity of the application
  • access to existing ad hoc (in the sense of “purpose-built”) test tools, and help in creating them where needed
  • proximity of testers to developers and other members of the project community

Off the top of my head, I would add

  • a ‘stub mode’ where you can test a module without needing another module to be ready / working (a minimal sketch of this follows the list)
  • information about what has changed since the last code delivery in order to better target testing
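
To illustrate the ‘stub mode’ idea, here is a minimal sketch, with entirely made-up names, of a module written so that a stub can stand in for a dependency that is not ready yet:

    class PaymentStub:
        # Stands in for the real payment module while it is unavailable.
        def charge(self, amount):
            return {"status": "ok", "amount": amount}  # canned success

    class Checkout:
        def __init__(self, payment):
            self.payment = payment  # real module in production, stub under test

        def purchase(self, amount):
            return self.payment.charge(amount)["status"] == "ok"

    # Under test, no real payment module is required:
    assert Checkout(PaymentStub()).purchase(19.99)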

I’m sure there is more.

Oh, and while we’re quoting Michael, here he is in a post from a couple of weeks ago about The Dead Bee Heuristic.

I think what you’re trying to say is that if a bug is not repeatedly demonstrable, then it’s unclear as to whether an attempt at a fix will be successful. We call this the Dead Bee heuristic: when you’re inside a tent, and you think you’ve killed the bee, you want to see the body before you can sleep happily.

I’m starting to think that thinking in heuristics is easier with clear, non-techy examples like this.
