Posted on February 9, 2009 in Books by adam

One of the reasons I think Agile practices are successful is their handling of the requirements and planning phases of a project. Or, more correctly, doing away with them on a grand scale and dealing with them on a small, manageable scale. Until Gojko Adzic’s Bridging the Communications Gap, I hadn’t had anything more than my own anecdotes and experiences to back that up.

But the how isn’t nearly as important as the why, which is the main point of the book. In Gojko’s view, the reason for most project failures is a breakdown in communication between stakeholders and the development team.

Involving developers and testers from the start, communicating business goal to everyone and removing communication obstacles is the way we can take control of projects, and not leave success or failure to pure chance

Using that as a springboard, he explains how to achieve the communications breakthrough, and the resulting project success, through the creation of Agile Acceptance Tests during Specification Workshops which take place at the beginning of each iteration. Both topics are discussed at length, and I can see teams starting to implement both practices based on the book’s content.
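To make the idea concrete, here is a rough sketch of what an acceptance test capturing a business rule agreed on in a specification workshop might look like. This is not an example from the book (which works through its own examples and tooling); the domain, the rule, and every class name below are invented, and it is written as a plain JUnit 4 test only to keep the sketch self-contained.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical sketch: the rule agreed on in a specification workshop
    // ("orders over 100 get free shipping") recorded as an executable
    // example rather than a prose requirement. All names are invented.
    public class FreeShippingAcceptanceTest {

        // Minimal stand-in for the real production code.
        static class Order {
            private final double total;
            Order(double total) { this.total = total; }
            double shippingCost() { return total > 100.0 ? 0.0 : 5.0; }
        }

        @Test
        public void ordersOverOneHundredShipForFree() {
            assertEquals(0.0, new Order(150.0).shippingCost(), 0.001);
        }

        @Test
        public void ordersAtOrBelowOneHundredPayStandardShipping() {
            assertEquals(5.0, new Order(100.0).shippingCost(), 0.001);
        }
    }

The value is less in the code itself than in the conversation that produces the examples; the test is simply a record of the decision that everyone can run.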

Parallel to this is another thread around communication between stakeholders which I think is even more important than acceptance testing and specification workshops. It is also one that could be applied by groups still in a rigid Waterfall or Wagile environment.

Describing how and what but not why left the success of the project to pure chance.

This was the big take-away for me, especially given what I have been doing at work recently. People following me on Twitter have seen me complain about jQuery and widgets over the last couple of weeks. That is because we had a widget produced for us by an outside company, and they delivered exactly what we asked for (or what I think we asked for, at any rate). However, it does not do it in a way that lets us achieve our long-term goals for the product, so I’ve had to engineer some major infrastructure into the codebase to allow easy per-client customization. I don’t blame them; I blame us, as it was the communications gap that caused it. Had we done a better job explaining the why of the project and its business need, I think the project would have turned out quite differently. (Though I wouldn’t have learned jQuery, which wouldn’t have been so bad now that I think about it.)

Bridging the Communications Gap is not a perfect book though. Parts 1 and 2 were excellent and I found lots of interesting material in them, but Part 3 I blew through in a couple of minutes. If you have a bit of exposure to Agile planning you will likely be able to skim through it as well, but I can see how there might be value in it for someone completely new to Agile. Part 4 explains the pros and cons of acceptance testing from the perspectives of business analysts, testers and developers and seemed a bit long. The content was basically a rephrasing of Parts 1 and 2 aimed at each audience, so one sentence per point would have been enough instead of a couple of paragraphs.

I’ll admit to not having read many books that try to tackle the Requirements Problem, which I now see as the Communications Problem. I’ve been dealing with requirements for almost 11 years now from a testing perspective, and the approaches and concepts presented in Bridging the Communications Gap are the best I have seen for reining in this beast. I’ve recommended it to two people I’ve taught so far, and if you find you are having issues determining or delivering what the customer really wanted, I recommend it to you too.

Posted on January 19, 2009 in Books by adam

The book-of-the-month in the testing community right now seems to be Outliers by Malcolm Gladwell. I’m sure I’ll read it eventually, but I think a more important book was released around the same time. That book is Tribes by Seth Godin.

As you might tell from my recent burst of leadership and marketing related posts, I think that to have the most career success possible you need to step up and assume a leadership role. This does not mean leaving your day-to-day testing position for the meetings and bureaucracy of management, but rather taking on a role of community, group or tribal leadership.

A tribe is a group of people connected to one another, connected to a leader, and connected to an idea.

The key part of that definition is ‘connected to a leader’. People connected to an idea are not enough, and that is where Tribes comes in. Tribes lays out the why, and to a certain degree the how, of tribe leadership. There are no ‘chapters’ per se, but around 115 or so little topics strung together to form a nice narrative, which makes it great commuting reading or for taking a break between tasks. Here is a sample of those headings:

  • Why should you lead? And why now?
  • Fear of failure is overrated
  • Participating Isn’t Leading
  • The difference between things that happen to you and things you do
  • Sheepwalking
  • The elements of leadership
  • Positive deviants

Each of those topics has one or two great points about leadership in it, often backed up with a quick story, often from Seth’s own experience. This humanizes the topic in a way fictional cases cannot.

I liked Tribes. A lot.

I learn something from every book I read, though most of the time it is just ‘stuff’ that I tuck away for later use. Tribes had the wheels of my brain whirling the whole time I was reading it. What tribes (vs. groups) do I belong to? Who are their leaders? Is the leader who I think it is? Am I a tribe leader? Am I supposed to be? I self-identify with the Context-Driven school of testing, though it could quite easily be relabeled as a tribe since it has all three parts of one. I think this blog and its readership might also be a tribe to a certain degree; one whose leadership I inherited through creation. Now to lead it. What other groups do I know that could transform into tribes with some leadership? Is my role to assume that leadership, or to help others do so?

My only complaint with Tribes is the format, as usual. At 21.2 x 13.2 x 1.2 cm and only 150 pages, it is a small book, but they decided to make it a hardcover with a dust jacket. I understand there is a certain prestige to hardcover over softcover, but that no doubt adds to the cost. Amazon is selling it for $11.99 right now, but would likely have it below the magic $10 threshold in a different format.

And of course, a book about tribes wouldn’t be complete without a tribe behind it. This tribe has been, and continues to be, a busy one. First there was the Tribes Casebook, which was followed up by the Tribes Q & A ebook. Both of these nicely complement the book and further illustrate the power of a tribe.

Tribes is a big book packed into a small format and absolutely deserves a place on your bookshelf. It may also have been the most important but overlooked non-testing book for testers of 2008.

Posted on December 8, 2008 in Books by adam

I think either code reviews or static analysis is where the next evolution of testing processes is going to happen. The main problem facing code reviews is that they typically require at least one senior programmer who has seen a lot of code. This is largely because our schools focus on producing people who are more concerned about whether the code works at the time of grading than on how nice the actual code is. Let’s not even think about its long-term maintainability. And of course there are those of us who learned by hacking things together as teenagers; we’re a hopeless bunch. Robert C. Martin’s (Uncle Bob) Clean Code could very well be a large part of the solution.

Here’s a quote from the end of chapter 1 which summarizes things nicely: “Books on art don’t promise to make you an artist. All they can do is give you some tools, techniques, and thought processes that other artists have used.” Uncle Bob and a number of his friends start right at the beginning, Meaningful Names, and build from there through Functions, Comments, Objects, etc. Each subject gets its own chapter, which is then packed with lots of tidbits on the subject, with subtopics getting slightly over a page on average. In my copy it is rare that one of these subtopics does not have at least one thing underlined.

One of the strongest parts of the book is chapters 14 – 16, which are almost like peering over his shoulder as he refactors code. It is one thing to just read a couple of paragraphs on something, but another thing completely to see it put into practice. The next chapter potentially tops those by providing a nice list of Smells and Heuristics that you could take with you into a code review or use to build a coding standard for your organization.

Perhaps it is my paranoid, Hobbesian bent, but I fear this book might also cause just as much damage as it prevents. Too many people, I fear, will take what is presented as gospel since the authors are well known in the programming world, and if they say it then it must be true (in all situations). Having seen a number of codebases, I suspect applying all the heuristics that are shown without the appropriate refactoring would result in a collection of source files that are anything but clean and easy to read.

  • Comments are always failures (page 54) – Well, let’s just remove all the comments and / or stop creating them. Too often the code I encounter would be made clearer with a generous helping of comments, or a major refactoring to make them unnecessary. But please don’t do one without the other.
  • Small! (page 34) – In the discussion around function length he describes a program written by Kent Beck in which every function was just two, or three, or four lines long. “… That’s how short your functions should be.” Yes, it is in the context of a specific example program, but how many people are going to see that and refactor their nine-line function into three three-line ones? (A hedged sketch of this kind of refactoring follows this list.)
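The following is a small, invented sketch of the kind of refactoring those chapters argue for. It is not an example from the book, and every name in it (PayrollReport, Employee, and so on) is hypothetical; the ‘before’ method does three jobs at once, while the ‘after’ version splits the same logic into small, intention-revealing methods.

    import java.util.List;

    // Invented illustration, not from Clean Code: one method doing several
    // things, then the same logic split into small, well-named methods.
    public class PayrollReport {

        // Minimal stand-in type so the sketch is self-contained.
        interface Employee {
            String getName();
            boolean isActive();
            double getHoursWorked();
            double getHourlyRate();
        }

        // Before: filtering, calculating and formatting all in one place.
        String buildReportBefore(List<Employee> employees) {
            StringBuilder sb = new StringBuilder();
            for (Employee e : employees) {
                if (e.isActive() && e.getHoursWorked() > 0) {
                    double pay = e.getHoursWorked() * e.getHourlyRate();
                    sb.append(e.getName()).append(": ").append(pay).append('\n');
                }
            }
            return sb.toString();
        }

        // After: each method does one thing and its name explains it.
        String buildReport(List<Employee> employees) {
            StringBuilder report = new StringBuilder();
            for (Employee employee : employees) {
                if (isPayable(employee)) {
                    report.append(payLineFor(employee));
                }
            }
            return report.toString();
        }

        private boolean isPayable(Employee employee) {
            return employee.isActive() && employee.getHoursWorked() > 0;
        }

        private String payLineFor(Employee employee) {
            return employee.getName() + ": " + grossPay(employee) + "\n";
        }

        private double grossPay(Employee employee) {
            return employee.getHoursWorked() * employee.getHourlyRate();
        }
    }

Whether the three tiny helpers are actually clearer than the original nine lines is exactly the kind of context-dependent judgment the next paragraph is about.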

Later in the book, steps were taken to say that everything shown needs to be taken in the context the reader is operating in, including undoing a refactoring because it made the code less fluid. I’m just not sure that was expressed explicitly enough or often enough.

My only other complaint, and it is a minor one, is the inclusion of sixty pages of source code in the appendix. This sort of thing would be ideal to host online on the book’s website. That alone might have reduced the cost of the book by a couple of dollars and reduced the paper required.

Overall, I think this is a great book which would be at home in a university-level course and in the real world, whether you are creating or evaluating code. This one is definitely going onto my desk at work. You can get your team’s copy of Clean Code: A Handbook of Agile Software Craftsmanship from this link. Just don’t forget your brain when applying the techniques to your own code.

Posted on September 1, 2008 in Books by adam

I received Steve Loughran and Erik Hatcher’s Ant In Action to review well over a year ago, and I actually thought I had posted a review of it before now. Digging through my site, though, it appears not. Which sucks, because this is a great book.

When I was reading Ant In Action originally I was working in a Java shop as the QA / tester. (I have also been the build person at various times for different organizations.) One of the first things I tend to do on a project is extend the build system to include static checks to try and catch the low-hanging fruit. On this particular project that meant wading into the quagmire that was their build system. Without a word of exaggeration, one task required Ant to call Perl, which shelled out to call Ant. After reading this book I could have easily wrestled the build into submission.

The structure of the book was well thought out, which is not surprising as this is the renamed second edition of ‘Java Development with Ant’. A consistent example runs through it, starting with compiling a single class and ending up dealing with a number of the pains one experiences with large-scale enterprise projects. There are any number of pages on how to use Ant in its most basic guises, but it is the more advanced topics where the book shines.

  • Working with big projects – Build chaining is hard, and likely always will be. It does not, however, have to be messy and chaotic.
  • Managing dependencies – A nice introduction to the Ivy dependency management system. I’m quite convinced that if you are not migrating from Ant to Maven you should be incorporating something like Ivy into your build system. This chapter explains how to start managing your dependencies and even includes a section on publishing to a local repository, which is important in a corporate context.
  • Working with XML – The build system I was working with had a large number of servers that could be deployed to. Each had its own set of XML files which differed by only a line or two. The Working with XML chapter would have cleaned this up a lot.

Those are of course in addition to the earlier chapters where you learn about datatypes, parameters, packaging, deployment and other day-to-day items. Also, it is definitely worth mentioning that the first topic really explored in depth is integrating JUnit into your build (Chapter 4), which absolutely appeals to the tester part of me.

Because I can’t just be happy with a well-written book that taught me a lot, here is what I would change for the 3rd edition if they produce one.

  • Include blank pages at the back of the book for note taking
  • While it is great that the integration of the static analysis tool Checkstyle is shown, including FindBugs or PMD would have rounded out the section’s coverage of analysis tools as well
  • Use CruiseControl as the server in the Continuous Integration chapter. They explain why they did not, but that is the server most people reading the book will be using.

Overall, this is a fantastic book that should be on the shelves of any Java shop still using Ant and should be mandatory reading for anyone whose job involves modifying Ant’s build files.

Now all I need is an Ant In Action-esque book for Rake.

Posted on July 1, 2008 in Books by adam

(This was originally written as a review for DDJ but I keep getting caught in a spam filter there so reviews will now show up here)


Since I started reviewing books, I’ve been sent at least one or two a month to read. I have various strategies for managing the queue, but once in a while a book arrives whose title and concept make me push it to the top of the pile immediately. Dorota Huizinga and Adam Kolawa’s new book, Automated Defect Prevention, is one such book. It is unfortunate, then, that I cannot write a glowing review of it.

Rarely is anything as positive or negative as it seems, and that holds true for this book. I thought the size was appropriate, as was the choice to publish it as a hardcover. I liked the layout, with each chapter following a similar structure, which would make looking up information easier if the book is used as a reference. I was also impressed by the design section where they lay out their Critical Aspects of Architectural Design; this could easily be turned into a checklist to drive the product’s testing efforts. It should also be noted that Dr. Kolawa is the CEO of Parasoft, a large vendor of testing tools. I am always skeptical when someone in that sort of position writes a book relating to their market, for fear of book-scale advertising. Automated Defect Prevention does a remarkable job of staying vendor and technology neutral throughout; I think Parasoft is only mentioned in one or two places, and those mentions were relevant to the topic being discussed.

Those positives aside, there are three things that prevent me from recommending Automated Defect Prevention.

Automated Defect Prevention is a complete software development methodology that leverages automation for, amongst other things, handling repetitive tasks, organizing project activities and tracking project status. The problem with their methodology is that it feels like a philosophical mash-up of Deming, CMMi and Six Sigma. The Six Sigma heritage shows through in every practice having a measurement section, even where it feels unnatural; when discussing requirements and design, for instance. CMMi is brought into the mix through the customization of the best practices, which is the equivalent of tailoring a CMMi practice area.

While Automated Defect Prevention does have moments of agility, the methodology appears to advocate the classic waterfall style of project management with its inherent problems. Iterative design is mentioned, as is unit testing, but those are undone by the large upfront test design that occurs for both the unit and functional tests. Unit tests tend to work best when they are evolutionary, which is one of the primary benefits the Agile community has given to the development world. They also recommend that modules be owned by specific developers, which removes the notion of shared code ownership / responsibility and creates knowledge silos and clusters within the organization; in my experience that can be problematic.

The main problem I had with this book, however, is its use of the term Best Practices and its implication that one particular methodology is appropriate for any organization. In software development the existence of Best Practices is a myth. There are certainly plenty of practices that ‘Have Worked Well For Others And May Or May Not Work Well For You’, and Automated Defect Prevention even presents a number that I often recommend to people. The problem is that the target audience (principally project managers and graduate and post-graduate students) is not likely to have the experience to recognize this, and will attempt to implement the presented ideas based solely on their inclusion in the book, without questioning why they are doing it or whether it is appropriate in their context. The number of companies that adopted Six Sigma (it too was a Best Practice) in the late 90s only to very publicly reject it as a mistake (3M most recently) shows the danger in this.

An excellent title is certainly part of making a book a success, but it needs to be backed up by equally excellent content. If you are looking to improve your deliverables in either quality or delivery date through a formal methodology, you would be better served getting a well-reviewed book on each of CMMi, Six Sigma and Agile and creating a custom methodology designed to work for you than trying to implement someone else’s and ending up customizing all their Best Practices to fit your context. And that is a shame; nothing appeals to the tester portion of me more than the utopian ideal of Automated Defect Prevention.

Posted on June 13, 2007 in Books, Quality by adam

I stumbled across the term ‘trained intuition’ in a mailing list post a couple of weeks back, in a reference to Malcolm Gladwell’s book Blink. Having finished it, I’m more or less convinced that any tester who wants to know how and why they do things a certain way should read it.

Blink is about how our unconscious brain picks up information and processes it long (in thinking terms) before our conscious brain can do either. Let’s do some example comparisons:

  • A curator at a museum has his staff ‘hide’ pieces they are considering buying in places where they will jump out at him unexpectedly, so he gets a fresh, sudden view of them. This removes a lot of external influences on his opinion and lets him concentrate on what he is supposed to be concentrating on: the piece. I’ve come to realize that I find the best / greatest quantity of bugs in an application’s interface when I first encounter it rather than when I have been living with it for some time. I’ve also seen obvious bugs that I missed reported by external testers, usually within the first little bit of them having the product. So does this mean that we should test the interface first? Not necessarily, but if you don’t, what Blink tells us to do is write down any ‘gut’ feelings we have about it during our first testing. There is usually something triggering that response. Also, when preparing testing assignments, should leads assign interface testing to someone who was involved in the mock-up creation?
  • Thin-slicing is when ‘our unconscious is able to find patterns and situations based on very narrow slices of experience’. This strikes me as being what we refer to as a ‘sanity test’; test little bits of different parts of the app and form an opinion. Sometimes the tests are automated, so feelings and guts are removed from the equation, but often they are done manually as well. And sometimes the build just doesn’t ‘feel right’.
  • Thin-slicing and trained intuition also come into play together when deciding when you have done enough testing, or at least I think they do for me in the Rapid Software Testing-ish approach I use. Based upon my experience and learning, I reach a point where I am ‘comfortable’ with a feature / release. I know I haven’t found everything, but it meets some internal criteria I have developed. These internal criteria, and the ability to become comfortable, are what I consider my trained intuition, which is fueled by the thin slices I get from testing.
  • Mr. Gladwell spends an entire section discussing the Millennium Challenge, in which the ‘bad guys’ handily beat the might of the US military in a war-games exercise. The drubbing came largely because the ‘rogue leader’ reacted to situations and gave his leaders the ability to make decisions, as compared to the bureaucracy and over-analysis of the US military. Swap ‘testing’ with ‘war’ in war games and I think we have a powerful metaphor on our hands: the use of rapid testing techniques and exploratory testing lets us use our trained intuition and react to information as we discover it in our testing, as compared to making exhaustive risk catalogs and hundreds of test cases that must be run to achieve our testing goals. You can only find a bug in the software by testing it*; sitting in a cube somewhere writing test cases, which will likely be based upon old information by the time you actually get to testing, is not only wasteful but dangerous.
    * I’m talking testing here, not requirements or design review
  • In another section, he talks to some researchers who made a taxonomy of all the possible combinations of muscle movements in the human face (up to 4 muscle groups being involved). The word taxonomy jumped out at me as I had seen Cem Kaner present recently where he discussed risk taxonomies that affect software. His presentation is not online yet (as far as I can tell), but here are two papers his students did on the subject: here and here.
  • One thing that I found interesting, but haven’t mapped to software testing in any meaningful way, was the explanation of why time seems to slow down when you are under stress. I had read previously that this is a documented effect of stress, and have experienced it myself when involved in a car accident (a drunk driver with no license due to a previous DUI ran a stop sign, hit the baby’s door and fled the scene), but it was the first time I had seen the physiology around it explained.

Blink is one of those books full of ‘ah ha!’ moments, constantly aligning with ideas I’ve had for some time now, and in talking to others who have read it, they experienced the same. Understanding why you do things is an important and necessary first step to improving how they are done. And improving things is what being a tester is all about.

You can buy it from Amazon here.


For another review / article on this book, see Michael Bolton’s article Blink . . . or You’ll Miss It

Posted on April 17, 2007 in Books, Quality by adam

My review of is now online over at DDJ.

I also just realized that my review of was published back in September also over at DDJ.

Posted on February 18, 2007 in Books, Quality by adam

The book PMD Applied got my attention via a Slashdot review. While I have not read it (but would happily if I were sent a copy), its arrival is timed well since I mentioned it here as part of generating metrics about your code. It might be worth picking up if you are getting into metrics with your own codebase.

Posted on February 13, 2007 in Books, Quality by adam

was mentioned this weekend in an article in the paper in the context of how global warming is the new religion (with Al Gore and Bono as its figureheads, no less). Ignoring the whole Florida-is-going-to-be-under-water thing for a second, there is a concept explored in it that is very relevant to testing, especially to those on the fringes of the Context Driven school of thought. It is this:

The Prospective Mind: an intellectual stance that is proactive, anticipatory, comfortable with change, and not surprised by surprise.

To me, this nicely encapsulates both exploratory testing and a decent chunk of Agile methodologies.

Posted on February 12, 2007 in Books, Quality, Video by adam

Esther Derby, along with Diana Larsen, recently released a book on . They subsequently took their show on the road to Google as one of their Tech Talks, which is the subject of today’s video.

The video is pretty good and you can tell that they are used to doing similar presentations, which is refreshing compared to presenters who are clearly uncomfortable being in front of people / cameras. From a content perspective, I tuned out somewhat at the 37-minute mark when they started trading stories about favorite and least favorite retrospectives they had participated in. The gist of the presentation seems to be that the Agile kids have all sorts of checks and balances in place to rapidly detect and correct errors in the code, so why not have the same sort of thing for the methods, processes and teams which implement the code. Seems pretty logical.

Anyhow, the framework they propose for running a retrospective consists of five phases

  1. Set the stage
  2. Gather data
  3. Generate insights
  4. Decide what to do
  5. Recap and close

Some things from the presentation extraneous to the core content:

  • They pair-wrote the book, meaning they only wrote something when in the same room and using one keyboard
  • Google employs sign-language interpreters for their presentations (one is sitting to the right of the podium in some shots). I have yet to see that listed as one of Google’s elaborate perks, but it is pretty cool.

Direct link here.

Oh, and my review of Esther’s other books is here.
