Posted on December 31, 2007 in Quality by adam

A couple of weeks ago a former student of mine pinged me to let me know she had recently got a new job as a tester and her first task was testing the product’s installer across a number of platforms. This, coupled with the Dr. Project code sprint this weekend (where it seems I will be looking at installation stuff), indicates that it is time I got my thoughts on installation testing recorded (as they occur to me, for the duration of the ride into work).

  • There are basically two types of installer: native and wizard. Native installers are things like RPM for Linux and packages for Solaris. Wizards are native to Windows, but I have worked with them on unixes as well (at one point the Linux Java distribution was a shell script with the entire binary uuencoded in the body). I prefer that products use the native installer type since
    • it means all the system admin tools can access and control the installed resources
    • the necessary libraries are already installed. Nothing is sillier than a machine that will be running headless and only running server processes but needs to have the entire X Window System libraries (and their myriad of bugs / exploits) installed just to load your product
  • Refresh your OS often. This typically means you want to invest in something like ghost if you are on windows and not using virtualization or if you are, then just copy over a clean virtual image.
  • Wizard-style installers for unix are generally Java-based. This means it is often okay to test extensively on one platform and then quickly skim through the other platforms, making sure to check any platform-specific things (placement of startup scripts, for instance)
  • Perhaps more important than the installation process is the uninstallation process. Essentially, everything that was put on the disk by the installer should be removed. This does not include things your application created, though. Things like config files made after installation or log files should be left for manual removal.
  • If there are things left for manual removal, don’t give the user an error message that a directory could not be removed. Instead provide a nice message saying that you removed as much as possible, but there are some files that need to be manually removed.
  • Follow the system conventions for installation. On unix platforms this typically means that things are installed under a single directory and anything else in the filesystem is a symlink. It drives me bonkers when non-system products install things all over the place. On Windows this means putting your application under Program Files/Your Company Name/Your Product, which means you had better be able to handle spaces in path names
  • If you include a necessary library, don’t pollute my environment with it. I can consistently get XP Pro to BSOD on me by running a particular Java application. I’m pretty sure the problem is actually that the Oracle 9i installer puts Java 1.1.8 (yes, 1.1.8) into the path, making it the default system Java. If you are providing something like a specific version of the JRE, spend the time to teach the rest of your products where to find it (like, say, through the registry; that is what it is there for).
  • Make it easy to install dependencies. The ethereal installer prompts you at the end if you want to install the 3rd party library it depends upon, and if you say yes it launches its installer. This is nice. What is not nice is often the unix installation paradigm of digging through a README and then having to hunt the internet for a source you (almost) trust to provide some bit of functionality. If you ship your software on CD this likely means having a ‘3rd party’ directory (or something similar) which contains everything that is needed to get your app running. If it is distributed via the web then links right off of the download page are always nice.
  • Pretty much all installation methods can be automated. With wizard installers this is often achievable through the ‘console’ or ‘silent’ mode. These modes need to be specifically enabled but are a great time saver. Bribe your build person to turn these on for you.
  • Learn about how your platform loads its libraries. DLL Hell could largely have been avoided by including the version of the DLL your product needs in the directory with the rest of your application, as the application’s own directory is first in the list of places checked. Breaking other things on a machine to get your product working is generally considered bad form.
  • Use an absolutely fresh installation of your OS every couple of builds. We were once burned by a dependency on MS Office sneaking into our product at the last moment (which resulted in a BSOD of all things)
  • Test with the current security patches installed on the system for most of your testing, but at some point you have to draw a line in the sand and say there will be no more system configuration changes. This is actually quite the management challenge. At HP we were dependent on the ISAPI functionality of IIS having certain abilities but they changed some of the things we were using in a service pack after we had drawn the line. We redrew it.
  • Use an appropriate permission model. If I want to install something in my home directory that does not use a privileged port and does not pollute the filesystem then I should not have to be root. This means I don’t like seeing permissions checked based solely upon the username (Administrator on windows, root on unix) or by group name. If you need to do something, check if the user has that specific permission. Who knows, perhaps that user has a funky set of permissions which lets it write to /etc without being root.
  • Use your IT person (or friends in similar roles) as oracles. If explaining what your installer does raises their hackles, there is likely a bug
  • Don’t assume users will read the instructions, but make sure they are accurate just in case you have the oddball who reads them trying to follow them.
  • Write your installation documentation at the correct level for your audience. If they are DBAs, you don’t need to explain what a tablespace is for instance.
  • Be consistent with paths if part of an application family. I consider it a bug that I have in Program Files both an HP and a Hewlett-Packard directory. Similarly, there are both an IBM and an ‘IBM fingerprint software’ one.
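The uninstall expectations above can be turned into a quick automated check. Here is a minimal sketch, assuming you can capture the installer’s file manifest at install time and a directory listing after uninstall; the paths and helper names are hypothetical, not from any real product:

```python
def leftover_files(installed_manifest, files_on_disk):
    # Files the installer put down that the uninstaller failed to remove.
    return sorted(set(installed_manifest) & set(files_on_disk))

def user_files_preserved(installed_manifest, files_on_disk):
    # Files created after installation (configs, logs); these should be
    # left behind for manual removal, not deleted by the uninstaller.
    return sorted(set(files_on_disk) - set(installed_manifest))

# Hypothetical data: the manifest recorded at install time, and what is
# actually on disk after running the uninstaller.
manifest = ["/opt/app/bin/app", "/opt/app/lib/core.so"]
on_disk = ["/opt/app/lib/core.so", "/opt/app/conf/site.cfg"]

print(leftover_files(manifest, on_disk))        # a bug: core.so was left behind
print(user_files_preserved(manifest, on_disk))  # fine: site.cfg should survive
```

In practice the manifest would come from the installer’s own log or the package database, and the listing from walking the install directory after uninstall.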
Posted on December 31, 2007 in Quality by adam

As I illustrated by writing about building a shed (here and here) I tend to view everything through the lens of a tester. So when our dryer broke over Christmas it was no great surprise that as I went through the diagnosis and repair process I was mentally comparing things to testing. In no particular order:

  • Use the appropriate tools – I took the skin off my knuckle using a pair of pliers to get a screw off instead of an appropriately sized socket.
  • Know how to use your tools – because my dryer is electric, I needed to check the various fuses, elements and thermostats to find the problem. The problem is of course that I know very little about electricity and while I have a snazzy new multimeter I have no idea how to use it.
  • Consult your oracles – I took a suspect fuse into the appliance parts store along with my multimeter and got a quick lesson on how to use it from the woman at the counter. In this context, I consider her my oracle (and she is a great example I can reuse regarding how oracles do not have to be software based).
  • Fancy does not equal better – I paid an extra 20% on my multimeter to get the digital one over a similarly equipped analog model. Digital is better than analog, right? Well, the one they use at the appliance store is the analog one, which they prefer since you can ‘see the bar move’. Doh.
  • Upon analysis, most things are pretty simple – I had no idea how a dryer worked until Wednesday (never had to think about it before). But once I took off the back cover I realized that it just sucks air in the back over an element and into a spinning drum. That’s all. I’ve found that a lot of the technologies I have worked with over the years are very similar.
  • Instructions are there for a reason – The part I needed was actually 2 parts. I could have replaced only the dead part, but the other one would have killed it again in a couple of weeks. Apparently there is someone in the area to whom the parts woman has sold 3 kits in the last couple of months because he doesn’t follow the instructions to replace both components.
  • The obvious problem is not always the correct one – Since the drum was turning but there was no heat, I figured the heating element was dead. It seemed logical at the time. That wasn’t it though. Had I just bought a new coil I would have spent ~ $70 and still not had a working dryer. (For the record, I initially went to the parts store to do just this, but the guy at the counter sowed enough doubt in my mind about the soundness of the diagnosis that I went back and investigated more.)
  • Credibility counts – A credible tester is one whose bugs are dealt with in a better light than a non-credible one’s. They also get greater freedom to do funner testing than rote script execution. On a marketing basis, credibility also leads to loyalty. I’ve dealt with Oshawa Appliance Parts a couple of times and have no problem recommending them.
  • Rabbit holes are fun, but have an exit strategy – Even though I was trying to fix things myself, I had already executed my plan B, which was to get a professional repair person to look at the dryer. One thing I must constantly watch for is heading down a rabbit hole exploring a problem or implementing a solution when there is someone better suited for the task. By calling someone (whom I later cancelled) I had ensured that there was an end solution.
  • An oracle might not actually be an oracle – While tinkering during the ‘repair person might arrive’ window, a high school friend of my wife dropped by for a visit. I had completely forgotten he was coming, so when I got called up from the basement and she introduced me to someone at the door who wanted to look at the dryer, I figured it was the repair person. Turns out he is just friendly and has fixed a dryer or two in his day.
  • Oracles can be wrong – Remember, oracles are a heuristic, which means they are fallible. When the friend said “I’ve never seen a dryer with only one element” I raised a bit of an eyebrow. When, a few minutes later, he said “I’m not an electrician, but…” I got nervous. As the point above mentions, at this time I thought he was the person I had called, so he was my oracle. Turns out he also misdiagnosed the problem, so I would have spent $12 incorrectly if I had followed his advice.
  • Screenshots are great – I took pictures of where each wire plugged in any time I disconnected something. This is no different than us capturing a screen with something interesting happening on it.

Oh, and in the end I did manage to get it fixed, much to my wife’s amazement. It was the high temperature thermostat that had broken which fried a safety fuse ($45). And it actually works better than ever.

Posted on December 28, 2007 in Quality by adam

As testers, we spend our day questioning whatever it is that we are testing. Things like

  • Is it secure?
  • Is it accessible?
  • Does it meet the requirements?

are all fodder in our quest for information. What I have been finding useful the last little bit are questions of a different sort.

  • Why? – This simple question has a tonne of power. Not only does it force someone to think about why they are doing something, but it forces them to do so in a way that can be articulated. I’ve seen it a bunch of times when a developer is explaining an implementation strategy and suddenly realizes that there is a pretty big mountain in their way. I’ve also crawled my way out of a few rabbit holes by asking it of myself. I suspect that “I don’t know” is even a good answer, though maybe not that great for your product. (You can, if necessary, use this one multiple times during a conversation.)
  • For how long? – This question is useful for uncovering design assumptions. Here is an illustration of a variation of this (more or less) taken from an open msn window:
    • Developer: Since there are only 4 message types, we’re going to use 1 template and fill in things dynamically
    • Me: This requirements document is changing weekly; how long until they add a 5th? Or a 6th?
  • (which leads us to) what would happen if … ? – This can be either an out-of-the-blue question or a follow-up to one of the previous ones. To use the preceding example, I added “When they do add another, what do we have to do? Lots of coding? Little coding? Does the design allow for that situation elegantly? Without having to kick the server?”

The common element between all these questions is that they create greater understanding of not only the product, but the thought processes and rationale of the people creating the product and if used properly, yourself as well.

Posted on December 27, 2007 in Quality by adam

James, Michael and Matt all have a testing “kit” of “challenges” that they carry around that make testers think about testing. Which is all well and good, but is not going to help them much when on a customer site. (Well, it might distract the client while they think about the solution to a particularly difficult problem…).

The tracing paper post got me thinking about what I would put in an actual testing kit that I could take on-site and be effective.

So with 3 days of thought and without actually building or using it, this is what I would put in a testing kit.

  • To start with, you need something to carry everything in. Since I don’t picture myself having to fly to any client sites, it can be slightly larger and does not have to survive the chaos of a luggage system, just the trunk of a car or the space at my feet on the train. Given that, I can use a plastic file box, which you can get for under $20 at an office supply store. The things I like about this one are the flat(ish) lid, which could double as a desk, the snap-close lid, and the handle; the compartments on top are a bonus.
  • tracing paper
  • Whiteboard markers (a couple of colours) and a brush; you would be amazed how many places have non-working markers on the shelves of their whiteboards
  • A digital camera — for capturing whiteboard content or even screenshots
  • A couple different lengths of regular ethernet cable
  • A crossover ethernet cable
  • A wireless router; I know how to configure them securely and don’t like cables. Of course, paranoid IT departments will often nix this one, I suspect
  • A notebook
  • A couple pens and pencils
  • A calculator
  • And of course, there is a laptop to run things on. I’m currently thinking either a nicely equipped MacBook or ThinkPad. The MacBook is unix based and is oh-so-pretty, but the ThinkPad is almost bombproof. Regardless it needs a significant amount of memory to be able to run at least one VM instead of polluting the main install. It would need to have some sort of padded sleeve; either commercially made or you could make yourself one out of an old foam sleeping bag pad if you are poor.
  • Nice headphones. Not the bud style, but cover-the-ear with noise cancellation
  • A laminated set of cheat sheets (or 3)

And to both date myself and use a local cultural reference, the one thing that any testing kit needs but won’t actually go in the box is your brain.

What else would you add, or remove from the kit?

Posted on December 22, 2007 in Quality by adam

Scott Berkun has been doing a couple of web site usability reviews recently. In review 5 he overlays a wireframe on the site he is talking about. It occurs to me that I do not have the Photoshop skills to apply this technique against a live copy of a site (efficiently).

I do however have enough skill to print the site out and trace the elements. You don’t want to block out things on the actual printout because you want to be able to see the blocks in isolation. Which brings us to the tracing paper. What better way to trace something than with a product designed specifically for it?

So the next time you find yourself testing a web layout, think about pulling out the trusty tracing paper and see if the layout makes sense without all the noise of the content.

Posted on December 19, 2007 in Quality by adam

Matt Heusser this last week has kicked off a number of discussions surrounding GUI automation tools. A phrase that came up during the course of it was ‘highly repeatable.’ It also happens to be one that causes some pretty strong emotions in the testers I swim with.

Hard Coded Test Values

Usually the term is met with scorn and disdain and implies hard-coded sets of test data that will remain unchanged forever and ever, and might catch a bug the first time they are run, with the probability decreasing with each run. See this university course’s test case examples for a set of highly reproducible test cases that does indeed get a pretty strong reaction from me. But maybe bug discovery and product exploration is not the intent of the test. Maybe you want to ensure that the bug that really, REALLY annoyed a large Dutch brokerage never, EVER comes back, or if it does you want to know about it right away.

  • Repeatable: Yes
  • Predictable: Yes
  • Useful: Not really. With a bit of thought you can usually use one of the later strategies instead and get better bang for your buck
  • Summary: Pretty close to evil
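For illustration, a pinned regression test of this style might look like the sketch below; the parser and the locale-formatted value are hypothetical stand-ins, not the actual brokerage bug:

```python
# Hypothetical code under test: a parser that once mangled European-style
# amounts ("1.234,56" means 1234.56).
def parse_amount(text):
    return float(text.replace(".", "").replace(",", "."))

def test_regression_hard_coded():
    # The exact input from the original bug report, frozen forever. It will
    # never find a new bug, but it screams the moment this one comes back.
    assert parse_amount("1.234,56") == 1234.56
```

That single hard-coded value is the whole point of the test, and also the whole extent of its usefulness.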

Data Driven

Moving away from hard-coded data sets takes us into the hazy realm of data-driven tests. The actual scripts the tools follow when data driven are just husks that pull the critical test data from an external source. These external sources are often CSV files, which are easy to maintain and easy to have business people add things to, but they still offer a constrained set of test data; if it is not in the file, it is not in the test. This lets you do more with less, which is usually a pretty good state to be in. It also means that your fix-the-broken-scripts cycles are shorter.

  • Repeatable: Yes
  • Predictable: Yes
  • Useful: More so than using Hard Coded Test Values, but still pretty limited
  • Summary: Lawful Evil – “May or may not help someone in need”
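A data-driven husk can be sketched in a few lines; the discount function and the CSV columns here are invented for illustration, not taken from any real suite:

```python
import csv
import io

# Hypothetical code under test.
def discount(order_total, customer_type):
    return round(order_total * 0.10, 2) if customer_type == "vip" else 0.0

# The interesting part lives in the data, not the script. In real life this
# would be a .csv file that business people can edit.
CSV_DATA = """order_total,customer_type,expected_discount
100.00,vip,10.00
100.00,regular,0.00
0.00,vip,0.00
"""

def run_data_driven_tests(csv_text):
    # The husk: loop over rows, feed the inputs in, compare the outputs.
    # If a case is not in the file, it is not in the test.
    failures = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        got = discount(float(row["order_total"]), row["customer_type"])
        if got != float(row["expected_discount"]):
            failures.append(row)
    return failures

print(run_data_driven_tests(CSV_DATA))  # [] when every row passes
```

Adding a test case is now a one-line edit to the data file rather than a script change, which is where the shorter fix-the-broken-scripts cycle comes from.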

Data Driven (on drugs)

To use a random baseball comparison, what would happen if we injected our data-driven tests in the butt with TGH (Test Growth Hormone)? We would still be data driving them, but instead of creating the test data ourselves, we would let the scripts create it. This is the strategy I use most often.

  • On screens which just display information from a data source, teach the script to request a random record
  • On screens which have to insert or manipulate the data, I try to teach the script the rules around the screen I am automating. I then add an element of randomness into the mix causing the script to think of its own data and do its own verification. (I’m not lazy, I’m efficient…)

For this strategy to be truly effective you turn the scripts on and let them run for a pretty decent period of time. There is lots of math (that I don’t claim to know) which will determine / prove how long a run needs to be to find most of the issues through randomness.
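A toy sketch of this let-the-script-invent-the-data approach (the username rule and both implementations below are hypothetical): the script generates its own random input, and an independently written statement of the same business rule acts as the verification.

```python
import random
import re
import string

# Hypothetical rule behind the screen under test: usernames are 3-12
# characters, lowercase letters and digits only, starting with a letter.
def is_valid_username(name):
    if not 3 <= len(name) <= 12:
        return False
    allowed = set(string.ascii_lowercase + string.digits)
    return name[0] in string.ascii_lowercase and set(name) <= allowed

# The oracle: the same rule stated a second, independent way.
ORACLE = re.compile(r"[a-z][a-z0-9]{2,11}")

def soak(runs=10_000, seed=7):
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase + string.digits + "_-"
    for _ in range(runs):
        # Mostly-plausible data, some of it deliberately rule-breaking.
        name = "".join(rng.choice(alphabet) for _ in range(rng.randint(1, 15)))
        assert is_valid_username(name) == bool(ORACLE.fullmatch(name)), name

soak()
```

The seed makes any failure reproducible, and the run count is the knob you turn up when you want the script to soak for longer.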

Model Based Testing

The latest evolution in automation is model-based testing. I don’t claim to know much about MBT, but as I understand it currently, this takes the notion of teaching the scripts where to find the data to the point where the whole thing becomes a giant state machine. Not only does it know about the data, but it also knows about the rules and relationships regarding the data. Repeatable? Again, yes, but in a non-predictable manner. Evil? Not even close. We’re approaching Nirvana actually. Useful for the long term? Absolutely (assuming you can handle changes to the model elegantly).

  • Repeatable: Given enough time, a test will be repeated.
  • Predictable: No
  • Useful: Absolutely
  • Summary: Is that the entry to Nirvana I see on the horizon?
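To sketch the idea (a toy illustration, not how any real MBT tool works): the model below knows the states, the actions, and the expected transitions, and a random walk compares a stand-in system under test against the model after every step.

```python
import random

# Toy system under test: a login session.
class Session:
    def __init__(self):
        self.logged_in = False
    def login(self):
        self.logged_in = True
    def logout(self):
        self.logged_in = False

# The model: (current state, action) -> expected next state.
MODEL = {
    ("out", "login"): "in",
    ("out", "logout"): "out",  # logging out while out is a no-op
    ("in", "login"): "in",
    ("in", "logout"): "out",
}

def random_walk(steps=500, seed=1):
    sut, state = Session(), "out"
    rng = random.Random(seed)
    for _ in range(steps):
        action = rng.choice(["login", "logout"])
        getattr(sut, action)()          # drive the system under test
        state = MODEL[(state, action)]  # advance the model in parallel
        # The system and the model must agree after every action.
        assert sut.logged_in == (state == "in"), (state, action)

random_walk()
```

Because the walk is random, any particular test will eventually repeat, but you cannot predict when, which is exactly the repeatable-yet-unpredictable combination described above.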

So do ‘repeatable’ tests deserve (all) the bad press? Like most things the answer is ‘it depends on the context.’

(Before I hit ‘Publish’ it occurs to me that I did not properly capture the difference between DDoD and MBT. After seeing Harry Robinson at CAST last year I know there is a difference, but I have no idea how to illustrate it currently, and since this is a blog and not a magazine I can post 2/3-baked ideas.)

Posted on December 19, 2007 in Quality by adam

I stumbled on these stickers this morning.

I really need to resist the urge to buy them.

Posted on December 14, 2007 in Quality, Ruby by adam

Kevin Skoglund is (according to his site) a web developer and instructor specializing in Ruby on Rails, PHP, SQL, HTML, and CSS. He is also running a series of articles on ‘Testing in Rails’. Currently he has the introduction and the first 5 of X parts posted. This search will give you all the articles in the series, though I suggest you just subscribe to his RSS feed, as it seems other good things come from him too.

For those too lazy to click one of the above links, here are direct links to the articles out at the time of writing.

(found via Mike Gunderloy)

Posted on December 13, 2007 in Quality by adam

I am a big proponent of cheat sheets over rote memorization (which is one of my complaints about how Google hires). When teaching, I make sure to point out Elisabeth Hendrickson’s excellent Test Heuristics Cheat Sheet to every class. The recommendation goes something along the lines of ‘print this out and pin it over your monitor.’ Given that prelude, the discovery of Dave Child’s list of cheat sheets should not be too shocking. There are two pages of them currently, but here are the most relevant to testing.

Other cheat sheets (updated as I find them):

Posted on December 13, 2007 in Podcasts, Quality by adam

Leonard Maltin, movie critic extraordinaire, was recently on Tech Nation to plug his new books. Erm, I mean to talk about technology as it relates to the movie industry. Here are the choice bits that can be taken in a testing context, plus some interleaved commentary. You can decide which is which. 🙂

  • An old television expression: we can fix it in post
  • A newer movie expression – we can fix it in the DI (the digital intermediate)
  • A common project manager expression – if we ship it now, we can fix it in a patch
  • A parody of Ford’s old slogan is ‘Quality is Job 1.1’ (heard first from Harald)
  • This leads us to: Because we can, we do
  • Essentially the technology films are shot with is unchanged since Edison’s Great Train Robbery. There are a couple of people, though, who are doing new, cutting-edge things. (traditionalists vs. contextualists?)