Posted on September 28, 2007 in Quality by adam

BAD is one of those really good acronyms you come across now and then. I’m not sure where I first heard it or when, but I’ve been using it for a while.

Something is BAD, or Broken As Designed, when it does not make sense from a consistency, business, or functionality perspective but still meets the requirements criteria. I’ve found that things tend to be BAD when the specifications and/or release decisions are handled by people who do not know the market or, worse, your product.

I’ll try to illustrate the concept with an example. Imagine you are a lobby organization trying to make itself more relevant to a hipper audience. One way you could do this is through a web survey, using an iPod or other cool toy as an incentive. The typical workflow for this is contact information -> survey questions -> enter into draw. Of course, the flow of contact information -> enter into draw -> survey questions also meets that requirement.

Spot the problem?

In the second flow a respondent is entered into the draw without having provided answers to the survey — which is the whole point of the survey in the first place.

Now, the second flow can be mitigated in a few ways to make it less BAD (the first of these is sketched in code after the list).

  • Do not send out the draw entry confirmation email until the respondent has gone through the entire survey flow, rather than upon completion of the contact form.
  • You could hide something in the Rules of the contest stating that you must complete all fields to be eligible for the prize.
  • If you do send out a confirmation email right after the contact information section, then make a completed survey an eligibility requirement.
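
To make the first mitigation concrete, here is a minimal sketch; the function and field names are hypothetical, not taken from the survey in question. It gates both the draw entry and the confirmation email on survey completion rather than on the contact form alone:

    # Hypothetical sketch: enter the respondent into the draw (and send the
    # confirmation email) only once the survey itself is complete.

    def handle_submission(respondent):
        """Decide the next step in the survey workflow for a respondent."""
        if not respondent.get("contact_complete"):
            return "show_contact_form"
        if not respondent.get("survey_complete"):
            # The BAD flow skips this check and enters the draw here anyway.
            return "show_survey_questions"
        enter_into_draw(respondent)
        send_confirmation_email(respondent)  # only now, per the first mitigation
        return "show_thank_you_page"

    def enter_into_draw(respondent):
        print(f"{respondent['email']} entered into the draw")

    def send_confirmation_email(respondent):
        print(f"confirmation sent to {respondent['email']}")

    # A respondent who gave contact details but skipped the survey is not entered.
    print(handle_submission({"email": "a@example.com",
                             "contact_complete": True,
                             "survey_complete": False}))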

Also, one has to remember that BAD is a matter of context. This post was inspired by an actual situation I experienced recently, though I’ll keep the companies involved quiet to prevent mass exploitation of their survey. Each legal jurisdiction has its own rules and regulations governing such contests. In Ontario, some of these determine whether something is a contest (one with a skill-testing question) or a sweepstakes, and they are why you hear things like ‘for no-purchase entry call 1-800-555-BEER.’ I suspect the latter has something to do with its perception of BAD-ness.

Of course, it is better to avoid BAD-ness from the outset.

Posted on September 28, 2007 in Quality by adam

From my limited view into the goings-on in the .NET world, there is a bit of noise being made about the launch of xUnit.NET, which is designed by the same guys who wrote NUnit and is, as one person put it, its anointed successor.

The noise I speak of is such links as

This looks to me much like the transition that I am also starting to see in the Java world from JUnit to TestNG (see also the GTAC 2007 video).

If you are a .NET-literate tester, I suspect there are worse things you could learn than how to drive this tool. Specifically:

  • Running NUnit tests natively in it
  • When to, and more importantly, when not to migrate
  • Migration strategies
  • How existing NUnit patterns apply to the new features
  • Integration with CI servers

These things will certainly assist you in achieving the mission of helping your group produce a better product.

Posted on September 26, 2007 in Quality by adam

Guy Kawasaki did a webinar yesterday afternoon on The Art of Evangelism in which he outlines his vision of what evangelism is and how to do it properly. WebEx has the slides and the recording available.

Here are his evangelism steps.

  1. Make meaning
    • Create a new good thing
    • Perpetuate an existing good thing
    • End a bad thing
  2. Make a mantra
    • 2 or 3 words (like, oh say, Quality through Innovation)
  3. Roll the DICEE
    • D – deep
    • I – intelligent
    • C – complete
    • E – elegant
    • E – emotive
  4. Niche thyself
    • Unique product
    • High value to the customer
  5. Let a hundred flowers blossom
    • Don’t sweat if the people buying your product are not the ones you planned on
    • Instead, ask them why they did buy it and do more of that
  6. Make it personal
  7. Find the true influencers
    • Hint: it is rarely the CXO
  8. Enable (product) test drives
  9. Look for agnostics, not atheists
  10. Provide a slippery slope
    • Make it easy to get sucked into the product
    • This is how linux made inroads
  11. Don’t let the bozos grind you down
    • Losers are bozos
    • So are supposedly smart people

And some random bullets

  • Great products are easy to evangelize, crap ones are not
  • Evangelize internal people just the same as you would external folks
  • Great evangelizers are also great demonstrators

Posted on September 26, 2007 in Quality by adam

A job posting is the first filter a candidate must (convince themselves that they) pass. It is also the first filter the candidate applies to your organization. One thing I’ve picked up on recently in the job descriptions that pass by me is how many of them list specific tools unnecessarily.

For instance,

  • Instead of having ‘linux’ on the job post, put down ‘unix.’ Do you really care if they have never used linux but are a Solaris god?
  • Instead of having ‘cvs’ (or ‘subversion’), list ‘version control systems’
  • Web automation tools instead of QTP
  • etc.

Sure, some positions might require a specific tool, but unless that need is explicit I think it is better to list the generic type or class of tool. After all, if they know what they are doing, and not just the specific product, then they only need to adapt their knowledge to the syntactic nuances of the other app.

In tight markets (like Toronto’s right now) you don’t want to artificially limit the number of people who apply, whether because they do not exactly match the profile you advertise or because they do not want to work with the technology you are using. The second problem is unfortunately one you will encounter more often as the seniority of the role you are hunting for increases.

Posted on September 25, 2007 in Quality by adam

When creating or developing test documentation, one thing you need to consider is the permissiveness policy of the application. Is it Least-Permissive or Most-Permissive?

A Least-Permissive policy is the most secure, but remember that things that are more secure are also less user-friendly. In this sort of situation the fallback position is always deny. If, for instance, you have some sort of multi-step authentication process and any of the steps says deny, then you cannot get access — even if every other step would have allowed it.

A Most-Permissive policy is the opposite (duh). If anything in the processing chain says the action is allowed, or is unsure, then it is allowed.
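
A minimal sketch of the difference, assuming each step in the chain reports "allow", "deny", or None when it is unsure (the function names are mine, not from any particular product):

    # Least-Permissive: the fallback is deny -- every step must explicitly allow.
    def least_permissive(decisions):
        return all(d == "allow" for d in decisions)

    # Most-Permissive: the fallback is allow -- one allow (or any uncertainty) is enough.
    def most_permissive(decisions):
        return any(d in ("allow", None) for d in decisions)

    steps = ["allow", None, "allow"]   # one step in the chain is unsure
    print(least_permissive(steps))     # False: access denied
    print(most_permissive(steps))      # True: access granted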

You might be tempted to use a hybrid approach where some parts of the application follow one policy and other parts the other, but this produces an internal inconsistency and will likely confuse users about what they should be able to do.

Both have their place in the application landscape, but they are key context determiners, so make sure you know which policy your application is trying to follow.

Posted on September 25, 2007 in Quality by adam

Too often when an application is released into production, the stream of information about it comes to an abrupt halt. Or sometimes it starts to hide in the shadows.

When I was at Points, we had the system configured in such a way that applications would email exceptions to product-specific mailboxes. I then had to go through these mailboxes and compile the list of exceptions and the number of duplicates (which took about half a day at the beginning of each month).

In what seemed a throwaway comment in an email thread I was on today, someone mentioned that their application automatically logs any exceptions that happen in production as issues in their bug tracking system. (And apparently, since it is FogBugz, it does magic regex checks against existing stack traces and will increase an occurrence counter instead of logging a duplicate.) This is a seriously cool, if not new, trick, and I think it is the next step in having applications report on their own ongoing health.
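
I have not seen how FogBugz does this, but the general shape is easy to sketch. Below is a hedged illustration, assuming a hypothetical bug-tracker HTTP endpoint: an unhandled-exception hook that sends the stack trace along with a fingerprint so the tracker can bump an occurrence counter instead of filing a duplicate.

    # A rough sketch of the idea, not the FogBugz mechanism itself.
    import hashlib
    import json
    import sys
    import traceback
    import urllib.request

    BUG_TRACKER_URL = "https://bugs.example.com/api/report"   # hypothetical endpoint

    def report_exception(exc_type, exc_value, exc_tb):
        trace = "".join(traceback.format_exception(exc_type, exc_value, exc_tb))
        payload = {
            "title": f"{exc_type.__name__}: {exc_value}",
            "trace": trace,
            # The tracker can bump an occurrence counter when it has seen this hash before.
            "fingerprint": hashlib.sha1(trace.encode()).hexdigest(),
        }
        req = urllib.request.Request(
            BUG_TRACKER_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            pass   # never let error reporting take the application down
        # Fall through to the default handler so the exception is still logged locally.
        sys.__excepthook__(exc_type, exc_value, exc_tb)

    sys.excepthook = report_exception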

The key evolutionary point is that the totals are derived from the bug system automatically; in the email setup, you had to enter them into the bug system anyway once you decided something was going to be addressed. You also replace a separate source of information with a more consolidated view, which is always a good thing.

I’m starting to think that this type of phone-home functionality should be part of any ‘supportability’ review. It is easy to implement on the application side of things if you support your own servers (in a SaaS or pure web application), but things might be a bit hairy if your customers are the ones hosting the server and application and are paranoid about information leakage. The bug system also needs to be able to support this. Windows XP and newer also have a similar feature for when an application crashes.

As mentioned above, apparently FogBugz has it, and I would be surprised if Bugzilla or Jira didn’t have this functionality somewhere as well.

But of course, if there is no commitment from management to address exceptions that are happening in production, then all of the setup necessary to implement this is wasted.

Posted on September 24, 2007 in Quality by adam

Derek Sivers’ 7 reasons I switched back to PHP after 2 years on Rails article is getting a lot of link love, if for no other reason than that it is about Ruby and has a rather inflammatory title. I don’t know enough about either language to comment on the technical merits, but the section quoted below sums up the article for me.

I don’t need to adapt my ways to Rails. I tell PHP exactly what I want to do, the way I want to do it, and it doesn’t complain. I was having to hack-up Rails with all kinds of plugins and mods to get it to be the multi-lingual integration to our existing 95-table database. My new code was made just for me. The most efficient possible code to work with our exact needs.

This brought to mind the state of testing education. Too often schools and technical colleges are churning out freshly minted QTP (or similar) drones who can make QTP jump through some impressive hoops. The problem is that I don’t know many organizations that actually use the licenses they purchased; instead, I am seeing more and more companies explore Selenium and Watir. These new grads see every automation project as a QTP one (and if it is slightly more complicated than the tutorial then they are screwed).

(Entire organizations can fall into this trap too by trying to force an application to do what it was not designed to do. Just because it can be made to jump through the flaming hoop doesn’t mean it should. I would argue that using Selenium for performance testing is one of these questionable hoops.)

Derek is not a programmer by trade (though maybe he could be considered one now) and so does not have the computer science folklore to back him up; instead, he learned by doing, and doing so in PHP. Or you could say he has the HOW to do something, not the WHY.

When I teach my QA classes, or when I am learning a new technique or technology, I strive to teach / learn the WHY of what I am doing. The HOW can then be tweaked to whichever tool I am using at the moment. For example, in my Python class we spend a great deal of time talking about types, loops and ifs. Not because the language is heavy in details in these areas, but because these are common to all languages. The goal is to teach them not only Python, but basic programming as well. If they understand WHY to use a certain construct, the language simply becomes the HOW. At the end of the class we spend a tiny bit of time looking at Perl to illustrate this.
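
To make the WHY/HOW split concrete, here is a tiny example of my own (the scenario is illustrative, not from the course material). The WHY is the decision to visit every result exactly once and act on a subset; the loop plus conditional is merely the HOW, and it translates almost mechanically into Perl or any other language:

    # WHY: we need to visit every result exactly once and act on only a subset,
    # so a loop over the collection with a conditional is the right construct.
    # HOW: the syntax below is Python's; the same construct exists in Perl, C#, etc.

    durations = {"login": 2, "checkout": 5, "search": 1}   # test name -> seconds
    passed = {"login", "search"}

    total = 0
    for name, seconds in durations.items():
        if name in passed:
            total += seconds

    print(f"time spent on passing tests: {total}s")   # 3s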

I recommend everyone start caring more about the WHY as they go through their career. As someone who used to be a WinRunner specialist, knowing why to automate something a certain way is much more useful long term than just knowing how to automate it.

Posted on September 21, 2007 in Quality by adam

A former student of mine recently pinged me asking whether I could recommend a strategy for testing the non-functional elements of a product she is working on. In this case non-functional means things like tab order, field length, field masks, etc. — basically anything that was not listed as a feature in a requirements document. As a bit of background context, the route she was going to take was to create a massive test script which tested all the components in her application. Here is my response.


For this I would likely take it on a screen-by-screen basis and print each screen. On the printout I would scribble all over it the non-functional aspects I want to check and my expected outcomes. Put all of them in a binder and you have your test cases. If they need to be in an electronic format I might then scan them and import them into whatever test management system you are using.

The requirements documents that I currently work from have a mockup of each screen right in them, so I will write down on the picture the size of the db column the field maps to, the password rules, any other validation rules, where links / buttons should take me, etc.

I’m not a big fan of the one-big-script route because it does not localize change. If a developer changes a single screen you are somewhat required to run your massive script, not just steps 43 – 61 (or wherever that screen lines up in your script). Having things on a screen-by-screen basis lets you run just the specific ones.

Posted on September 21, 2007 in Quality by adam

The other night I was walking a class through an application which has a simple user management / credentials scheme. When you create an account, the password and password confirmation fields have the characters masked out, but when we got to the ‘ya! it worked!’ page the password was displayed in the clear for the whole world to see. Instantly people started pointing out ‘a bug! a bug!’ But was it?

A trick question, of course. The answer is: maybe.

I’m going to preface the description of what I said next with the opinion that passwords should always be masked, hashed and transmitted securely. Always. But the discussion was about requirements and how your idea of what the requirements should be might not line up with the actual requirements.

So, in general, a password might not always need to be masked when

  • You warn users that the password might be shown in clear text and tell them to use a unique password that is not used anywhere else
  • Your application has a means to ‘request your password’ and the existing one is transmitted in the clear
  • Your login process occurs over a non-secure connection
  • The password is only being displayed to someone who has identified themselves as the owner of the password

A password should be masked when

  • The place where the password is appearing can be viewed by someone other than the user (such as a log file)
  • The common place of usage is a public space (such as a library)
  • Your application is projecting an image of security

All that said, when I am testing anything that looks like a password, this is what I am looking for (a rough automated check for the first item is sketched after the list):

  • Passwords masked everywhere
  • Password masks are always the same length regardless of the length of the password
  • Any time the password is transmitted, it is over a secure channel
  • Password is hashed when stored in the database
  • Reset password functionality does not send the existing password. Instead, it sends a random one which the user is prompted to change once it is used
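
As a rough, hedged illustration of checking the first item, the snippet below (standard library only; the field-name heuristic is my own assumption) scans a page’s form markup and flags inputs that look like password fields but are not masked:

    # Flag anything that looks like a password input but is not type="password".
    from html.parser import HTMLParser

    class PasswordFieldChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.unmasked = []

        def handle_starttag(self, tag, attrs):
            if tag != "input":
                return
            attrs = dict(attrs)
            name = (attrs.get("name") or attrs.get("id") or "").lower()
            looks_like_password = "pass" in name or "pwd" in name
            if looks_like_password and attrs.get("type", "text").lower() != "password":
                self.unmasked.append(name)

    page = '<form><input name="username"><input name="password" type="text"></form>'
    checker = PasswordFieldChecker()
    checker.feed(page)
    print("Unmasked password fields:", checker.unmasked)   # ['password']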

Posted on September 21, 2007 in Quality by adam

The word that you are most likely to find in my bug reports is consistency (or some variant of it). These are usually in the context of different font sizes / faces being used, common location of page elements, etc.

Pradeep has just put up another blog post which nicely categorizes types of consistency tests / oracles into labeled buckets. He has a bit of a narrative you can read on your own, but here are the buckets (with some descriptions slightly reworded or omitted), in my order of priority.

  1. Consistency within the Product
  2. Consistency with User’s Expectation
  3. Consistency with History
  4. Consistency with Claims – Marketing personnel and management are making certain claims about the product
  5. Consistency with Statutes
  6. Consistency with Similar Product(s)
  7. Consistency with the Image – Compare and contrast the results of the test with the image that the company or stakeholders have been projecting about the product or the company.