Posted on January 30, 2006 in Quality by adam

With unit testing infiltrating more and more development groups, and with continued growth in both new software and quality expectations, the management of tests can become quite a complicated issue. Do I use Word? Excel? TestDirector? Something custom? But regardless of the path you take, there is a common item you need to consider: just what are you going to call your actual tests?

The two basic rules I follow when naming a test are

  1. The name must be unique
  2. The name must be meaningful

The uniqueness rule is pretty obvious in both rationale and implementation. A test’s usefulness is decreased if test and development cannot have a discussion about it and know for certain that they are talking about the same thing. Duplicate test names with different results also make reporting a pain.

Making test names meaningful is a bit trickier and requires both up-front planning and ongoing maintenance.

Every application can be broken down into smaller parts (modules). If you claim your application cannot, I counter with “you need to rethink your understanding of your application”. According to Watts Humphrey, a 3 or 4 million line application can usually be broken down into 4 or 5 layers of these modules, somewhat like this (the names are mine):

  • application
    • massive module
      • major module
        • minor module
          • feature

At this point, your exercise is to do a quick draft of this type of layout for your application. It does not have to be complete, or completely accurate; a representative approximation is what we are going for here. Note: when you do this for real, it needs to be accurate, so consult with a couple of developers and compare their answers to get an idea of things. If you ask only one, you risk that they do not actually know the structure correctly. Getting multiple answers limits that possibility somewhat.

Done? Great. As you can (hopefully) see, this gives us a nice architectural overview of how the application is logically structured. The next step is to assign a unique abbreviation to each component identified in your tree. I would suggest at least two letters and at most five. The abbreviations should closely represent what they are for, but that might not be possible in huge applications. That is okay, as what we are going for here is uniqueness. Even the oddest name-to-feature mapping will be understandable after some usage.

  • application – app
    • massive module – mass
      • major module – maj
        • minor module – mnr
          • feature – feat

The next step is to make an executive-level guess as to how many tests your largest feature will have, in terms of number of digits. For example, a feature is thought to have more than a thousand tests but fewer than 10000, so the number of digits is 4. (Even if this number is wrong, you can just add another digit, no problem; it just makes sorting a pain, unless you whip together a script which renames all your tests, of course.)

The final step in naming a test is to merge everything that we have done thus far, separated by a _ (I suppose you could use CamelNotation, but I think that is hard to read at the sort of scale large applications present). What we end up with, then, for the first test of a feature is app_mass_maj_mnr_feat_0001, and the final one is app_mass_maj_mnr_feat_9999.

And just like magic, you have a highly scalable (add more modules/features where appropriate) naming convention that uniquely identifies a test, and gives each test a context. This type of convention also helps you make intelligent decisions about what to test when. If a minor module has changed, you might need to test all the features under it, but not those under another module for instance.

Posted on January 26, 2006 in Books, Process by adam

Managing the Software Process by Watts S. Humphrey, Chapter 5 – Managing Software Organizations


Posted on January 25, 2006 in Quality by adam

Recently I participated in a software process assessment for another group in our organization. What I realized afterwards was that, much like the butterfly that flaps its wings in Africa causing a chain of events that results in a hurricane in the Caribbean basin, a problem early in the software process has cascading effects on the later aspects of the process. Get too many problems early in the cycle and you end up swamping your testers in an unending maelstrom. In this case, most of their problems originated around requirements: what was or was not in, and whether they were developed appropriately. Like most issues with any maturing software process, a lot of things conspired together to make this a nightmare, but I think the primary reason was the lack of a strong (or in this case any; they burned through 2 in a year) Product Manager.

A Product Manager is, I think, the most important person in a software development organization, as everything flows from them. Sure, if there were no developers, nothing would get written, but without product management they have no marching orders to go on. Consider them the axle point on a cart wheel: they are connected to all areas of the product, and without them the wheel falls off. Which you don’t really want.

So what are the roles and responsibilities of a product manager?

  • Central point of requirements knowledge: Product management is the final word on all things requirement related, and therefore must be actively involved in any discussion surrounding requirements, including
    • Gathering (external: what competitors are doing, analysts, etc) — product management should be the ones with the closest view of market trends, be the ones who meet with Gartner to get into the Magic Quadrant etc.
    • Gathering (internal: sales, consulting) — product management’s phone number/email should be the only point of contact into a product group that sales, sales engineers, internal consultants etc. have. Under no circumstance should development or testing be contacted directly by the field regarding requirements. If they are, they should redirect the request to product management, because as soon as they agree to some “tiny” requirement, suddenly the whole project planning exercise is undermined and you have entered the glorious realm of feature creep
    • Release Specification — every project/release has certain requirements that must be implemented; these originate from product management
    • Development — at some point a developer who is tasked with implementing a feature is going to have a question similar in form to “Is it supposed to be like this, or like that?”. By definition, the only person with a large enough view of the world is product management, so only they can answer the question. If you omit product management from the requirements development phase you run the risk of spending a tonne of effort on coding only to find out “That’s not what I wanted”.
    • Verification — Part of the testing process is Requirements Verification. Test is going to want to ask the product manager, for each item in the release specification, “Does this do what you wanted?”.
  • Product Direction/Positioning: Product management is also responsible for roadmapping and guiding the product along its life. They also are the ones who (should) know (or have a really good guess at) what the next curve is going to be, and steer the product appropriately
  • Be the face of the product: A product management job is by default a “travel required” position. Product management will need to be at all the major industry shows and analyst conferences representing the product and being its biggest evangelist. They could also be involved in direct customer negotiations if the company is small (few, if any, sales engineers who can do the product demos) or big enough (deals worth massive money where the prospective customer wants high-level product involvement)
  • Vet the bug list: The product manager should always keep an eye on the sort of bugs that are coming in from both internal and external sources. Product management is the ultimate authority on which bugs must be addressed in a product. If a tester feels strongly about a bug, but the development manager disagrees, the tester should be able to make their case to product management, who could agree (and therefore the bug will be addressed) or disagree (obviously then the development manager’s original assessment was correct). This situation is most often going to occur either when a tester is new to the organization and does not know how bugs are weighted yet, or during final release crunch when decisions start to be made according to the distance to release versus quality (the latter being a rant scheduled for some time in the future). Even if a bug is not brought to their attention directly, they definitely need to keep an eye on the bugs that are deferred, as it could be that the development manager has accidentally annihilated a good deal of product value.

I’m not sure how much time should be dedicated to each task, but I know that some people have developed specific percentages. I think that each of these items tends to operate at different points in the cycle, so assigning percentages is way too difficult. What I will say, though, is that realistically the role of product manager is too important to do part time. A product manager should manage exactly one product, and if there are a number of products marketed together as a suite, then there should be a person dedicated to managing the suite (with the product managers reporting to them).

A few traits I think good product managers have are

  • marketing experience: While marketing droids may be the bane of development, it helps greatly to have product management have marketing experience. This experience will assist them in asking the right questions correctly (which is key; ask the right question the wrong way and you will get the wrong answer) and in planning the product’s placement in the market.
  • masters at managing email: As you might guess by now, product managers get swamped with email, especially the good ones. They must be able to deal with this barrage efficiently and effectively. Not doing so is one of those butterfly flaps.
  • credible in the eyes of their team: A product manager who is not credible in the eyes of their team will eventually be cut out of discussions they should be involved in. “I have a requirements question, but the Product Manager never seems to have a clue, so I will ask someone else who is smarter.” While the reasoning may be sound and the product manager may have dropped the ball a couple of times, cutting them out of the loop sends the project into dangerous waters. Likely without a life jacket.

One final bit. I mentioned earlier that a product needs strong product management. By strong, I mean no-nonsense in addition to credible. If sales continues to contact people inappropriately or development continues to make product decisions without product management input, it is up to the product manager to give them a lashing. One that has the full weight of management behind it.

I’ll try to wrap this up with an analogy. Imagine your organization as the Spanish Armada. Management plays the role of Admiral, and tells the armada where they are going (to the New World!). Product Management are the ones Captaining the individual Galleons, telling their crew where to point the ship (approximately south-south-west). If the person manning the wheel is uncertain they are still heading in the right direction, they can ask the Captain, who will correct course as necessary. If a merchant were to ask the wheelman to make a detour via the Canary Islands along the route, they would be keelhauled (any day you can work that into a sentence is a good one, I think). So too should be anyone who does not go to Product Management any time the product is being affected by something.

Posted on January 23, 2006 in Books, Process by adam

Managing the Software Process by Watts S. Humphrey, Chapter 4 – The Initial Process


Posted on January 20, 2006 in Quality by adam

As promised earlier, here is a python script which will go through your code base and print out all your developers’ notes (TODO, FIXME etc.); all of which must be logged in the bug system.

For a bit of irony, I could have added the following TODOs (but didn’t):

  • Pass the starting directory as a parameter rather than hard coding it in, but that’s not a limitation that will haunt most users, as the path will generally be pretty static
  • Parsing of the comment to see if there is a bug number in it, and then checking the status of the bug in the tracking system (to see if the bug has been closed and the note can be removed)
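For reference, a minimal sketch of what such a script looks like. The marker list, file extensions and starting directory here are my assumptions, not the author’s actual choices; adjust all three for your own code base.

```python
import os
import re

# Sketch of a developer-note scanner: walk a source tree and report every
# TODO/FIXME-style comment. Markers and extensions are example assumptions.
MARKERS = re.compile(r"\b(TODO|FIXME|XXX|HACK)\b")
SOURCE_EXTENSIONS = (".py", ".c", ".cpp", ".h", ".java")

def find_developer_notes(root):
    """Yield (path, line number, stripped text) for each marker under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(SOURCE_EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as source:
                for lineno, line in enumerate(source, start=1):
                    if MARKERS.search(line):
                        yield path, lineno, line.strip()

# Usage:
#   for path, lineno, text in find_developer_notes("src"):
#       print("%s:%d: %s" % (path, lineno, text))
```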
Posted on January 19, 2006 in Quality by adam

I came across another item today.

  1. Be able to fully control your machines remotely:
    • Be able to power cycle it from your desk when things go south in a major way: I know there are companies that make managed power bars which have a little embedded web server which you can use to turn on/off your machines remotely. I’m not sure how available these are for say, an 8′ rack full of 1u machines, but I wished I had one when I had to trundle down a couple floors and through 2 security doors just to push a power button. Now, if you are managing your servers in a virtualization model, some of this problem goes away; kill the process managing the virtual server and restart it. Of course, the critical point then becomes the server hosting the virtual machines, but you should not be using it for any testing activities other than being the server for the virtual ones so it should not be all that unstable.
    • Be able to get to the console without a crash cart: It used to be that each data center would have a little cart with a monitor, keyboard and mouse, affectionately known as the crash cart, which you would wheel over to a headless machine when it crashed, plug in and fix the machine. There are now hardware and software solutions for this. One we have (partially) implemented is the Secure Console Server from Think Logical. Now if only the machine I needed to access was actually hooked up to this today… A potential problem that I do not really have an answer for at the moment is how you connect to the console of a linux virtual server when it is running inside a windows host computer. I’m sure if I contacted a vendor they would have some story on how to do it though. Oh, and don’t forget Remote Desktop, which is built into W2k and W2k3 servers.
Posted on January 19, 2006 in Quality by adam


While it is tempting to just leave this entry as a single word, let’s go into the subject a little deeper.

From the Test perspective:
A lot of new testers will find themselves asking this question, either to themselves or to a more seasoned member of the group, fairly often, though it may not be phrased exactly as such. It is most commonly disguised as “I’m not getting the behaviour I expect, but I think it is my fault. Should I go talk to developer X about it?”. In any situation where you get something you don’t expect, there are exactly two possible reasons for it (every other reason can be rolled up into one of these):

  1. There is a bug in the software
  2. There is a bug in the documentation

By their very definition, a bug needs to be logged in either case. Notice that these are pretty broad categories. I have specifically not said “end user documentation” or “customer shipping software”. It could be that your test harness is doing something funky or a design document is hopelessly out of date. A customer should never see either of those things, but they are important parts of the overall Quality of the product, so they must be tracked. The challenge to the reader, then, is to come up with something that might stall test or otherwise cause test to go elsewhere for clarification that is not a bug (that is, cannot be abstracted to the categories I have defined).

Another way of answering the question that started this all off is with another question: Am I having any issue with what I am presently experiencing? This is of course a loaded question as you would not be asking it if you were not.

From the Developer perspective:
The developer perspective is half of the inspiration for this topic being covered instead of what I was going to talk about.

A developer should log a bug every time they see/do one of the following:

  • They are implementing a new feature / bug fixing and notice a yet-undiscovered problem at the code level. Yes, it might take you less time to just fix the bug, but you must log it anyways. Why? Because how else are you going to put the associated bug number in your code check-in note? Every commit to your code base must have an associated bug or feature number. (Rules you should have for your versioning system are a topic unto themselves)
  • As soon as you finish writing in your code a comment that has anything remotely similar to TODO or FIXME in it. I call these ‘developer notes’ as they are notes by developers for developers. Every TODO or FIXME in your codebase must have an associated bug number. I recommend having a standard format for these notes which will allow for easy parsing of your code by code analysis tools. Perhaps something like this would work in your environment.

    /* TODO
    user: adam
    bug: 000001
    description: do something clever instead of horrifically hacky here */

    By having it in both places, not only is product management aware of the issue you have spotted, but the next person in the code will have a bit of a pointer about how something should be implemented and/or its implementation level.  

    Once I remove the bits that are specific to our product from the script I use to see these developer notes, I will put it up for grabs and point a link to it.
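In the meantime, here is a sketch of how an analysis tool might parse the structured note format suggested above, so the notes can be cross-referenced against the bug system. The regular expression assumes notes follow that exact /* TODO … */ layout; it is an illustration, not the author’s script.

```python
import re

# Sketch: extract (user, bug number, description) from structured developer
# notes of the /* TODO user: ... bug: ... description: ... */ form above.
NOTE_PATTERN = re.compile(
    r"/\*\s*(?:TODO|FIXME)\s+"
    r"user:\s*(?P<user>\S+)\s+"
    r"bug:\s*(?P<bug>\d+)\s+"
    r"description:\s*(?P<description>.*?)\s*\*/",
    re.DOTALL,  # descriptions may span lines
)

def parse_notes(source):
    """Return a (user, bug number, description) tuple per structured note."""
    return [
        (m.group("user"), int(m.group("bug")), m.group("description"))
        for m in NOTE_PATTERN.finditer(source)
    ]
```

From here, checking each extracted bug number against the tracking system is a lookup in whatever API your bug system exposes.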

But what about Security?:
The sensitivity of the bug is the other half of why this is being written. This affects both developers and testers. What if you found a bug that had some serious ramifications? Like, say, “the private key of component x is 12 bits long instead of 128”. It’s a super easy fix; just add the missing 8. Should you log it? Absolutely. Would the bad guys compromise your product in minutes if they saw this? You bet. But notice how I did not say you needed to make this bug public. Some organizations have policies around their bug system such that “No security related bugs can be entered in the bug system.” This is dumb on so many levels.

  • So what you are saying is that it is a bug, but cannot be in the bug system? How are we to track it then? Email? Sticky notes? Morse code on an abacus? As soon as you track a bug via a different method, what you have created is a supplementary bug tracking system. But as soon as you do that, you break the rule saying that security bugs cannot go in the bug tracking system. Let the recursion games begin…
  • Security through obscurity has been proven to be ineffective time and time again. Not putting security bugs in the bug tracking system is just another form of it.
  • Not putting the bug in the tracking system also fails the “What if developer X is hit by a bus tonight on the way home from their weekly curling match?” check. If the person who has the record in their head is suddenly removed from the scene, how is the issue going to be resolved?

“But I will look dumb if I log it”:
Who cares? You will look dumber if it comes up later as an issue and you say something along the lines of “Oh ya, I remember seeing that.” Besides, everyone who is new to a product, regardless of how senior they are in their QA career, is going to log dumb bugs at the onset. What makes you a good QA person is learning from your mistakes and not logging eight or nine permutations of the same bug.

This was written in three different stages with three differing levels of distraction, so the continuity might not be nearly as good as I wanted, but here is what to take home:

  • Disk is cheap, so log everything.
  • Stop thinking of your bug system as a “bug” system, and more of an “issue” system (“I have an issue with feature X, and it is blah”, where blah could be a requirement breach or no testing hooks)
Posted on January 17, 2006 in Books, Process by adam

Managing the Software Process by Watts S. Humphrey, Chapter 3 – Software Process Assessment


Posted on January 17, 2006 in Quality by adam

Recently I was asked what I would have done differently ramping up the QA group all over again. One of the points I mentioned was surrounding our hardware process. The following is a further explanation of my points.

  1. Get in bed with a hardware vendor: If you are at the point where you are ramping up a QA environment, you are also at the point where you can no longer really afford to just get generic brand machines from the computer shop around the corner. You need at this point to commit to a supplier of hardware. By this I mean IBM, Dell or HP for Intel/AMD based machines. They all have organizations facing small/medium sized businesses. Sure, the upfront costs might be slightly higher, but in the long run doing this now is much easier than later. In addition to the Trust Me reason just given, the following apply.
    • You get a level of guarantee that the same configuration will be available for a set length of time. Often at small stores, the available systems are based upon what components they have on hand at the time. With large organizations, models have definitive life spans.
    • Support contracts and SLAs will be much tighter with the large organizations who have the full support infrastructure. Do you have someone you can call at your corner computer supplier at 2AM during release crunch and your RAID controller just went *POP*?
    • You can get much friendlier form factor machines from the big vendors than consumer / small business oriented stores. 1U machines should be on your shopping list.
  2. Resist fun naming conventions: While it certainly is fun to name machines after Star Wars, Star Trek, Disney, species of bears, natural disasters, species of birds, video games etc., these schemes do not scale (how many bear species are there that people will recognize as such?), do not resonate with all members of the group and, due to their informality, are subject to change. We have used all the listed examples in our QA lab in the last 6 years. Instead you should use un-fun but descriptive names. An example is one of my newer machines: sap2k301. Let’s break it down into its components.
    • sa: the hardware pool that this machine belongs to (we have multiple products, each with its own hardware pool; largely for budgeting purposes)
    • p: the purpose pool, in this case performance testing
    • 2k3: the OS, Windows 2003
    • 01: the machine identifier. This is the first machine with this OS, for this purpose, for this product. If you are someone like Google you might want more digits in your identifier; this example maxes out at 99, which for our purposes is more than enough.
  3. Virtualize: Rather than buying lots of smaller machines and running them at 10% utilization most of the time, buy as large a machine as your budget will allow and run virtual servers on it. Not only do you use less space in your data center (which can get quite cramped and hot), but you make better use of the investment by running it at, say, 80% load. It just happens that that 80% is really 7 server instances. Also, if you have done your deployment correctly, you could have your application completely destroy a critical OS component (say the windows registry) and your recovery is simply to boot from the un-screwed image. Similarly, it lets you test on an install that has not had a dozen install/uninstalls completed on it. We have overlooked some nasty problems in the past by just recycling the same machine due to the time it would take to recreate it.
  4. Figure out an update strategy: You need to figure out a way to apply OS and application patches to whatever system you decide on to manage machine images. There is no greater time sink than having to go out and grab a year’s worth of service packs and hot fixes and apply them to a newly brought up image/install. This should be automated as much as possible, and all the large OS vendors provide products meant for enterprise IT management that are ideally suited for this purpose. This also ensures that the machines you are testing on actually have the appropriate patch levels across the board. Nothing is more annoying than completing a test cycle only to find that you had the wrong version of some library or dll and basically have to write off a week of your time.
  5. Automate OS installation/configuration: Every major OS has some way of installing itself unattended for large scale deployments. These should be implemented for your hardware to save the 40 minutes per machine it takes to install an OS. But it’s only 40 minutes, you say. What if you had a skid of machines delivered… Having this automated also makes you less reliant on the availability (and whims) of local IT.
  6. Use test machines ONLY for testing: This seems kind of obvious, but it is a trap that less mature companies often fall into. Only test on test-specific machines. This means that a tester will at minimum always be using two machines. The first is the machine on which they access their mail, the bug system etc. The second is the machine the tests are being conducted on. The first machine is used to access the second via some remote terminal mechanism (Terminal Services on windows, exporting DISPLAY on unix). Ideally too, the access machine should be a laptop, if for no other reason than that laptops are portable.
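The naming scheme is mechanical enough to enforce with a script. Below is a minimal sketch; the pool, purpose and OS lookup tables are illustrative assumptions on my part, not a real inventory.

```python
# Sketch of the sap2k301-style machine naming scheme described above.
# The lookup tables are example assumptions; fill in your own lab's values.
PRODUCT_POOLS = {"sa": "product A hardware pool"}
PURPOSES = {"p": "performance", "f": "functional"}
OS_CODES = {"2k": "Windows 2000", "2k3": "Windows 2003"}

def machine_name(pool, purpose, os_code, identifier):
    """Compose a name like sap2k301 from its four components."""
    if pool not in PRODUCT_POOLS:
        raise ValueError("unknown hardware pool: %s" % pool)
    if purpose not in PURPOSES:
        raise ValueError("unknown purpose pool: %s" % purpose)
    if os_code not in OS_CODES:
        raise ValueError("unknown OS code: %s" % os_code)
    return "%s%s%s%02d" % (pool, purpose, os_code, identifier)  # ids 01..99
```

Rejecting unknown components is what keeps the scheme formal rather than informal, which was the failure mode of the fun names.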

Those are the ones I can think of off the top of head, but if I think of any others, I’ll add them as a new post.

Posted on January 11, 2006 in Quality by adam

For this post, let’s imagine a company uses four levels of bug severity.

  • Critical – Bug renders product/feature unusable and testing cannot proceed on one or more features
  • Serious – As critical, but a work around exists allowing for restored functionality or testability
  • Medium – Bug affects functionality, but only in a limited portion of a feature; the remainder of the feature is usable/testable
  • Low – Bug does not affect functionality (cosmetic items fit in here)

When a person logs a bug in their product, they assign it a severity based upon some set of criteria. Usually it is one of the following.

  • A defined set of requirements – ex: It is critical if it brings either the machine or testing to a halt
  • Area expertise – ex: a security guru could recognize the ramifications of something seemingly innocent
  • Product expertise – ex: a tester has been with a particular product for 6 years and knows pretty much all the effects a bug will have on the product
  • How much it frustrates/annoys them – ex: A small bug might get logged at a higher severity if the person is frustrated

Ideally you would want to have a combination of the first three and as little of the last as possible.

From there, someone will assign it a priority. This is the order in which the person assigned to the bug is to go through their bug box. Deal with all higher level bugs before moving to the next level. Notice that the assignee does not address their work based upon the severity. This leads us to

Adam’s First Law of Bug Levels: The level at which you submit a bug is irrelevant.

This is almost heretical, but makes sense if you think about it. The person (or group of persons) responsible for setting the priority needs to understand the product in order to set it. They therefore use their knowledge of the affected system and their own opinion to give it a weighting, not the severity it was logged at. It could be argued that using the severity as an input to that decision could skew it incorrectly. ex: “Well, Adam says it is a high and he is an outstanding tester” could taint the weighting if the bug is really a medium but I have some agenda I am trying to push, or I could just be cranky.

Over time, a pattern will evolve regarding which bugs are addressed and which are not, as biases become recognized in prioritization, staffing skills become known and the bug system is observed. Once this pattern is known, the next Law of Bug Levels starts to apply.

Adam’s Second Law of Bug Levels: There are really only two severities for bugs; Fix or Don’t Fix (this could also be “I need this fixed” and “I can live with it” but “Fix” and “Don’t Fix” are catchier)

This one can send managers into fits. Critical and Serious are by very definition in the Fix category, so why bother having to decide whether something is a Critical or a Serious (it is not often cut-and-dried)? Medium and Low have a bit of movement as to which severity they get, but remember that this Law comes into effect over time and after a precedent is created, so the choice is likely to be the correct one.

Much like severities, most places have at least 3 priorities: High, Medium and Low (or some variation thereof). You might see where I am going with this.

Adam’s Third Law of Bug Levels: There are really only three priorities for bugs: Fix Now, Fix Later and Never Fix

What is the difference between two bugs assigned to you at the same priority? Who knows. Ultimately you will need to choose one to Fix Now, and leave the other for Later. A common argument against this is that you cannot easily load developers up with bugs to fix. Great! Giving your developers a tonne of bugs in their queue is just distracting. At most give them 3, but ideally you should give them 2: a Fix Now and a Fix Later. If something that needs to be addressed quickly comes in after the time you last assigned them a bug, remove their Fix Later and give them the new one as a Fix Later (pulling them off what they are fixing now is not worth the loss of productivity that results in most cases). This way they always know what they are working on at present, and what they are working on next. It is irrelevant what they are slated to work on 6th or 7th, as that is likely to change.
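The two-slot discipline above can be sketched in a few lines. The class and bug names here are illustrative assumptions; this shows the queueing rule, not an integration with any real bug tracker.

```python
# Sketch of the two-slot "Fix Now / Fix Later" queue described above.
class DeveloperQueue:
    def __init__(self):
        self.fix_now = None    # the bug being worked on right now
        self.fix_later = None  # the next bug up

    def assign(self, bug):
        """Fill Fix Now first, then Fix Later. A new bug never preempts
        Fix Now; it displaces Fix Later, which returns to the triage pile."""
        if self.fix_now is None:
            self.fix_now = bug
            return None
        displaced = self.fix_later
        self.fix_later = bug
        return displaced

    def complete_current(self):
        """Finish the current bug; Fix Later (if any) becomes Fix Now."""
        done = self.fix_now
        self.fix_now, self.fix_later = self.fix_later, None
        return done
```

The point of returning the displaced bug is that it goes back to whoever sets priorities, not silently into a long per-developer backlog.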
