Posted on February 28, 2006 in Quality by adam

I’ve been carrying a bit from the Globe and Mail for a number of months now. The bit is a summary of Jakob Nielsen’s 2005 reader survey on poor website design. Here is a summary of the summary (with my commentary).

  • Legibility – Avoid small fonts, fonts that cannot be resized and low contrast between the text and background colours. Resizing of fonts is going to be a big issue/challenge for web design in the coming years. As boomers age, there is going to be a massive number of people whose eyesight is starting to worsen. If they cannot read your content clearly, they are going to get it from another outlet. Unfortunately, a lot of web design is based around ‘text — of size x — goes here, image goes here’. If size is really x * 5, your time making your page pretty has been wasted.
  • Odd Links – Make links obvious, using coloured, underlined text. Differentiate between visited and unvisited links. This method is the de facto standard of displaying links. Yes, you can tweak your page to display without underlines, and with the same colour as the text, but why? If presentation is really an issue, change the text colour to something that complements your existing colour palette, but leave the underline.
  • Flash – “Most people equate animated content with useless content.” Blame it on the bubble, but this is so true. Not only is there this perception, but there is also the issue of having the right version of the right plug-in, which can be a nightmare unto itself. Also, how many people see a Flash splash page load and immediately look for the ‘skip intro’ link? I do every time — unless of course I am going to the site for the Flash splash; some web development companies have very pretty splashes.
  • Content not web friendly – Copy should be short, scannable and to the point, not just marketing fluff. But if you insist on having the marketing fluff, at least include a boilerplate or summary at the beginning so readers can skip the fluff if they want but still derive value from it.
  • Bad site search – Invest in better software if your searches do not meet your users’ needs. I am pretty sure it costs very little, if anything, to integrate Google or Yahoo into your website, so the investment is likely just going to be the time to make sure that all pages are spiderable and correctly added to the index. Now, this strategy doesn’t really apply to intranet-type scenarios where having all your corporate secrets in the main index would not be a good thing, but that’s where products like the Google Search Appliance come in. Oh, and they look really nice too.
  • Browser incompatibility – Firefox, Opera, Safari, blah, blah, blah. From a test perspective this can be a big issue as it is yet another permutation to the massive matrix of combinations you should test. Here is my formula. Take the current browser market shares for the top couple and use that as a base weighting for testing. From there, skew the percentages based upon your target market. For example, if you are a site that caters to open source or linux, you will want to up your Firefox percentages. Similarly Mac sites will want more Safari coverage. Oh for the simple days between Browser Wars I and Browser Wars II.
  • Cumbersome forms – Try not to use forms, but if you must, make them as minimal as possible. Personally I think forms as a means of collecting data are nearing the end of their useful life. One of the first things we taught our daughter about the internet is to always lie when they ask for information. Not only is this safer, it also insulates her from spam/data harvesting. If you cannot trust the information that you get from a form, the form is useless and should be removed. If you must have a form though, use the tabindex attribute on the input tag to bounce you through only the mandatory fields. I’m lazy. I don’t want to have to navigate through 100 demographics fields that I’m not going to fill in. Nor do I want to move my hand all the way over to my mouse to skip by them all.
  • No contact information – Giving a physical address for your organization gives it an instant boost in credibility. If you have a domain, whois will give you the business contact information, which has an address. If you have a phone number, Google or any number of other reverse phone lookups will give you the address. If my mom knows how to do this, I’m guessing a large majority of the internet population does, or will soon. Even if you don’t put up an address because you are working out of your living room or parents’ basement, call it Suite 200 or something. The added benefit of this is that when you do move into a proper office you can announce it with great fanfare and turn it into a mini marketing event.
  • Frozen layouts – Pages should resize according to how they are being displayed, be it bigger or smaller screen real estate or printing on A4 vs. 8.5×11 paper. HP’s main web site is the perfect example: regardless of how you adjust your browser size, the content will always be the same size and tucked up in the top left corner. As the average monitor size increases, so too will the average screen resolution. Make your page dynamic, or at least dynamic enough to not fill the screen with emptiness.
  • Inadequate photo enlargements – Let users see close-ups of your product. This is of course starting to be thwarted by browsers resizing images to fit their window size automagically for you. So even if you provide a monstrously sized close-up of your product, the user might still have to hover the mouse over the bottom right of the image, wait for the ‘I really did want a large picture, thank-you’ button and click it.

I have been inadvertently following a list very similar to this any time someone asks me to test their website. Good to know I wasn’t too far off base on this.

Posted on February 26, 2006 in Quality by adam

One of the traits of applications that call themselves Web 2.0 is the availability of an API which allows you to integrate them into your own service (generally called a mashup). Given the public-facing nature of the API, it needs to be rock-solid from a testing perspective, and the easiest way to get that comfort is to do what is known as eating your own dog food. In this case it means “using the documentation that is available with the API, make your own mashup for testing purposes”.

Most Web 2.0 APIs work by sending something specific to a specific URL then dealing with the returned XML appropriately. This means that the API is magically language independent. For this reason, I would recommend testing it in a language other than the one the application was originally written in, and in a language that is fast to write in (Python, Ruby, Perl).

I had hoped to illustrate how easy it is to test an API from a local company (such as Nuvvo or BubbleShare) but neither has released an API to their services. Instead I used FeedBurner.

The script I whipped up in an hour (below the cut, or here) connects to the FeedBurner server, collects the stats for a feed and displays them. It has had very little architectural thought, and limited testing, but from an example perspective I think it is sufficient to get the point across. Clearly, in a full-bore test scenario you would want more features, like the rest of the API implemented, and checks that the back-end servers the API is manipulating have been manipulated correctly.

One other side effect of implementing the API testing in Python (or again, Perl or Ruby) is that you can instantly release it as a module to the associated community and have native support for your product in that language, which increases your total market, which increases your revenue, which increases…

And of course, if you want to contract me to exercise your API in Python (with the aforementioned ‘more thought’), email me.

import httplib, base64, xml.dom.minidom

class FeedBurner(object):
    def __init__(self, id, password):
        self.id = id
        self.password = password
        self.fb = None
        self.headers = {}
        self.base_api = ""  # set this to the FeedBurner API host

    def connect_http(self):
        """basic authentication over http"""
        self.access_method = "http"
        tmp = "%s:%s" % (self.id, self.password)
        self.headers["Authorization"] = "Basic %s" % (base64.encodestring(tmp).strip())
        if not self.fb:
            self.fb = httplib.HTTPConnection(self.base_api)

    def connect_https(self):
        """basic authentication over https"""
        self.access_method = "https"
        tmp = "%s:%s" % (self.id, self.password)
        self.headers["Authorization"] = "Basic %s" % (base64.encodestring(tmp).strip())
        if not self.fb:
            self.fb = httplib.HTTPSConnection(self.base_api)

    def connect_params(self):
        """credentials in each GET"""
        self.access_method = "params"
        self.auth_message = "user=%s&password=%s" % (self.id, self.password)

class AwAPI(FeedBurner):
    awapi_url = "/awareness/1.0"

    def __init__(self, id, password):
        FeedBurner.__init__(self, id, password)

    def GetFeedDataURI(self, uri):
        getfeed_url = "/GetFeedData"
        if self.access_method.startswith("http"):
            args = "uri=%s" % uri
        elif self.access_method == "params":
            args = "uri=%s&%s" % (uri, self.auth_message)
        else:
            raise SystemExit, "unknown access method"
        self.fb.request("GET", "%s%s?%s" % (self.awapi_url, getfeed_url, args), "", self.headers)

    def GetFeedDataID(self, id):
        pass

    def processFeedData(self):
        self.feed_data = {}
        resp = self.fb.getresponse()
        rsp = xml.dom.minidom.parseString(resp.read())
        # stat is either "ok" or "fail"
        stat = rsp.documentElement.getAttribute("stat")
        if stat == "ok":
            for feed in rsp.getElementsByTagName("feed"):
                furi = str(feed.getAttribute("uri"))
                if furi not in self.feed_data:
                    self.feed_data[furi] = {}
                entry = feed.getElementsByTagName("entry")
                if len(entry) == 1:
                    self.feed_data[furi]["date"] = str(entry[0].getAttribute("date"))
                    self.feed_data[furi]["circulation"] = str(entry[0].getAttribute("circulation"))
                    self.feed_data[furi]["hits"] = str(entry[0].getAttribute("hits"))
                else:
                    raise SystemExit, "incorrect number of entries for feed '%s'" % furi
        elif stat == "fail":
            err = rsp.getElementsByTagName("err")
            if len(err) == 1:
                code = err[0].getAttribute("code")
                msg = err[0].getAttribute("msg")
                raise SystemExit, "received code %s - %s" % (code, msg)
            else:
                raise SystemExit, "incorrect number of error codes"
        else:
            raise SystemExit, "'stat' is an unknown value"

    def displayFeedData(self):
        for uri in self.feed_data.keys():
            print "Summary data for feed: %s" % uri
            print "   Date:        %s" % self.feed_data[uri]["date"]
            print "   Circulation: %s" % self.feed_data[uri]["circulation"]
            print "   Hits: %s" % self.feed_data[uri]["hits"]

class FlareAPI(FeedBurner):
    pass

class MgmtAPI(FeedBurner):
    pass

class MobileAPI(FeedBurner):
    pass
if __name__ == "__main__":
    uri = "your feed uri"
    id = "your feedburner login"
    password = "your feedburner password"
    f = AwAPI(id, password)
    f.connect_https()
    f.GetFeedDataURI(uri)
    f.processFeedData()
    f.displayFeedData()
Posted on February 23, 2006 in Quality by adam

Often bugs can be masked if you do not start with a perfectly pristine environment to work in. Getting a clean environment, however, can sometimes be quite onerous; for example, uninstalling and reinstalling a database. While you must do that a couple times during the testing (and development) cycle, that isn’t always the most efficient use of time. One trick for cleaning a database is to install a stored procedure on the database that drops the tables/triggers/indexes/procedures your software uses.

Here is a procedure I use to drop the tables one of my products creates. It is in Transact-SQL so would need some tweaking to convert to PL/SQL for Oracle, but I don’t think there is anything SQL Server-specific about the approach.

/* set our database -- change adam to whatever your db is called */
use adam
go

/* delete our procedure if it exists already */
if exists(select name from dbo.sysobjects where name = 'drop_adam_tables' and type = 'P')
    drop procedure dbo.drop_adam_tables
go

/* recreate it */
create procedure dbo.drop_adam_tables
    @prefix varchar(5)
as
begin
    declare @tbl varchar(40)
    declare @combined varchar(40)

    /* suppress the output of the inserts and drops */
    set nocount on

    /* build a temp table which has our table names */
    create table #adam_tables (
        t_name varchar(40)
    )
    insert #adam_tables (t_name) values('changed_table_name_1')
    insert #adam_tables (t_name) values('changed_table_name_n')

    /* systematically remove all our tables */
    declare tbl_cursor cursor for
        select * from #adam_tables
    open tbl_cursor

    /* fetch first row */
    fetch next from tbl_cursor into @tbl

    /* loop and remove */
    while @@fetch_status = 0
    begin
        set @combined = @prefix + '_' + @tbl
        if exists(select name from dbo.sysobjects where name = @combined and type = 'U')
            execute('drop table ' + @combined)
        fetch next from tbl_cursor into @tbl
    end

    /* cleanup */
    close tbl_cursor
    deallocate tbl_cursor
end
go

As you can see, it only does tables (because that’s what I needed…), but could be expanded to other database objects by adding another column to the temporary table then building the execute command appropriately. The

set @combined = @prefix + '_' + @tbl

is because our product prepends a user specified prefix to each table it creates. To run the procedure just requires this command

execute drop_adam_tables your_prefix

via Eclipse or Python or anything else, and your database is back to a clean state.
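A minimal sketch of driving this from a Python test harness follows. The connection object can come from any DB-API driver (pymssql, pyodbc, whatever fits your environment), and the prefix value is hypothetical:

```python
# Sketch of resetting the database from a test harness. `conn` is any
# DB-API connection; the procedure name matches the one created above.
def clean_database(conn, prefix):
    """Run the cleanup procedure so tests start from a pristine state."""
    cur = conn.cursor()
    # the procedure takes the table prefix as its single argument
    cur.execute("execute drop_adam_tables %s", (prefix,))
    conn.commit()
```

Call clean_database() in your test setup and every run starts from the same known-empty schema.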

Posted on February 22, 2006 in Quality by adam

This one is from the ‘Practice what you Preach’ category.

One of the things I have done at my current employer is write a full test application and associated test scripts in Python. When reporting the results to the appropriate persons recently, I had a couple that were not passing but that I could explain away. To make a moderate length story short, the development manager sent me a note to confirm that those failures were due to ‘flakiness in the test framework’, which caused one of those lightbulb moments. How can we (I), as QA professional(s), push our developers to create unit tests to robustify their codebase if we (I) do not do the same? So, as of this week, I’m unit testing the testing framework. (‘Who watches the watchers’ / ‘Who tests the tests’.)

This led me to the problem of how do I structure, and more importantly, run all my unit tests; below is what I came up with.

Structure – I talked briefly about the structure of my scripting framework in my post about Sync ‘n Run, so you can have a looksee at it to familiarise yourself. Essentially, the guts of what the tests actually do are in modules stored in $FRAMEWORK/lib ($FRAMEWORK in this case refers to wherever it is on disk as it is not tied to a particular directory). I decided to fully segregate the testing code from production code, so any unit tests are stored in their own module prefixed with test_. This means the unit tests for the ldap_manipulation module are stored in test_ldap_manipulation.

Unit Test Module – Once I had decided where things were going to be, I needed to figure out what the unit tests were going to look like so I could run every one with a single command. This is likely best illustrated with a sample (which was the basis for an actual bunch of unit tests, but I’ve obscured it to remove all product-specific stuff).

import unittest
import dummy
import xml.dom.minidom

# this is mandatory for all unit test files; every test suite (for which there is one per class) is appended here
suites = []

class custom_props(unittest.TestCase):
    def setUp(self):
        self.defaults = {"foo": "bar=true"}

    def test_missing_custom_props(self):
        """custom_props is not a mandatory item, test_dict should have default values"""
        test_xml = "foo"
        test_dict = {}
        dummy.handle_custom_properties(test_xml, test_dict, self.defaults)
        for default in self.defaults.keys():
            self.assert_(default in test_dict.keys(), "Missing default tag %s" % default)
            self.assertEqual(self.defaults[default], test_dict[default], "default tag %s has wrong default value" % default)

    def test_has_custom_props_with(self):
        pass

    def test_has_custom_props_without(self):
        pass

# create a test suite based upon all methods of custom_props that start with "test"
suite_custom_props = unittest.makeSuite(custom_props, "test")

# add it to the master list of suites
suites.append(suite_custom_props)

# here we create a module wide suite and call it 'all_suites'; the name is important and must be the same for each module
all_suites = unittest.TestSuite()
for suite in suites:
    all_suites.addTest(suite)

Execution – Now you will see why ‘all_suites’ is important. This is my controller script, which collects all the suites that are stored in the various modules’ all_suites, makes one massive suite, then runs it. Note that this script resides in $FRAMEWORK/bin, thus the bit of path trickiness for os.walk().

import os, os.path, sys
import unittest

# master suite
suite = unittest.TestSuite()

# dirs that contain (potentially) interesting stuff
interestings = ["lib", "testscripts"]
for interesting in interestings:
    for root, dirs, files in os.walk(os.path.join(os.path.split(os.getcwd())[0], interesting)):
        for f in files:
            if f.startswith("test_") and f.endswith(".py"):
                test_module = os.path.split(os.path.join(root, f))
                if test_module[0] not in sys.path:
                    sys.path.append(test_module[0])
                buncha = __import__(test_module[1][:-3])
                suite.addTest(buncha.all_suites)

runner = unittest.TextTestRunner()
runner.run(suite)

Happy unit testing.

Posted on February 20, 2006 in Quality by adam

I found this ppt on virtualization from the 2004 MS Tech Ed event. Seems like a good primer for those who go glassy-eyed when I start talking about it.

Posted on February 18, 2006 in Quality by adam

One of the few people who comment here is Grig Gheorghiu. Recently, he had a couple posts regarding the setuptools module for Python (here and here). While I think setuptools is interesting as an open-source project deployment method, it is definitely overkill for deployments in a commercial test environment.

I can deploy our full automated testing environment in under 5 minutes on any of our supported platforms with just one command — ‘p4 sync’. This also allows me to easily propagate any changes to all machines just as easily.

Of course, getting to this level of deployability certainly has a bit of front-loaded work…

Let’s say, for the sake of argument, your testing area in whatever VCS you are using (you are using one, right?) looks something like this

   <nice structure as per the guidelines here>

The first thing you need to do is add a couple directories…

   <all your original installers/tarballs go here for archive purposes>
(platform in this case is a unique identifier for an OS (solaris, linux64 etc))

To save time, omit this from all your clients when deploying. These are only needed when building out your platforms (next).

The next step is the most time consuming, especially if you need to build something like python-ldap which has a billion dependencies. What you need to do is compile everything and install it using platform_n as your install root. And by everything I mean everything your testing requires, including Python (if your tests are written in Python, that is), but not libraries that are part of the system by default. You should end up with something like this


This whole structure needs to be added to VCS. To save time when deploying, omit the platforms from your client that are not related to the deployment target.

Finally, what you need to do is create a wrapper to set the environment correctly to work with your new little sandbox. One thing that will make your life easier is making everything relative to the root of all these directories. This wrapper then calls your main test launcher. The wrapper I use (on the unixes) can be seen here.
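For the curious, the guts of such a wrapper might look roughly like this in Python; the directory names and the launcher location here are assumptions for illustration, not the actual framework layout:

```python
# Sketch of the wrapper idea: make every path relative to the framework
# root so the sandbox works wherever 'p4 sync' put it. The directory
# names and the launcher location are assumptions, not the real layout.
import os

def build_environment(framework_root, platform):
    """Point PATH and the library paths at this platform's sandbox."""
    plat_root = os.path.join(framework_root, platform)
    env = dict(os.environ)
    env["PATH"] = os.path.join(plat_root, "bin") + os.pathsep + env.get("PATH", "")
    env["LD_LIBRARY_PATH"] = os.path.join(plat_root, "lib")
    env["PYTHONPATH"] = os.path.join(plat_root, "lib", "python")
    return env

# the wrapper would then exec the real launcher with this environment,
# e.g. os.execve(os.path.join(framework_root, "bin", "launcher"), argv, env)
```

Because everything is computed from framework_root, the same wrapper works on any machine you sync the tree to.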

From there it’s a couple hours figuring out all the path and package dependencies you forgot were built into your tests. Then you get to do it for all your supported platforms. As I mentioned, it’s a fair bit of work initially, but I think it’s worth it since

  • you can turn any machine into a properly deployed testing machine in a matter of minutes
  • you have absolute control over the environment
  • you are less likely to run into version-specific bugs on different platforms as you have the same version across all platforms
  • if you rev a version of a module you are using or patch something then you do not have to rebuild and redeploy; just sync with VCS
  • anyone can deploy the environment
  • does not require root/sudo permissions
Posted on February 16, 2006 in Quality by adam

One of the truisms of modern software is that the end user is just as likely to be a non-English speaker/reader as an English one. As such, a rather large can of worms gets opened up regarding L10n (Localization). L10n is difficult to get right from a development standpoint, but dead-easy to test. All you need is a simple Python script and a couple hours to look at all of your screens. Okay, that’s a bit of a simplification, but if you do that then you are most of the way home (L10n in exceptions/logs requires code analysis, but you can modify the script discussed in the developer notes discussion to do that).

Anyhow, back to the topic at hand: testing L10n. There are a few simple items that L10n testing should look for.

  • completeness – everything that should be flagged as localizable is
  • branding – typically, strings that are trademarked or related to a specific brand are not marked as localizable
  • sentence fragments – this is a big one from a contextual standpoint. Every sentence, or better still every complete message, should be one “resource”. Building a sentence in fragments in the code then doing something with it almost guarantees that your translators are going to get the context wrong for some part of the sentence

Notice how correctness of information is not there. That is the responsibility of whomever you enlist to translate. But you can certainly help things along by following the last point above.

The way we test L10n is through LOUD, named after the fact that typing in all caps online is interpreted as yelling, which is of course loud. A LOUDed application will look horrific, but clearly points out violations of our rules. ^THIS IS WHAT A SENTENCE LOOKS LIKE LOUDED.$ The ^ marks the beginning of the block, a $ marks the end and the content itself is in uppercase.

So let’s LOUD. The class below is taken straight from the Java API documentation on resource bundles.

public class MyResources extends ListResourceBundle {
    public Object[][] getContents() {
        return contents;
    }

    static final Object[][] contents = {
        // LOCALIZE THIS
        {"OkKey", "OK"},
        {"CancelKey", "Cancel"},
    };
}

With a bit of python (which can be found here), you get

public class MyResources extends ListResourceBundle {
    public Object[][] getContents() {
        return contents;
    }

    static final Object[][] contents = {
        // LOCALIZE THIS
        {"OkKey", "^OK$"},
        {"CancelKey", "^CANCEL$"},
    };
}

All you need to do then is load this customized message bundle and verify your application. You could even go so far as to have multiple message bundle formats for different locales: German could be LOUD, Japanese could be CrAzY, etc., just by tweaking the script a bit to fit your class hierarchy.
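For reference, here is a minimal sketch of the kind of transform a LOUDing script performs. This is not the actual script linked above, and it assumes the simple {"key", "value"} resource format shown:

```python
# Sketch of a LOUDing transform (not the actual script linked above):
# uppercase each resource value and wrap it in ^ and $ markers, leaving
# the lookup keys untouched. Assumes the {"key", "value"} format shown.
import re

def loud(line):
    return re.sub(r'\{"([^"]*)",\s*"([^"]*)"\}',
                  lambda m: '{"%s", "^%s$"}' % (m.group(1), m.group(2).upper()),
                  line)
```

Run every line of the resource file through loud() and you get the LOUDed bundle shown above; lines that are not resources pass through untouched.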

Posted on February 12, 2006 in Quality by adam

Another “I was asked the other day” post. This time, the topic was around test documentation.

Which documents comprise a mature set of test documentation?

  • Project Test Plan – This document outlines the high-level strategy from a testing perspective. Each feature is discussed briefly, as are the resources required by the project (staffing, hardware, software). The primary audience of this document is not test, but other people/groups in the organization who do not need to know the nitty-gritty details but have input on the overall direction the testing will take. If there are any large test-support tasks happening during the duration of the project, they should also go here. A couple test-support tasks might be “use a new hardware management strategy” or “automate all new features”.
  • Feature Test Plan – This document outlines the strategy for testing a single feature and is the most important document for the individual tester, as it puts them in the right frame of mind to approach the problem. Included is a detailed description of the feature and description of the approach(es) to testing it. Example: The display of CGI-generated information appears in all supported browsers. Ultimately, any tester should be able to be given this document and achieve good test coverage. Nowadays, an automation section should also be included which outlines how a feature could be automated, even if it is not going to be at this time. Or, even more importantly, what obstacles exist to prevent automation, which would be the start of a conversation with development.
  • Feature Test Cases – Now these are where the nitty-gritty details are. I'm of the school of thought that they should be detailed enough to not allow for any confusion regarding the goal of the test, but not so detailed that a 5 year old could do them. If a feature has all its test cases completed, then it has 100% test completion — as we know it now. The last part is important. It could be that a new attack vector was discovered, or new feature x has a cascading effect on only feature m, which will cause you to create new test cases. This is a big topic, so I'll (try to) do a separate post on test cases in the near future.

At this point you might be thinking “well, show us an example of each”. That is no simple task really. What might work in my environment might not work in yours. Another problem this might create is that people could just modify it by changing the names of the features etc. There is nothing more dangerous than someone armed with a template who does not understand why they are putting certain information in. This is especially true in test. If your test cases are incomplete or inaccurate, you lose the auditability and accountability that is necessary in testing. You also lose the ability to respond to “did you test x?” when a bug from the field comes in. If your strategies are missing or underdeveloped, then every time a tester approaches a feature there is a large chance that they will do it differently. This seems like a good thing from a breadth-of-coverage perspective, but you also lose all your previous coverage each round.

“But all this documentation is overhead my schedule cannot afford!” Your schedule has to afford it. You will never have enough time in the schedule to do everything you want, but you must have a well developed test documentation set. Not only because I say so, but because

  • it allows you to ramp up new members to the test group quicker than without
  • it allows for business to continue as usual if someone leaves (or is hit by a bus)
  • it allows you to outsource some of your testing should you choose that model at some point in the future
  • it allows you to use non-testers in your testing which can be a useful method of checking usability
Posted on February 12, 2006 in Uncategorized by adam

I’ve created an account on FeedBurner to produce an RSS feed of the content here. Please use it instead of the built-in WordPress RSS.

Posted on February 8, 2006 in Quality by adam

There will always exist between the worlds of development and test a certain degree of friction. It just comes with the territory of being employed to point out the flaws another group made (which ultimately is what test does). This friction is good and keeps both groups honest. Development cannot get lazy and let their output quality drop, and test should be held to equally high standards. Things start to devolve however as soon as “We’re on the same team here” gets uttered. Normally it is said by someone senior (as in “responsible for project delivery dates”) on the development team within a month of the scheduled release date, but it could conceivably be said by test as well.

Let me translate that sentence for you: “Please stop finding problems with the code because it makes us feel/look bad and puts the ship date in jeopardy.” To which I say: that is just too darn bad. Perhaps if you had done proper requirements development and design, and unit tested each component as it was incrementally developed, we wouldn’t be having this discussion. It also instantly puts development into the “we don’t care about the quality of the code, just that it keeps our on-time streak alive” camp, which may be accurate, but it is not something you want to advertise. Nothing gets under the skin of a tester more than disdain for quality. These are people whose very purpose in life (or at least the work part) is to improve the quality of software. If a development manager openly shows schedule bias, it becomes my goal to not only release the product with a high level of quality, but to make sure the development group works right up to the last minute.

Of course, this is all from the bias of someone who tests for a living. I suspect however that the marketplace is starting to shift its bias to overall quality; educated by CNN of all places. With the increase of more pervasive internet connectivity and the associated malware one can get online, there are more and more stories on the news about “critical hole in [product]; apply patch from [vendor]”. This is getting the general public used to the idea of poor software quality. Eventually however, there will be a shift in consumer preference from “good enough” to “good”. It has already started, I think, at Microsoft; the biggest of big in terms of traditionally poor quality. The Blue Screen of Death has transcended the geek domain and is now common parlance. Now Vista is way late and no one (except maybe the stock analysts) is screaming about it. Why? Because MS has said that they are keeping it in-house until all the kinks are ironed out. In other words, they’re boosting the quality.

So let’s wrap up this little discussion with another analogy. Imagine your organization as a football team. Development would be the offensive line, and test is the defensive line. The overall goal of the team is to win the game (release the product, start collecting revenue, etc). The objectives of the two squads are different however. (This is the important part.) The offence’s purpose is to score as many points as possible (develop as many whiz-bang features) in the time allotted (by the ship date). You have all your high-paid talent (QB, RB, etc) here, and it hurts when they are hurt. How resilient is your development team to losing a senior member or two? Bill Gates has said that MS is nothing if it lost its top 20 smartest people. The defence’s purpose is to stop the opposing team (bugs) from getting through. These people are not so much in the limelight but are just as (if not more so; spot the bias) important. A great defensive team can stop people all day, which is what a test team should do. Keep finding bugs. If the game / release goes into overtime, that’s fine. So long as the game is won in the end (or the product eventually ships and is of acceptable quality).

Okay, it needs some tuning still, but it will have to stand as it is for now.
