Posted on July 24, 2008 in Python, Quality, subversion by adam

I’m going to be implementing the buddy system for changes to only part of our Subversion repository. Essentially, anything in trunk needs to be buddied, which meant modifying the buddy pre-commit script into something more robust than bash. This version lets developers work in their own private branches and make lots of little commits without needing a buddy session, but as soon as they try to merge into trunk they will need to be buddied.

#!/usr/bin/python
""" Make sure that the log message contains a buddy message """

import sys, os

REPOS = sys.argv[1]
TXN = sys.argv[2]
SVNLOOK = sys.argv[3]
care = False
found = False
projects = ["our_project_1", "our_project_2", "our_project_3", "our_project_4"]

# does this transaction touch the trunk of a project we care about?
log_stream = os.popen('%s dirs-changed -t "%s" "%s"' % (SVNLOOK, TXN, REPOS))
for line in log_stream.readlines():
    for project in projects:
        if "software_projects/%s/source/trunk" % project in line.lower():
            care = True
            break
    if care:
        break
log_stream.close()

if not care:
    sys.exit(0)

# trunk is affected, so the log message must name a buddy
log_stream = os.popen('%s log -t "%s" "%s"' % (SVNLOOK, TXN, REPOS))
for line in log_stream.readlines():
    if line.lower().startswith("buddy:"):
        found = True
        break
log_stream.close()

if found:
    sys.exit(0)
else:
    sys.exit("All commits need to have been buddied. Syntax:\nbuddy: buddy name")

It is certainly not an airtight policing solution and can be easily gamed, but it is not supposed to be. It is just a nice little nudge in the direction we want to be going.

Posted on April 30, 2008 in Python, Quality by adam

One of the first things I learned in testing is to use ‘real’ test data. First, a story.

My first job as a tester was verifying financial software, typically for reporting capital positions to central banks. So there I was, on-site testing one for a Caribbean trust company (think tax haven for wealthy Canadians) and using numbers I could easily calculate in my head or on a scrap piece of paper as input. Things like $10, $5, $7.50, etc. Of course, at some point the person in charge of the company saw me do this and freaked. And as it turned out, rightly so. Opening an account there takes a large monetary commitment, so their capital positions had many, many more digits than I was using. He was appeased, though still skeptical, by numbers like $100000000.00, $50000000.00 and $75000000.50.

This is ‘real’ test data because it adheres to its internal rules. All data has rules. Even ‘free form text’ has rules. The trick is of course figuring out what they are.

The good thing about rules is that once you know them, you can exploit them.

I’ve been around new testers (or people who have been temporarily conscripted to be testers) enough to have noticed that there are patterns in how they create test data. Take a ‘name’ field for instance. A new tester will often use their own name first, their spouse’s name next, then their kids, the rest of the family, characters from TV shows, ending up with movie characters, and then they get stuck. The trap they have stumbled into is that while they did create data that met the rules of the field, their thinking was constrained by the field label (‘name’).

Let’s pretend these are the rules around the ‘name’ field:

  • minimum 2 characters
  • maximum 60 characters (that is the size of the column it will be stuffed into in the database)
  • spaces and hyphens are acceptable
  • case is preserved in the database

But don’t ‘dofdsiiIOIDFk’, ‘dsklfjewojf-k’ and ‘dsfjslkfjl sjflksjdfiew’ also satisfy those rules? Of course they do; they are just hard to pronounce. Guess what? The system doesn’t care. All it cares about is that it is getting something that passes the set of rules that describe it. Once you have this epiphany you can start doing interesting test data generation.

I’m not sure whether this falls on the fuzzy line between model-based automation and dynamic data-driven testing, but the theory is that you have the script do all the thinking for you. Here is a Python script which will create unique test data forever (or at least long enough that it might as well be forever).

import random, string

valid = string.letters + " -"
MIN_LEN = 2
MAX_LEN = 60

# build one name of random length from the valid character set
name = []
how_many = random.randint(MIN_LEN, MAX_LEN)
for x in xrange(how_many):
    name.append(random.choice(valid))
print "".join(name)

Here is a sample of what it generated:

  • sWZidlWaWQ
  • EIMZpdFvYzhZINKQoByWWVxbqGXhhIU gp-FZR neMIgZfIaOsn
  • cdVaKADxlDJxABlMCF GmpqmyvQThDCUnLjfWp
  • LRDSYSroV

Great. So what?

Well, now you can use your brain for thinking up interesting scenarios to test, not test data. The data is often just a means to an end.

You could also make this a function in a module somewhere and have your automation rig call it for test data instead of using some hard-coded value. Now you’re really getting somewhere.
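
Wrapped up as a function, the generator becomes something your rig can import (a modern-Python sketch; the function name and defaults are my own):

```python
import random
import string

def random_name(min_len=2, max_len=60, valid=string.ascii_letters + " -"):
    """Generate one 'name' that satisfies the field rules above."""
    length = random.randint(min_len, max_len)
    return "".join(random.choice(valid) for _ in range(length))

# your automation rig would call this wherever it needs a name
print(random_name())
```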

As a summary…

  • All test data has rules
  • Anything that meets those rules is ‘real’
  • Rule identification can be hard
  • Once you know them, you can exploit them
Posted on January 29, 2008 in Python by adam

One of the goals of my metaframework was to let the developers write their own Selenium tests. Since we’re a java shop, this means it has to support JUnit. My boss wanted me to write the whole framework in Java, but well, I like Python better. Realizing that I’m likely not going to convince the entire development team to switch from the dark side to Python, I did it in Jython.

Below the cut is the annotated Jython code for compiling, running and getting the results of a JUnit test.

(more…)

Posted on January 28, 2008 in Python by adam

This last week I’ve been working on my metaframework for Selenium (more on that in a different post), which has been pretty fun. It’s been too long since I had to exercise my brain at work. Unfortunately, it has also been pretty frustrating at times. The source of the frustration? Why, 2 bugs in Python’s unittest module — which is included as part of the standard library and so, in theory, should be pretty well tested.

Bug 1 – You cannot load tests from a module (using the loader’s loadTestsFromModule method) whose test classes do not directly inherit from unittest.TestCase. In other words, even though unittest is all nicely divided up into classes, you cannot leverage inheritance to organize your test code. Here is an example:

test_module_A.py

import unittest
class ParentTestClass(unittest.TestCase):
    pass

test_module_B.py

import test_module_A
class ChildTestClass(test_module_A.ParentTestClass):
    def test_Something(self):
        pass

With the way things are currently, unittest.TestLoader().loadTestsFromModule(test_module_B) won’t find any tests. This is because it is checking the classes in the module to see if they are a subclass of ‘TestCase’. Due to some weird scoping rules, this doesn’t work. It should be unittest.TestCase instead.
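
For the curious, you can watch the loader’s subclass check in action; this sketch builds a throwaway module by hand (modern Python, where the cross-module case behaves; the module name is invented):

```python
import types
import unittest

# build a throwaway module holding a TestCase subclass, as test_module_B would
mod = types.ModuleType("test_module_B")

class ChildTestClass(unittest.TestCase):
    def test_something(self):
        pass

mod.ChildTestClass = ChildTestClass

# the loader keeps any class in the module that subclasses unittest.TestCase
suite = unittest.TestLoader().loadTestsFromModule(mod)
print(suite.countTestCases())  # 1
```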

Crazily enough, in Jython this has been fixed in one spot in unittest but not another. CPython is still affected in both places. Here are patches for Jython and CPython (2.5).

Bug 2 could be argued either way, but I think it is a bug. In Python there are old-style classes and new-style ones. New-style classes have been around long enough that in a temporal sense they are no longer new; they are only new in comparison to what came before. All classes in the standard library should (in my opinion) be new-style. Surprisingly, the classes inside unittest are not. This means that you cannot use built-ins like super() to reach into a class’ superclass.
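
To make the complaint concrete, here is the kind of thing new-style classes allow (a minimal sketch; in Python 3 every class is new-style, while in Python 2 Base would have to inherit from object):

```python
class Base:
    def setUp(self):
        self.resource = "opened"

class Derived(Base):
    def setUp(self):
        # super() needs a new-style hierarchy -- exactly what the
        # old unittest classes lacked
        super().setUp()
        self.extra = True

d = Derived()
d.setUp()
print(d.resource, d.extra)  # opened True
```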

To me, this is a perfect example of why open source apps get a bad rep in corporate environments. At work, when I find a bug I look around to see if there are any others in the vicinity. Similarly, when developers fix it they look to see if it crops up elsewhere too. And when I verify their fix, I try a couple of other suspect places as well. In open source, the itch gets scratched; the problem is that there might be another itch hiding around the corner.

Update: My CPython patch was rejected, by Guido. Seems I need to go back and re-examine how namespaces work. Oh well. My Jython patch makes things work the way I want them to though, which is the only thing that matters. 🙂

Posted on December 13, 2007 in Python, Quality, Selenium by adam

If I come close to being a ‘programmer’ in any language, it is Python. Actually, I teach Jython, which has all the simplicity and power of Python but lets me reuse 12 years worth of Java code when I don’t feel like re-inventing the wheel.

So, by choosing Jython I get: Python and Java in one bundle.

Part of my job is to automate testing where I think it will be of future benefit (or where it lets me be lazy at some point in the future :)). This automation often takes the form of Selenium scripts. But because I do not like working within the limitations of Selenium Core, I jump straight to Selenium Remote Control so I can make better use of oracles, logging, temp files, etc. The common paradigm for writing Selenium RC scripts is through the language’s *unit framework.

So, by choosing Jython I get: Selenium scripts using Python’s unittest module and Java’s JUnit in one bundle.

Jython needs at least version 1.4 of Java in order to function, but works with 1.5 and 1.6. At one point I had myself convinced that I needed to run the same version of Java in my metaframework as our application was running, but I have overcome this bit of nonsense. That allowed me to start running with 1.6, which means (a lot of things, but most importantly) I can make use of JSR-223: Scripting for the Java Platform.

JSR-223 provides hooks for (currently) 25 scripting engines to be accessed from within your native Java (Jython) code. This is incredibly slick. Want Groovy? Sure. How about Awk? It’s there too. In my context, however, what I want is Ruby, which I can get via JRuby.

Let’s catch up again. By choosing Jython I get: Selenium scripts using Python’s unittest module, Java’s JUnit and Ruby’s Test::Unit module in one bundle.

In order to get Test::Unit based scripts to behave, you used to have to jump through some hoops. Those hoops have in the last month been removed, so you will want to get the JSR-223 code from at least 12/12/2007. Many thanks to Yoko Harada on the JRuby mailing list for not only identifying, but providing fixes for, the hiccups I experienced trying this.

Now we most certainly have Selenium scripts using Python’s unittest module, Java’s JUnit and Ruby’s Test::Unit module in one bundle. Which means there is now very little reason for developers not to produce unit tests for their application / business logic, and Selenium ones for the web / presentation tier as well.

For those wanting the Jython code which will initialize and execute a JRuby script, here it is.

from javax.script import ScriptEngineManager

# ask the JSR-223 engine manager for the JRuby engine
m = ScriptEngineManager()
r = m.getEngineByName("jruby")

# read the Ruby Selenium script and hand it to the engine
rs = open("c:\\temp\\my_selenium_file.rb", "r")
y = r.eval(rs.read())
rs.close()

This is, of course, a very simple example, but it works. Oh, the power of the letter J: Python code executed in a Java interpreter which then creates an instance of the Ruby interpreter and runs Selenium commands written in Ruby.

Update 12/13/2007: Ruby Test::Unit scripts should just work with the current JSR-223 code.

Posted on December 12, 2007 in Python, Quality by adam

Seth Godin posted last week about how time has a long tail now. The gist of the post is that if you put a URL in a marketing campaign, then that URL is more or less locked in stone and its existence needs to last longer than the life of the campaign. Well, I suppose you could consider dishing out 404s to customers as being a good idea, but I don’t.

This post worked its way back into my brain while waiting (too long) for the subway just now. If I were in charge of the web testing and / or monitoring of the Reebok website, what could I do to prevent 404s from being part of the Reebok experience?

I think first I would get marketing and operations in the same room and attempt to get agreement that this content is to remain on the server. Once I had that, I would add a check for the presence of that URL with each build / site push — in Python, of course.

import httplib

class MarketingLongTail(object):
    def __init__(self):
        self.h = httplib.HTTPConnection("www.reebok.com")
        self.h.connect()

    def check_code(self, response):
        # read the body so the connection can be reused for the next request
        response.read()
        if response.status == httplib.OK:
            return "okay"
        elif response.status == httplib.NOT_FOUND:
            return "NOT FOUND!"
        # if you cared about redirects you would handle 301 and 307 here
        return "unexpected (%d)" % response.status

    def Terry_Tate(self):
        self.h.request("GET", "/terrytate")
        print "Terry Tate campaign is: %s" % self.check_code(self.h.getresponse())

    # implement other campaigns in much the same way as the Terry Tate one

    def __del__(self):
        self.h.close()

if __name__ == '__main__':
    mlt = MarketingLongTail()
    mlt.Terry_Tate()
    # and of course you call your other campaigns here

This script could easily be modified to work with URLs for any organization, and new URLs / campaigns can easily be added.

It could also pretty easily handle the situation where the URLs are discontinued and people are redirected to the main site, or perhaps to a page saying ‘This campaign has ended; here is a list of our current ones’, with a couple more lines of logic in the check_code method.
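
That extra logic might be sketched as a pure status-mapping function (the status constants come from Python’s standard library; the ‘campaign ended’ path is a made-up example):

```python
from http.client import OK, NOT_FOUND, MOVED_PERMANENTLY, TEMPORARY_REDIRECT

def classify(status, location=None):
    """Turn a response status into a campaign-health verdict."""
    if status == OK:
        return "okay"
    if status == NOT_FOUND:
        return "NOT FOUND!"
    if status in (MOVED_PERMANENTLY, TEMPORARY_REDIRECT):
        # redirecting retired campaigns to an 'ended' page is fine;
        # any other redirect deserves a closer look
        if location == "/campaigns/ended":  # hypothetical landing page
            return "ended gracefully"
        return "redirected to %s" % location
    return "unexpected status %d" % status

print(classify(MOVED_PERMANENTLY, "/campaigns/ended"))  # ended gracefully
```

Keeping the mapping free of any socket work also makes it trivial to test on its own.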

Posted on November 14, 2007 in Python, Quality by adam

As part of a generification of some of my core metaframework code, I found myself wanting to do a bit of introspection on the contents of the modules I was importing as tests. Somehow I ended up running across the pyclbr (Python Class Browser) module, which can return a list of the classes contained in a module. (I can then check each class to see if unittest.TestCase is its superclass and treat it accordingly.)
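
In CPython the call in question is only a couple of lines; a minimal sketch, inspecting a stdlib module instead of a test module:

```python
import pyclbr

# parse the stdlib's queue module for class definitions without
# importing or executing the module itself
info = pyclbr.readmodule("queue")

# each value is a pyclbr.Class whose .super attribute lists its base
# classes -- which is what lets you look for a unittest.TestCase parent
for name in sorted(info):
    print(name)
```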

Because nothing is as easy as it could (should?) be, the pyclbr module doesn’t work in Jython, which is what I write this sort of thing in. It turns out that it is smart (?) enough to check whether the file it is to look at is flat text (good) or some other format (not good). Because Jython runs inside the Java VM it deals with .class files, which are certainly not string parsable. Consequently, pyclbr.readmodule() would always return an empty dictionary (and no helpful debugging message, btw).

Flash forward a bit and I’ve submitted a patch to Jython which will restore functionality to this cool little function.

I’ve got a small, but questionable track record at getting patches applied (CPython: 0/1, Selenium IDE: 1/2, Jython 0?/1) so I’m putting the diff below the cut.

(more…)

Posted on October 24, 2007 in Python, Quality by adam

One of the things I try to tell my students is that if they want to give themselves a great shot at long-term career success in testing, they should pick up a scripting language. (I then suggest they nag the school to offer my Python course again ;)). This is usually met with cries of disbelief and the ever-present ‘Why?’. Well, here is a quick list I came up with to answer that exact point.

You can use a scripting language to

  • Create oracles
  • Create test data
  • Monitor log files
  • Replace (overpriced) tools (like QTP)
  • Check databases
  • Seed databases
  • Quickly check algorithms
  • ‘grep’ / static analysis

I’m sure there are a tonne more, but as mentioned this was just a quick list.
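
To give a flavour of how little code some of those take, here is a sketch of the ‘monitor log files’ item (the log content is invented):

```python
# invented sample log -- in practice you would read this from a file,
# e.g. open("app.log").read()
log = """INFO  startup complete
ERROR NullPointerException in OrderService
INFO  heartbeat
ERROR timeout talking to billing"""

# pull out just the error lines
errors = [line for line in log.splitlines() if line.startswith("ERROR")]
print(len(errors))  # 2
```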

Posted on September 12, 2007 in Python, Quality by adam

With web 2.0 components starting to make their way into enterprise applications, the need to test tagging engines is becoming more commonplace. Part of this testing is likely going to involve some sort of limits testing. So what if your limit is 50? 100? 1000? Are you going to type each of these in by hand? Not likely.

Toolsmith skills to the rescue. Here is a script which will generate a configurable number of random-length gibberish tags.

import string, random

# how many words (min)
MIN_WORDS = 1
# how many words (max)
MAX_WORDS = 1000
# how many letters (min)
MIN_LETTERS = 1
# how many letters (max)
MAX_LETTERS = 15

wordlist = []
num_words = random.randint(MIN_WORDS, MAX_WORDS)
for x in xrange(num_words):
    length = random.randint(MIN_LETTERS, MAX_LETTERS)
    word = []
    for l in xrange(0, length):
        word.append(random.choice(string.ascii_letters))
    wordlist.append("".join(word))

print ",".join(wordlist)

Now clearly, it could be improved. Two things off the top of my head would be

  • choose random real words from a scrabble dictionary (there are a couple available online if you look)
  • if your tags are associated with things like images or blog posts, generate the list you randomly select from out of the existing tags, for full contextual relevance
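
The first of those might look like this (a sketch; the embedded word list is a stand-in for a real scrabble dictionary file):

```python
import random

# stand-in for a dictionary loaded from disk, e.g.:
#   words = open("dictionary.txt").read().split()
words = ["quixotic", "zephyr", "jukebox", "glyph", "sphinx", "waltz"]

def real_word_tags(count):
    """Pick `count` random real words to use as tags."""
    return ",".join(random.choice(words) for _ in range(count))

print(real_word_tags(4))
```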

Like most things produced to aid testing, it doesn’t have to look nice, just be effective. And it was.

Posted on September 3, 2007 in Python, Quality, Video by adam

Cedric Beust is the creator of TestNG, which he wrote in response to his experiences using JUnit. He is also now a Googler, which shows that one way to get to work for them is to invent something cool, useful and scalable.

When I first read the TestNG site, I didn’t see what was so cool that you would want to migrate away from the tried-and-true JUnit, but after this I think it is certainly worth exploring if you are running Java 5 or greater. If you are still using Java 4, it would be worth just sticking with JUnit, as he admits TestNG is kinda sketchy in that environment.

There was a lot of stuff that would be of interest to developers (or testers who have a more Java-centric slant than I have), but as an influencer, I found the following interesting.

  • JUnit’s design forces test independence (by re-instantiating the test class each run)
  • This forces certain design considerations
  • Are you testing, or are you JUnit-testing your code?
  • Annotations are cool – the closest thing we have in Python 2.x is decorators, but annotations are coming in Python 3.
  • Only available in Java — so in theory one could use this with Jython. Very interesting
  • There is a built-in JUnit mode for ease of conversion
  • Cedric has a book coming out – Next Generation Java Testing: TestNG and Advanced Concepts
  • There is work on a distributed TestNG where slave machines take independent tests and collate the results – very metaframework-ish



Direct link here.
