Posted on January 29, 2008 in Python by adam

One of the goals of my metaframework was to let the developers write their own Selenium tests. Since we’re a Java shop, this means it has to support JUnit. My boss wanted me to write the whole framework in Java, but, well, I like Python better. Realizing that I’m not likely to convince the entire development team to switch from the dark side to Python, I did it in Jython.

Below the cut is the annotated Jython code for compiling, running and getting the results of a JUnit test.


Posted on January 28, 2008 in Python by adam

This last week I’ve been working on my metaframework for Selenium (more on that in a different post), which has been pretty fun. It’s been too long since I had to exercise my brain at work. Unfortunately, it has also been pretty frustrating at times. The source of the frustration? Why, two bugs in Python’s unittest module, which is included as part of the standard library and so in theory should be pretty well tested.

Bug 1 – You cannot load tests (using unittest.loadTestsFromModule) from a module whose test classes do not directly inherit from unittest.TestCase. In other words, even though unittest is all nicely divided up into classes, you cannot leverage inheritance to organize your test code. Here is an example:

test_module_A.py

import unittest
class ParentTestClass(unittest.TestCase):
    pass

test_module_B.py

import test_module_A
class ChildTestClass(test_module_A.ParentTestClass):
    def test_Something(self):
        pass

With the way things currently are, unittest.loadTestsFromModule(test_module_B) won’t find any tests. This is because it checks the classes in the module to see if they are a subclass of the bare name ‘TestCase’. Due to some weird scoping rules, this doesn’t work. It should be unittest.TestCase instead.
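To make the failure mode concrete, here is a minimal sketch of what a correctly behaving loader does with the two example modules. The in-memory ModuleType objects are just a stand-in for the two files, and I am using the modern spelling unittest.defaultTestLoader.loadTestsFromModule:

```python
import types
import unittest

# Stand-ins for the two files from the example, built in memory.
mod_a = types.ModuleType("test_module_A")

class ParentTestClass(unittest.TestCase):
    pass

mod_a.ParentTestClass = ParentTestClass

mod_b = types.ModuleType("test_module_B")

class ChildTestClass(ParentTestClass):
    def test_Something(self):
        pass

mod_b.ChildTestClass = ChildTestClass

# A loader that checks issubclass(obj, unittest.TestCase) picks up the
# inherited test; the buggy check against the bare name found nothing.
suite = unittest.defaultTestLoader.loadTestsFromModule(mod_b)
print(suite.countTestCases())  # prints 1 on a fixed unittest
```

On an affected unittest, that count comes back as 0 even though ChildTestClass plainly has a test method.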

Crazily enough, in Jython this has been fixed in one spot in unittest, but not in another. CPython is still affected in both places. Here are patches for Jython and CPython (2.5).

Bug 2 could be argued either way, but I think it is a bug. In Python there are old-style classes and new-style ones. New-style classes have been around long enough that in a temporal sense they are no longer new; only in comparison to what came before. All classes in the standard library should (in my opinion) be new-style. Surprisingly, the classes inside unittest are not. This means that you cannot use built-ins like super() to reach into a class’s superclass.
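As a sketch of the impact: with old-style classes, super() simply is not available, so a test subclass has to chain to its parent’s setUp by naming the class explicitly. The classes below are invented for illustration; the explicit call works under either class model:

```python
import unittest

class ParentTest(unittest.TestCase):
    def setUp(self):
        self.items = ["parent"]

class ChildTest(ParentTest):
    def setUp(self):
        # super(ChildTest, self).setUp() raises a TypeError when the
        # ancestor classes are old-style; naming the parent explicitly
        # is the workaround, and behaves the same for new-style classes.
        ParentTest.setUp(self)
        self.items.append("child")

    def test_setup_chain(self):
        self.assertEqual(self.items, ["parent", "child"])

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(ChildTest).run(result)
print(result.wasSuccessful())  # prints True
```

The explicit ParentTest.setUp(self) call is exactly the sort of workaround super() was designed to make unnecessary.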

To me, this is a perfect example of why open source apps get a bad rep in corporate environments. At work, when I find a bug I look around to see if there are any others in the vicinity. Similarly, when developers fix it they look to see if it crops up elsewhere too. And when I verify their fix, I try a couple of other suspect places as well. In open source, the itch gets scratched; the problem is that there might be another itch hiding around the corner.

Update: My CPython patch was rejected, by Guido. It seems I need to go back and re-examine how namespaces work. Oh well. My Jython patch makes things work the way I want them to, which is the only thing that matters. 🙂

Posted on January 23, 2008 in Quality by adam

Test cases can be categorized a number of ways, the most common being whether they are positive (check something that should succeed) or negative (a failure in the system is a success in the test). That’s all well and good, but it limits your thinking somewhat. I tend to think of things in terms of paths. The notion of a path is nothing new, but I have extended it a bit.

  • Happy Path – These are your positive tests. They provide good data, in the correct order, on the correct screens and will likely allow you to sign off on a good proportion of your requirements.
  • Sad Path – As the name suggests, these are the opposite of happy tests. Your sad tests should cover all the validation and error handling (in all layers of your application). You are going to have to look at the code to know whether you have covered all these conditions, though a coverage tool might help. Remember, though, that a coverage tool will report 100% when every line has been touched, not when every line has been exercised in every way.
  • Evil Path – This is somewhat of an extension of sad tests, but these derive from people doing bad things to your application. XSS, SQL injection, removing JavaScript constraints, etc. fall into this category. Try to think like both the average DefCon presenter and a script kiddie. While the attacks from the former might be more sophisticated, the latter are far more prevalent.
  • Social Path – Things along this path are more specific and start to move away from looking at the software directly and into the larger system it is a part of. In these tests the attack vector (for lack of a better term) is the system operator rather than the system itself. What sort of things could you get the operator to give you that they are not supposed to, and is there a solution (hopefully a technical one, but a procedural one might have to do) available to address it?
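To make the first three paths concrete, here is a minimal sketch against a hypothetical login function; the function, its credentials, and the naive injection check are all invented for illustration:

```python
import unittest

def login(username, password):
    """Toy login check; the hard-coded rules exist only for this demo."""
    if "'" in username or "--" in username:
        raise ValueError("suspicious input rejected")   # evil path target
    if not username or not password:
        raise ValueError("missing credentials")         # sad path target
    return username == "adam" and password == "secret"  # happy path target

class LoginPathTests(unittest.TestCase):
    def test_happy_path(self):
        # Good data, correct order: should simply succeed.
        self.assertTrue(login("adam", "secret"))

    def test_sad_path_missing_password(self):
        # Exercise the validation / error-handling branch.
        with self.assertRaises(ValueError):
            login("adam", "")

    def test_evil_path_sql_injection(self):
        # A script-kiddie style injection attempt must be rejected.
        with self.assertRaises(ValueError):
            login("' OR 1=1 --", "x")
```

The social path, by contrast, does not lend itself to automation: it targets the operator, not the code.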
Posted on January 16, 2008 in Quality by adam

The time-tested question regarding the right ratio of people X to people Y came up yesterday on the Agile Testing mailing list. Here is a variation of what I responded with.

I think the notion of ratios between QA and developers, or any other two categories of employees, can largely be debunked as myth, since there are far too many factors unique to each organization’s structure. Some things I have seen over the last 10 years which have affected the ratios (in this case product managers to developers) are:

  • size of dev pool
  • size of pmo
  • number of products to be managed
  • seniority of developers
  • seniority of pm
  • maturity of products
  • mix of products
  • maturity of market
  • method of organizational growth (organic or acquisition)
  • length of product cycle
  • level of regulation around product
  • whether the pm folks also wear other hats
  • how much babysitting the client requires

I think the more important questions about these sorts of ratios are

  1. what is your current ratio?
  2. is it working?

The problem, or perhaps the attribute, of both those questions is that only the person asking them can correctly answer them. Not only are they the one in the correct context at the correct time (there and now), but ‘working’ is a term loaded with interpretation.

So what is the proper ratio of people X to people Y? It depends.

Posted on January 13, 2008 in Uncategorized by adam

As a coach for house league lacrosse, my association provides me with a skills progression chart (2007 version: http://adam.goucher.ca/coaching/sdp/index.html) which indicates what kids are expected to know how to do, and when.

As both someone who has led test teams and taught testing, I think progression charts for tester knowledge are fantastic. Some companies have these already and use them for job description and promotion purposes, but I think they lose some of their value when used in that way, as they become not tools of growth but checkmarks to achieve.

Here is an example progression chart as related to testing.

Test Cases

                      Junior   Intermediate   Senior   Lead
Test Case Execution     I           R           R       R
Test Case Creation      I           R           R       R
Test Case Analysis      I           R           R       R
Scenario Execution                  I           R       R
Scenario Creation                   I           R       R
Scenario Analysis                   I           R       R
Session Debrief                                 I       R
Session Execution                               I       R
Session Creation                                I       R

Legend:
I – Introduced
R – Reviewed

Some notes on implementation

  • This is just a guideline. If you have someone that is capable of moving up the chart, then by all means move them up.
  • The chart works best if you tailor it to the needs of your team; perhaps you don’t do SBT, at which point those rows become nothing more than a distraction.
  • As the world you operate in changes, update the charts to reflect those changes.
Posted on January 10, 2008 in Quality by adam

I read an article today called Building Credibility: 11 Ways to Show You’re a Professional. The target audience is freelancers, but credibility is a key issue for testers (and test teams) even if you are full-time and not thinking of going independent. (Hat tip to James for initially pointing this out to me.)

So let’s run through them and apply them to testing.

  1. Have an established pricing structure – I might change ‘pricing structure’ to ‘time estimates’. When I was at Points.com I was trying to get my time estimates into reproducible buckets so that when I got dragged into a scheduling meeting I would have a consistent message.
  2. Create a clean and professional brand – I always try to make the Testing team I am in the most professional one possible. You don’t want any other group in the company second-guessing your capability, and often those judgments are made from what can be seen on the surface.
  3. Pay for a professional telephone service – Not much here, except maybe having a central email mailbox through which the rest of the organization can contact the test team in a general sense, instead of trying to guess the right person to send an inquiry to.
  4. Show Professional Endorsements – In a team context this doesn’t really work, but if any of the team is a member of AST, ASQ or similar, make it known. Especially if they are active in that community.
  5. Proudly display your previous work – Highlight the team’s successes
  6. Proudly display client testimonials and comments – If a customer / client says something great about the Quality of the product, put that up with the previous work
  7. Dress appropriately for client meetings – This is common sense, but don’t forget that you can have internal customers as well. I read somewhere that you should not aim directly at the level of dress common to the client; instead aim one notch higher. Seems like good advice.
  8. Always be well-groomed – Again, common sense.
  9. Have lots of detailed information on your website – Wiki; Learn it, Use it, Love it. This is a fantastic place to be putting previous work and client testimonials.
  10. Maintain a confident voice in your industry – Confidence is always important to testers, or at least the ability to act confident. I have declined to hire people based solely on whether or not they would be confident enough to go toe-to-toe with any of the development team. You need to be able to advocate for your bugs.
  11. Always be willing to say no – Given our traditional place in the project schedule, all too often we are asked to do the impossible in terms of coverage or speed of delivery. If you say yes, then you are setting yourself up for failure and the subsequent loss of credibility in the eyes of the organization. If you say no (and can, of course, justify it), then you are at least bracing them for the reality that the impossible remains impossible when they tell you to do it anyway.

This is a pretty obvious list, and I have independently implemented the items in one form or another with success, but sometimes the obvious is only obvious once it is shown to you.

Posted on January 9, 2008 in Quality by adam

Back when our parents were in school, the curriculum was described in general terms as the three R’s: Reading, wRiting, and aRithmetic. It popped into my head today on the way to the train that we in testing also have a group of letters to guide our thinking, only our letter is C.

  • Context – Context is king in testing. What might be a bug in one context might not be in another. If you do not know what context you are operating in, do your best to figure it out. You might successfully complete your testing mission without knowing your context, but the odds of it are much smaller than if you did. Some people, intentionally or otherwise, may imply that context only comes into play when doing Exploratory Testing, but it affects every type of testing, including mind-numbing, brain-off, rote script execution.
  • Consistency – Consistency is another big item a tester needs to be aware of. Pradeep lists a number of different types of consistency to watch for:
    • With the image the company or stakeholders have been projecting about the product or the company
    • With similar product(s)
    • Within the product
    • With statutes / laws / standards
    • With user expectations
    • With history
  • Correctness – Of the three items, Correctness is likely the easiest to handle. This is where you verify that the application does what it is supposed to in a proper manner. In other words, classical software testing. But of course you need to know your Context first.
Posted on January 8, 2008 in Quality by adam

I have been in my current job for a year now, which means that it is time for the Annual Review. The Annual Review is one of those management carry-overs from days past that I wish would just go away. I’m still trying to wriggle out of mine.

In theory, the Big Annual Review serves a couple purposes

  • Feedback
  • Goals
  • Promotions

Let’s look at these in reverse order.

Promotions – Whether or not someone is getting a promotion, and the associated monetary amount, is largely determined before the actual review takes place. It is also the thing people most care about. Money might not be the only thing motivating someone, but it likely plays more than a bit part. Promotions are (typically) awarded based upon merit, so it would make sense to award them as they are earned rather than wait an artificial time period until the review to grant them. In sports, if someone has earned a place on a higher line, you do not wait until the end-of-season player review; you do it as soon as possible, likely during the break between periods/quarters.

Goals – So far, I have participated in six or seven goal-planning sessions. As a tester I inevitably end up with some variant of “Assist team in developing high Quality software on budget.” This goal certainly does not pass the SMART test. When I have challenged it, I often end up with goals written to a very high degree of precision instead, “Automate all existing test cases in feature X” for instance. The problem is that while such a goal is clear and progress can be measured against it, goals are heuristic. When deciding how to allocate my time, I could choose to automate feature X, or I could automate feature Y, which might have a better reason for automation than the fact that it is written down somewhere that I should do it. Individually, athletes might set a goal for themselves of “score a goal” or “do less harm than good”, but as a coach, goals are often specified for you, especially in house leagues. The lacrosse association I coach with has a progression chart for all players regarding which skills they should know and when; how we achieve those results is entirely up to us, though. As a team we also set goals for the season. I think the key difference between the two is that goals set in the review context are often checkboxes which then get fed into the following year’s review, while in sports they serve as motivators and form part of the journey. Guess which one results in the most pride for the person.

Feedback – How many people have received feedback, either positive or negative, during a review on something that happened 3 months previous? 6 months? 9 months? I would wager that most people have. I know I have. One of the critical things TDD (and CI) have re-introduced to the world is the notion of rapid feedback. If it takes longer than 10 minutes to figure out whether the last check-in broke the build, that is too long. If it takes 10 months to tell me that you thought my usual quality dropped in April, that too is too long. Feedback to people needs to be just as fast as feedback for builds. See something great? Praise it. See something less than stellar? Grab the tiller and correct it before it gets completely out of control. Much like the example in Promotions, feedback is given instantly in sports in a couple of ways.

  • Break a rule, go to the penalty box / sin bin / whatever
  • Break it too many times and the person is removed from the game
  • Break one of the coaches rules and you might miss a shift or two

All of these are feedback methods, and all are doled out immediately. Most management books (like Behind Closed Doors) recommend having weekly meetings with your directly reporting staff. If done properly, big annual reviews likely become unnecessary, as the feedback cycle has itself become Agile. Again, guess which is more effective at entrenching desired behavior or removing unwanted behavior: feedback when it occurs, or at some point later down the road.

(This is part of a series of posts on how Coaching and Testing mesh)

Posted on January 3, 2008 in Quality by adam

Hugh MacLeod has recently posted a great set of blog entries on Social Objects. Hugh defines a social object as “the reason two people are talking to each other, as opposed to talking to somebody else.”

While he typically talks about things in the context of product or corporate marketing, I think the notion of social objects is important to the individual as well. Especially to those in testing.

Whether you realize it or not, you are a marketer. The thing you are marketing to the world is yourself.

  • If you are talking to a prospective employer, then your resume is the social object
  • By reading this, you make the blog entry a social object
  • When we interact at an event, the event is the social object

The common thread between these three items is that they all serve as a “hook” to move the conversation along. But since a lot of marketing is … random, it is up to you to recognize that these are all social objects and that while you cannot completely control them, you can guide them along the route you desire. If I want someone to see my site, I will add the URL to the end of an innocuous email. If I want to assert myself as one of the top testers in Toronto, I’ll speak at events and get my personal marketing message out to a targeted group of people, or sometimes aim it at a specific person in the audience.

You see these sorts of activities all the time in the publishing industry. When Scott Berkun’s book came out he did a book tour, taped a podcast, and was recorded giving a talk at Google, in addition to hundreds of copies of the book being sent out to reviewers and his normal blogging activities. In all of these, the social object is The Myths of Innovation.

So how does this relate to testing?

The second course to be offered by AST is on bug advocacy. Advocacy is, in my mind, a synonym for marketing. I realized a while ago that when I sit in a triage meeting and want certain things addressed, I am marketing on behalf of the issue to get it resolved. In that meeting, the social object is the bug system and the information contained within. As mentioned, bug advocacy is the second course of an expected 20 to 30, which highlights its importance in the eyes of those planning the curriculum.

But what if you have not progressed in your career yet to the point where you get a seat at the triage table? The bugs you uncover are still social objects with a raft of variables that interact. A bug with a priority of “Critical” is likely to be more potent as a social object than one with a “Low” weighting.

Most of the people I know in decision-making QA/Test roles have at one point or another laid a trap in the bug system specifically to get someone’s attention and have a discussion. Guess what? The trap is a social object. (I would use this technique very sparingly.)

Bug systems, and the bugs logged in them, are not the only social objects we testers are exposed to.

Audience            Social Object
Business Analysts   Requirements documents
Marketing           Performance testing results
Architects          High-level design documents
Developers          Static analysis results

As mentioned above, the key thing here is recognizing all these things for what they are (social objects) and exploiting them to full advantage.

Posted on January 1, 2008 in Quality by adam

SDTimes, December 15, 2007
The only thing of note is Building Security into Source Code which talks to a number of security static analysis vendors who (no surprise) say that you should run their tools during the build.

SDTimes, January 1, 2008

  • Agile Principles Are Changing Everything is a great article which looks at the state of Agile in the enterprise. It also has a term I have never seen before. Wagile – using agile techniques in a waterfall manner. Fantastic! This is absolutely being added to my lexicon.
  • What Do Your Metrics Say To You? starts with a great line: “Companies are spending a great deal of energy and money on gathering useless metrics and data around application development projects”. The article is about a study Borland commissioned and amazingly doesn’t promote one of their products to solve the issues they found. Overall it is a pretty critical look at the value of most of the metrics people use.
  • Ironically, a couple of pages later was a product release announcement for JetBrains TeamCity, which “helps managers get a visual idea of their team’s build metrics”. For those of you who worship at the altar of the green bar, this might be of interest.
  • There is a nice comparison of some of the Pros / Cons involved in the WS-* vs. REST debate in How Much REST Do We Need?
  • David S. Linthicum looks at Information Integration Patterns in the context of SOA in his article this issue. Why is this important? “… those who don’t consider the information and just layer services on top of dysfunctional and poorly structured data will end up with a very inefficient architecture” which is something you want to try to avoid when wearing a QA hat (vs. the tester-only hat).

Software Test & Performance, January 2008

  • Keeping Track of Your Offshore Playbook talks about Quality Audits in the context of monitoring offshore teams, presented through a case study. I think Quality Audits are an oft-overlooked tool even with local teams, and a practice to which more experienced QA folks should pay more attention. An audit is all about gathering information, which is what we do as testers anyway. The final paragraph sums up my thoughts on audits: “… that the entire development organization now realizes the value this kind of audit can bring to the goal of continuous improvement.”
  • Geoff Koch’s Best Practices column is on how “unit testing is destined to remain by and for a modest group of undoubtedly smart specialists.” and paints a pretty damning picture. Unfortunately, it is likely an accurate one.