Posted on January 31, 2007 in Quality by adam

Here is a pack of links I’ve had kicking about for a bit.

Posted on January 31, 2007 in Quality by adam

This is the fourth in a series of posts on generating and displaying source code metrics.

Checkstyle

Checkstyle is a handy little tool for checking whether your Java code adheres to the standards of your organization. Don’t have a custom set of standards yet? Fear not, because out of the box it checks against the Sun Code Conventions, which should tide you over until you can tune the rules to your specific needs / wants.

Don’t care about how your code looks? You should. Not only does uniform code make it easier to ramp up new developers on how to write code that integrates cleanly into the project as a whole, it also makes sure that the company policies on how to write code are clearly disseminated. In Java this would settle the age-old ‘where do I put my braces’ question, for instance. Every organization will have different policies around these sorts of things, so you cannot just fire-and-forget with Checkstyle. Instead you must look at the responses and see if they are valid in your context. For example, you may not care that a package does not have a package documentation file. Checkstyle does though, and will report it with great gusto.

Here is the ant task to incorporate Checkstyle into your build system.
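
A minimal version looks something like this (the jar location, report path and use of the bundled sun_checks.xml config are assumptions to adjust for your own layout):

    <taskdef resource="checkstyletask.properties"
             classpath="lib/checkstyle-all-4.3.jar"/>

    <target name="checkstyle">
      <checkstyle config="sun_checks.xml" failOnViolation="false">
        <fileset dir="src" includes="**/*.java"/>
        <!-- xml output is the easiest thing to massage into a report -->
        <formatter type="xml" toFile="reports/checkstyle_report.xml"/>
      </checkstyle>
    </target>

Setting failOnViolation to false keeps a style nit from breaking the whole build while you are still tuning the rules to your context.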

Posted on January 30, 2007 in Quality by adam

Seth Godin yesterday reposted his Really Bad PowerPoint rant, and it is worth a read. Even in a QA career you will find yourself doing presentations, be it for internal company audiences on a new process twist or for peers at a conference. It has me thinking about whether I should apply these suggestions to my teaching slides and then write a mini-book of sorts which encapsulates the concepts presented. That way students could arrive at class prepared and participate more. It would certainly go a long way to forcing me to start work on my book, at any rate.

Posted on January 30, 2007 in Quality, Video by adam

Jim Whitehead and Sung Kim present Sung’s thesis on Adaptive Bug Prediction By Analyzing Project History in today’s video. This is less a technical how-to video than a showcase of some research they want to expand on in conjunction with Google. In it they develop the theory that bugs happen in clusters, and that there are ways to guess with decent accuracy where they might appear just by analyzing the check-ins to your version control system. This could then trigger some sort of process of flagging a check-in for peer review or something similar.

I have been using this sort of information for testing purposes for a while, though certainly not at the level of statistical detail they apply:

  • The more changes to a file, the more strenuous a code review it deserves
  • The greater the number of high-priority bugs fixed in a file, the riskier the code contained within
  • The more files changed in a particular check-in, the more extensive the fix, and therefore the riskier the fix

HP also uses information like this as part of their release criteria out of testing and into production. Their ‘code churn’ charts show what percentage of the project has changed in a certain time period. In order to ship, that number needs to be pretty close to 0. The logic, of course, is that changed code carries a non-trivial chance of regression. (Any HP people, feel free to correct my memory.)

This video overall is less immediately useful than the previous ones, but has some pretty good long term prospects if someone commercializes its research findings. Anyone care to integrate it with ant somehow?

Direct link here.

Posted on January 26, 2007 in Quality, Video by adam

I had heard of the term ‘test oracles’ a number of times and had been known to use it, though not very often. It turns out that was a good thing. I had thought it to mean ‘an application which helps you generate test input’, such as the loyalty program number generator I wrote while at Points. In it you would ask for a certain type of program number, and it would happily calculate one. Well, it turns out I wasn’t quite right. It seems that the definition of an oracle is something which tells you whether or not a test passed or failed.

To take my LP example, it could easily be tweaked to become an oracle to verify the application’s checkdigit routines, as it is a separate implementation based upon information provided by the LP. Another example, from today’s video, is to run the same query against a database that your application does, only with a different driver; in theory, the results should be the same.

As for the video, Douglas Hoffman (someone I should have known about long before now) explains the various types of oracles and their usage for 90 minutes over at Google. The questions from the audience are a bit muffled, but otherwise a good talk.

For the record, here are the 5 types of oracle:

  1. True
  2. Heuristic
  3. Consistency
  4. Self Referential
  5. None

Direct link here.

Posted on January 25, 2007 in Quality by adam

One thing James Bach has is a 31-letter mnemonic that helps him tackle a testing problem. In a previous version of the Rapid Software Testing slides he had it on one page, but he has now broken it up over a couple. It’s actually kinda creepy when you ask him what the 3rd C means and he can tell you immediately that it is Compatibility. During this section of his course he recommends you not use his, but create your own, as it will have more meaning to you. So here is mine for the broad categories of testing (in order) that I do when approaching a new testing task.

    Security
    Languages
RequIrements
    Measurement
    Existing functionality

The rationale for that order is this. If there is a security problem, it is often not an easy fix but means redesigning an entire section of code to be secure. Same with supporting languages other than English (i18n, l10n); having been through the process of grafting that functionality onto existing systems, I can guarantee that you are going to have to retest everything afterwards, so find problems there first. Once any fundamental security or language support issues have been identified and addressed, I then dive headlong into the requirements (both explicit and implicit). After being satisfied that the requirements have been met, I move on to the types of tests that measure things, be it performance, stress or load testing. Sure, issues here could mean a design change, but hopefully anything that severe was found during the requirements verification section under the guise of an implicit requirement that the system not be ‘dog slow’. Finally, I look at existing functionality (regression testing), as there should be no more changes to the underlying design of the code.

And yes, it did take longer to decide on phrases that spelled a word than it did to decide on the order the testing happens in.

Posted on January 25, 2007 in Quality, Video by adam

One of the things I try to do when talking to people about their QA systems and processes is to get them to put more of the testing burden onto the actual developers. This is not because I am lazy, but because bugs that are found before the code even lands in QA’s lap are the cheapest to fix. A large part of the way this is done is through tools like *unit and continuous build servers. Here is a quick little video about 5 of these tools which will help a .NET development team do this. All the tools are open source, and their functionality will be included in Team System.
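
On the Java side of the fence the same idea is only a few lines of ant. A minimal sketch (it assumes a compile target plus the usual src/test/build layout, all of which you would adapt to your project):

    <target name="test" depends="compile">
      <junit printsummary="yes" haltonfailure="no">
        <classpath>
          <pathelement location="build/classes"/>
          <pathelement location="lib/junit.jar"/>
        </classpath>
        <formatter type="plain"/>
        <!-- runs every class matching *Test under test/ -->
        <batchtest todir="reports">
          <fileset dir="test" includes="**/*Test.java"/>
        </batchtest>
      </junit>
    </target>

Hang that target off your continuous build and the developers find out about breakage minutes after the check-in, not days later when QA does.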

Direct link here.

Posted on January 25, 2007 in Quality by adam

This is the third in a series of posts containing the ant targets and JavaScript to generate and report on quality metrics derived from source code.

PMD

PMD is similar to FindBugs in that it produces a list of potential problems, including:

  • Possible bugs – empty try/catch/finally/switch statements
  • Dead code – unused local variables, parameters and private methods
  • Suboptimal code – wasteful String/StringBuffer usage
  • Overcomplicated expressions – unnecessary if statements, for loops that could be while loops
  • Duplicate code – copied/pasted code means copied/pasted bugs

Unlike FindBugs, however, it works from the source code, not the byte code. The list of rules it applies can be found here.

Here is the ant task.
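
A minimal sketch of such a target (the jar version, chosen rulesets and paths are assumptions to adjust for your project):

    <taskdef name="pmd" classname="net.sourceforge.pmd.ant.PMDTask"
             classpath="lib/pmd-3.9.jar"/>

    <target name="pmd">
      <pmd shortFilenames="true">
        <!-- pick the rulesets that match your tolerance for nagging -->
        <ruleset>rulesets/basic.xml</ruleset>
        <ruleset>rulesets/unusedcode.xml</ruleset>
        <formatter type="xml" toFile="reports/pmd_report.xml"/>
        <fileset dir="src" includes="**/*.java"/>
      </pmd>
    </target>

As with Checkstyle, the xml formatter is the one you want if you plan on turning the raw results into something human-readable later.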

Posted on January 24, 2007 in Quality, Video by adam

Apparently James’ younger brother Jon beat him to the punch at doing a talk on Exploratory Testing at Google. I find it amazing that two siblings ended up as top-flight software testers. You hear about family ‘trades’ in the context of firefighting, policing, etc., but rarely for this sort of career path. I don’t have a blog link for Jon, but the company he works for is Quardev.

And of course, the standalone link for those reading via RSS whose reader does not display the video for them.

Posted on January 24, 2007 in Housekeeping by adam

For some reason, some of the content has not been displaying in some RSS readers recently. More specifically, the ant tasks in the Metrics series are not showing up. I checked the feed and they are there, and they also appear in Google Reader. So if you are interested in the tasks themselves, you will have to click through to the actual article. I’d fix it, but I don’t even know what the problem is.

Oh, and Feedburner now says there are 38 subscribers. A new record! 🙂
