Posted on April 28, 2006 in Quality by adam | No Comments »

In college they taught us a concept in systems analysis class about how every phase of a product has a customer. I absorbed that like a dutiful student and tucked it in the back recesses of my brain. Fast forward a number of years to the present: I was listening to a recording of a presentation on Testing where the notion of a Big C customer and a Little C customer was raised. When I heard this I had one of those "Wow! Something I learned in school is actually relevant" moments.


Posted on April 27, 2006 in Quality by adam | No Comments »

One thing that struck me talking to people after this month's DemoCamp is the vast chasm between the testing effort (or at least testing thought) put into an enterprise application and that put into what I will label a "Web 2.0" application.

Posted on April 26, 2006 in Books by adam | No Comments »

Managing the Software Process by Watts S. Humphrey
Chapter 10 – Software Inspections


Posted on April 26, 2006 in Housekeeping, Quality by adam | 2 Comments »

So an apology first off for the semi-butchery of my presentation last night. It was a last-minute replacement for another presentation, and I think it showed. I'll do better next time (ooo, ominous foreshadowing). Okay, that aside, here again is why I did the presentation.

  • To showcase that there is innovative work being done at the opposite end of the software design spectrum from what is typically demoed. We will never have AJAX, and are only just now growing a web service interface, but it is just as cool.
  • To start to get the Brand of Adam a little better known. An entry in the wiki page is one thing; having your attention for 15 minutes while I'm at the front of the room is another entirely. As I mentioned during the presentation, it's blatant self-promotion.
  • To raise the profile of Testing and Quality in the start-up community. In talking to some people at the schmooze portion after the last couple DemoCamps, it seems that someone who is strictly interested in Quality and Testing is almost a novelty. If that is the case, leverage me for your testing / quality problems.
  • There have been (I think) 35 presentations so far in the Toronto DemoCamp. Let's say (a pessimistic — realistic?) 3 or 4 of them survive the next 7 years as Select Access has. Odds are that by then you will have been assimilated into a larger company; an HP, for example. That doesn't mean you have to stop innovating or producing exciting code.

(Oh, and for those who have no idea what I am talking about, I showcased a tiny portion of Select Access at DemoCamp5 last night.)

Posted on April 24, 2006 in Housekeeping by adam | 1 Comment »

I have in the last month or so started tracking the usage stats of the various pages here via the folks at Performancing. This works great for the individual post pages, but not so great for the ones that appear as full entries on the main page. So the question is: if I made every post one you had to click through from the main page to read in full, how much would that annoy the readership?

Oh, and here is a timely comic from Gaping Void about site analytics.

So depressingly true. The first two sites I rebookmarked after the laptop died were the Performancing stats and the FeedBurner stats.

Posted on April 24, 2006 in Quality by adam | No Comments »

I was recently invited to the second Toronto Workshop on Software Testing, which is focused this time around on 'Test Models'. Although I had to decline the invite due to scheduling conflicts, I spent most of the weekend thinking about the test models I employ. I've come to the conclusion that I use two: The Onion Model and The Beauty is Skin Deep Model.

As usual, let's get a definition out of the way. By 'model' I mean a series of practices or an approach to testing. Hopefully I am not munging the definition too badly, but like it or lump it, this is the definition this post revolves around.

I think it is best if I give some context for the models I am about to describe by outlining an application and explaining how each model would affect the testing approach. The application is a JDBC proxy which does some pretty fancy SQL manipulation and other magic (this is a real product which has yet to be officially announced, so I have to be vague) but is designed to operate transparently to the end user. It is distributed as a separate component of a larger application suite but is meant to integrate seamlessly with the suite. All configuration is done through a screen that resides in a Java Applet.

The Onion Model
This model is the one I utilize most frequently as I think it provides the highest level of overall Quality and applies not just to the Testing phase of development but overlays both coding and testing — remember, Quality is more than just Testing. It gets its name from how an onion looks if you cut it in half: lots of concentric rings, each supported by an earlier ring. If there is another, more widely used label for this model, ping me and let me know.

In deciding what to test, you make a list of the shared components and work your way out. The theory is that if module A is used by modules B and C, and module B is used by D and E, then by testing module A you are also indirectly testing functionality that B, C, D and E depend on. Once you have got the kinks worked out of module A, you move to module B, then C, etc. There is also a bit of regression risk mitigation in this model, as the bugs you find earlier in the process have a greater chance of being fixed earlier as well, which means their effects on dependent modules will already be in place when you get to them. If you had tested modules R, S and T only to find a bug in module A, you would have to retest all the intermediary modules between A and R to make sure a cascaded failure had not been introduced.

The overlay with development comes with this model because developer-written unit tests make a fantastic center ring, giving Test a decent level of confidence in the code when it arrives at their door.

Applying this model to our sample application, we would order our testing as follows:

  1. ‘SELECT’ parsing – not only is SELECT the most common SQL command, but it is used by other commands, and in the case of the sample it was architected smartly enough that all SELECTs are handled by the same chunk of code.
  2. all other parsing activities, weighted based upon popularity
  3. ‘magic’ – as defined above
  4. configuration via the GUI
  5. integration within the suite
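As a side note, the inside-out ordering the onion model calls for is really just a topological sort of the module dependency graph. Here is a minimal sketch, with hypothetical module names standing in for the real (unannounced) product's internals:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each module lists the modules it uses.
# (Names are illustrative only; the real product's internals aren't public.)
deps = {
    "select_parser": [],                              # the shared core ("module A")
    "other_parsers": ["select_parser"],
    "magic": ["select_parser", "other_parsers"],
    "gui_config": ["magic"],
    "suite_integration": ["gui_config", "magic"],
}

# static_order() yields every module only after its dependencies,
# which is exactly the inside-out testing order of the onion model.
test_order = list(TopologicalSorter(deps).static_order())
print(test_order)
# e.g. ['select_parser', 'other_parsers', 'magic', 'gui_config', 'suite_integration']
```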

The Beauty is Skin Deep Model
This model is one I occasionally employ when thinking about testing applications which are either brand new (and therefore need to demo fantastically), are complex enough that customers are certain to run them in a pre-production environment first, or whose sales lead time is long enough / pipeline small enough to all but guarantee a code rev between now and deployment. This one gets its name from the underlying principle that how the feature/product looks is the thing that matters most. It would of course be nice if it did what it was supposed to as well, but what is truly important is the look and operation from the (potential) customer's perspective.

Again applying this model to our sample application, we would spend 2/3 of our testing effort on the configuration GUI and suite integration, and the rest of the time making sure the parsing and magic portions were not completely broken. Partially broken in this case is okay.

Clearly, neither of these models can be used in a vacuum. If Product Management and/or Sales needs to be able to demo the product/feature to customers in the near term but deployment won't be for a while, the Beauty is Skin Deep model might be worth exploring, but make sure it is explicitly declared as the testing approach in the product/feature test plan, along with the functionality risk it carries. You could then deploy the Onion model further down the road to capture the functionality problems.

Posted on April 21, 2006 in Quality by adam | No Comments »

While going through some blog backlog on the train, I came across a term I had not seen before. SSoR is the Source System of Record.

Here are some quotes that explain the concept.

From Robert McIlree:
By definition, an SSoR is the final authority on the enterprise value of every piece of data so designated to it. Once exceptions to this start being made, the scheme breaks down rapidly into the data value and multiple movement/storage morass that they’re in now.

From Sandy Kemsley:
When data is replicated between systems, the notion of the SSoR, or “golden copy”, of the data is often lost, the most common problem being when the replicated data is updated and never synchronized back to the original source. This is exacerbated by synchronization applications that attempt to update the source but were written by someone who didn’t understand their responsibility in creating what is effectively a heterogeneous two-phase commit — if the update on the SSoR fails, no effective action is taken to either rollback the change to the replicated data or raise a big red flag before anyone starts making further decisions based on either of the data sources. Furthermore, what if two developers each take the same approach against the same SSoR data, replicating it to application-specific databases, updating it, then trying to synchronize the changes back to the source?

So what does this have to do with testing and quality? Every time data gets moved from one system to another, there is an increased chance of data loss or loss of data synchronization. We can add this to the list of things to look for when testing, and it is a concept to use when fighting for our bugs.
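To sketch what such a check might look like in a test (the store layout and record shapes here are hypothetical, not from either quoted post):

```python
# Minimal sketch: verify a replicated store against the Source System of Record.
ssor = {"cust-42": {"email": "old@example.com"}}
replica = {"cust-42": {"email": "new@example.com"}}  # updated, never synced back

def find_drift(source: dict, copy: dict) -> list[str]:
    """Return keys whose replicated value no longer matches the SSoR."""
    return [key for key, value in copy.items() if source.get(key) != value]

drifted = find_drift(ssor, replica)
if drifted:
    # This is the "big red flag" Kemsley describes: decisions made on either
    # data source are now suspect until the two are reconciled.
    print(f"Replica has diverged from the SSoR for: {drifted}")
```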

Posted on April 20, 2006 in Quality by adam | No Comments »

So on a lark I contributed to a thread in comp.software.testing knowing I could be quoted in an article. Well, I was. It’s over here.

Posted on April 20, 2006 in Quality by adam | No Comments »

I have maintained for a while now that while Agile methodologies may improve the speed at which code is developed, they will have a long-term negative effect on overall software quality. I'm starting to soften my stance a bit on this, but I still have some core concerns about it. The Mary Poppins reference is of course how she got the kids to take their horrible-tasting medicine; while the whole Agile thing doesn't sit well with me, let's call it not as pungent as it was even a couple of months ago. What follows is my 'State of Mind on Agile concepts' post (something more than one person has asked for).

Requirements
If you haven't figured it out yet, I firmly believe that strong requirements from product management are required in order for your project to be successful (on time, all features, adequate quality). In waterfall, spiral and their kin, requirements are specified directly in some form of Product Requirements Document (PRD). In Agile methods, requirements generally come in the form of Use Cases/Stories. Now, I don't pretend to be an expert on use cases, but the ones I have seen online have tended to leave too much wiggle room for developers to interpret things on their own. Interpretation is bad in my books. How is Test supposed to verify that a use case was successfully implemented as product management envisioned without constantly dragging them away from the other things they have on their plate? So if Use Cases are of sufficiently narrow definition, I can go along with them. (And yes, learning more about Use Case creation is on my to-do list.)

Emergent Design
Okay, I know that the Customer doesn't always know what they want, which is the classic argument against Big Design Up Front (BDUF) and strict requirements regimes in general, but the notion of Emergent Design just scares the socks off of me. I can see this working in smaller projects, especially pet ones, but making the design up as you go along in a large, enterprise application seems massively risky — and a key role of QA/Test is to remove risk. This especially falls on its head when you have many silos within your application that must talk to each other and the design of each continues to evolve. A large portion of your time is spent making version x of silo a work once more with version y of silo b. Perhaps if a hybrid approach was conceived where the large, overall design was worked through before coding began, and the component modules were emergent internally (their external profile would not change), I might buy into some version of this.

Documentation
I don't have a copy of the Agile Manifesto with me as I'm on the train with no network, but one thing that gets mentioned constantly in Agile discussions is how the Agilistas hate documentation. Documentation is not only a Good Thing, it is a Must Thing. The argument I have seen against it in the context of Test is that the Use Cases are all the test documentation that is necessary. I'm not buying this one regardless of the spin. Use Cases are the requirements that ensure the functionality of the product, but they are by no means an exhaustive set of Test Cases. These need to be, you guessed it, documented. Another area where the Agile group dispenses with documentation is design documents, generally with the argument that the code is self-documenting and doesn't lie. While that may be true, the design needs to be written down somewhere anyway to allow others who do not want to wade through thousands of lines of code to understand it. Like, oh, say the Test group. How a product is designed is one of the inputs into the 'How do I test it?' equation.

Continuous Integration / Iterations
Love it. I wish we did more of it. We do nightly builds, but that catches whether the build broke, not whether the product, when assembled together, does what it is supposed to. Taking 2 days to get the parts working in a manner in which Test can proceed, each and every iteration, is a total violation of the concept of continuous integration. If an iteration is scheduled to be delivered to Test Monday morning at 10:35, it should be available then.

Unit Tests
I'm not hot on the notion of 'create a failing test then write code to make it pass' unit testing (test-driven development). This would seem to encourage ignoring the larger picture and cause micro-management of features which have to work together in the macro environment. What I am in favor of is the extensive use of unit testing. If your organization does not have a culture of unit testing, in my mind you have already lost the Quality battle. Oh, and just because you are using *Unit, you are not necessarily doing unit testing. If you are using it to create very specific tests aimed at the code, with mock objects and the rest, then you are doing things correctly. If you are using *Unit as a scripting framework from which you run end-to-end functionality tests, then you are doing something wrong. Well, maybe not wrong, as there is certainly value in it, but don't call it unit testing, as that is not what it is.
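To make the distinction concrete, here is a minimal sketch of what I mean by a proper unit test: a mock object stands in for the collaborator so only the one unit is exercised. The send_alert function and its mailer are hypothetical, not from any real product:

```python
import unittest
from unittest import mock

# Hypothetical unit under test: formats a message and hands it to a mailer.
def send_alert(mailer, severity: str, text: str) -> None:
    mailer.send(subject=f"[{severity.upper()}] alert", body=text)

class SendAlertTest(unittest.TestCase):
    def test_formats_subject_and_delegates(self):
        mailer = mock.Mock()  # mock object: no real mail server involved
        send_alert(mailer, "warn", "disk nearly full")
        # Verify only this unit's behavior: the formatting and the hand-off.
        mailer.send.assert_called_once_with(
            subject="[WARN] alert", body="disk nearly full"
        )

if __name__ == "__main__":
    unittest.main()
```

Using the same framework to drive a browser through an end-to-end scenario is a perfectly fine thing to do; it just isn't this.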

Geographic Dispersion
With more and more companies having more widely dispersed workforces and flexible work schedules, the face-to-face interaction which drives a lot of the Agile methodologies is decreasing. Pair programming over thousands of km can be done with the right technology (I believe Oracle does this), but distance is a huge impediment to implementation. Likewise, Scrum (which to me seems to be the best of the Agile processes these days) requires the 15-minute standup meeting. If the team trickles in over a 4-hour window, when does that meeting get scheduled? How about when they are on opposite sides of the continent? I guarantee someone has thought of this, but I can't think of a way that would get the same results as having everyone working from some grand master plan somewhere.

Velocity
What a great term. I'm officially working it into my testing vocabulary. The test velocity is the number of tests executed / number of iterations (or drops — whatever you want to call them). This velocity gives you an estimate of how much time remains in your test effort. And you do have your test cases all recorded so you can do this sort of math, right?
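In code form, with made-up numbers, the estimation is just:

```python
# Back-of-the-envelope test velocity, per the definition above.
executed_tests = 180    # hypothetical: tests run so far
iterations = 6          # hypothetical: drops received so far
remaining_tests = 120   # hypothetical: recorded test cases still to run

velocity = executed_tests / iterations        # tests per iteration
iterations_left = remaining_tests / velocity  # estimated iterations remaining
print(f"velocity={velocity:.0f} tests/iteration, ~{iterations_left:.1f} iterations left")
```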

So there you have it. My pros, cons and indifferents surrounding Agile processes as they relate to Quality and Testing. Let the debate begin over how on crack I am today. 🙂

Posted on April 19, 2006 in Quality by adam | No Comments »

The third, and (currently) final Pillar of Testing is Traceability. I lifted this term from the CMMi world, which likely lifted it from somewhere else. Of the 3 pillars, this is the most advanced one, though at first it may not appear so. It can also be the most time-intensive one to retrofit into a process, so it is best to implement it early in a product's lifecycle.

There are two types of traceability. While both are important, reverse test traceability warrants more attention as it has the most direct effect on overall Quality.

Reverse Test Traceability
You have achieved reverse test traceability when you can directly link a test case to a requirement (or to any earlier artifact in the overall development process). While this may sound conceptually easy to achieve, there are a few gotchas hiding in the woods to trip you up.

  • As soon as you start talking about requirements, you open a whole other can of worms. The root cause of many a project failure is requirements management. To do test traceability, it is best to have requirements that are as specific as possible, giving you smaller buckets to put tests into.
  • There are two parts to requirements management: the part that comes from product management and the developed/fleshed-out part after the requirements have been in development awhile. The second part is a red herring in the context of test traceability. Test cases need to trace back to what product management wants, as they are the little-c customer. If development produces something that is not what product management wanted in the first place and you test the heck out of it, both groups have wasted their time. There is no need for both team leads to lose their heads when the development lead's is sufficient. 🙂
  • What if your product is 6 years old, has had 9 releases and has historically had poor requirements? I would suggest that for now you work on making 'right now and forever more' traceable. When the next lull in the cycle arrives, work on the previous version, and so on.

There is hope for this, though. It seems that a lot of testing vendors have realized this is an area ripe for exploitation and are starting to include support for it in their products. I know, for instance, that Mercury has had the ability to handle reverse test traceability in Test Director since version 7.
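Even without a vendor tool, the bookkeeping behind reverse traceability is simple enough to sketch. A minimal illustration with made-up requirement and test-case IDs:

```python
# Minimal sketch of reverse test traceability: every test case maps back to a
# requirement, and gaps in either direction are easy to report. IDs are made up.
requirements = {"REQ-1", "REQ-2", "REQ-3"}

trace = {
    "TC-001": "REQ-1",
    "TC-002": "REQ-1",
    "TC-003": "REQ-2",
    "TC-004": "REQ-9",  # traces to a requirement nobody asked for
}

covered = set(trace.values())
print("Requirements with no tests:", requirements - covered)  # coverage gap
print("Tests tracing nowhere:",
      {tc for tc, req in trace.items() if req not in requirements})
```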

Forward Test Traceability
The other type of test traceability is (logically) forward test traceability. This is where test cases are mapped to phases further along the product lifecycle; more specifically, the support phase. As mentioned previously, nothing bugs a customer more than when something that was reported as broken is repaired only to become broken again. Every issue brought to the attention of your support organization should have a matching test case (or a complete set of them) to ensure it never comes back to bite you again.

Note that what is 'forward' and what is 'reverse' is context-dependent. This being a pillar of testing, the context is test. If this were a pillar of support or development, the directions and targets would be different.

Benefits of traceability?

  • Structure – every tester is always working against a specific area of the product and that area is easily identified
  • Estimation – by looking at the number of test cases associated with a feature and knowing your test velocity (stealing this term from the Agile kids) you can more accurately estimate your test duration
  • Coverage – you can easily see which requirement(s) are getting a disproportionate amount of testing (or a lack of it) and can reallocate resources to improve overall product coverage
  • Reporting – some organizations' release criteria include such things as 'number of test cases executed', which can result in a pretty useless number when aggregated. It is much nicer to say 'For requirement X, we executed Y test cases'

Oh, and you would make me happy. 🙂
