Posted on June 30, 2007 in Quality by adam

Software metrics are one of those topics that come around over and over. Most recently, Irving Reid got the ball rolling the other day with this post. I responded with a couple links of my own through the backchannel which meant I didn’t trackback, so here they are in the open (with the trackback).

  • Michael’s presentation on Cem’s paper, Software Engineering Metrics: What Do They Measure and How Do We Know?
  • A gem from a mailing list conversation recently; metrics answer the question of ‘where are we now?’ and provide context to the question of ‘where do we want to be?’ (or something like that)
  • Sandy Kemsley blogged yesterday from IQPC BPM Summit and one session had lots of great advice about how to, and how not to, develop metrics that work for your company. Sure, this is in the context of BPM, but likely also applies to how we build and test software
  • Some points I’ve saved from a long thread on metrics on the CMMI mailing list
    • Eliyahu Goldratt, creator of TOC (Theory of Constraints), has some famous sayings:
      “Tell me how you measure me, and I’ll tell you how I will behave.”
      “Measure me in an illogical way, then don’t complain of my illogical behavior!”
      His point is that policies and measurements are the main drivers behind human behavior. Think about it and you’ll see that’s almost always the case, at work, at home, at church, at school, wherever… Unfortunately, the prevailing practice is to establish local optima rules (do what is best for some part of the system – individuals, departments, functions), which are often in conflict with the global optimum (do what will be best for the overall system – the organization).
      This knowledge must thus be used to induce the parts of the system to do what’s best for the system as a whole (systems thinking), i.e., to achieve the goal of the system. For that, the goal must be clearly stated and be known to everyone in the organization. Then you must develop policies and measurements that will drive people to take the needed actions in order to achieve the intermediate objectives, which will lead to the goal.
      TOC provides simple, yet powerful, tools and processes to help develop such policies and measurements.
      This is a “natural law”: measurements are tied to the behavior and performance of people. You may ignore it at your own risk… as you may try to ignore gravity… ;-)
      So we had better learn how to use this law to our advantage…
    • So, what does TOC suggest as measurement categories on “What is not done properly”?
      1. Things that should have been done and were not
        • If we didn’t do the things we should have done, our clients (whoever they are) got disappointed with us
        • This is a measure of our RELIABILITY
      2. Things that should not have been done but, nevertheless, were done
        • If we did things that were not needed, we have overloaded the system with junk! We weren’t effective.
        • This is a measure of our EFFECTIVENESS.

      What is the beauty of having only these two measurements categories? First, they don’t overlap! No ambiguity!
      Second, they are clearly focused on the goal of the system: being effective (doing what it’s supposed to do) and reliable (we can trust it to deliver what it promises). Isn’t that what CMMI is all about? :-)

    • For those interested in knowing more about this measurement “philosophy”, I (not the me “I”, but the original poster’s “I”) recommend the audio-book “Beyond the Goal”, by Eliyahu Goldratt. It’s a set of 8 CDs, accompanied by the respective PPTs. There is a downloadable version (with no PPTs) at www.audible.com. Of course, there is much more discussed in this book, all of it very relevant for the IT industry, particularly for ERP customers and suppliers.
    • When metrics are being developed they should not be attached to employee performance or indicate how an individual employee is performing, but rather how a process is performing.
    • It is totally appropriate to use measures to evaluate individual performance. However, I believe that those measures are different than the ones that we are using to manage process execution.
    • It’s been said that “what gets measured gets done.” I believe that “what gets measured gets manipulated” – especially if my future pay (or even job security) rests on those measures! Tell me the numbers I need to get my raise and I’ll make sure you get them, whether they reflect reality or not. This is certainly NOT the kind of measurement capture, analysis, and reporting that we can tolerate when trying to manage process performance.
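The two TOC measurement categories can be sketched in code as simple set differences over planned versus completed work. This is my own illustrative sketch, not anything from the thread; the task names are invented:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: the two TOC measurement categories expressed as
// set differences between what was planned and what was actually done.
public class TocMetrics {
    // Category 1: things that should have been done but were not — a
    // reliability gap (we disappointed whoever was expecting them).
    static Set<String> reliabilityGaps(Set<String> planned, Set<String> done) {
        Set<String> missed = new HashSet<>(planned);
        missed.removeAll(done);
        return missed;
    }

    // Category 2: things done that were never asked for — an
    // effectiveness gap (we loaded the system with junk).
    static Set<String> effectivenessGaps(Set<String> planned, Set<String> done) {
        Set<String> extra = new HashSet<>(done);
        extra.removeAll(planned);
        return extra;
    }

    public static void main(String[] args) {
        Set<String> planned = Set.of("login-tests", "report-export", "perf-baseline");
        Set<String> done = Set.of("login-tests", "report-export", "ui-polish");
        System.out.println(reliabilityGaps(planned, done));   // [perf-baseline]
        System.out.println(effectivenessGaps(planned, done)); // [ui-polish]
    }
}
```

Note that the two results cannot overlap by construction — one is drawn only from the planned set, the other only from the unplanned work — which matches the claimed beauty of the two categories: no ambiguity.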
Posted on June 30, 2007 in Housekeeping by adam

I’ll be re-launching this blog at a new domain sometime over the summer, and part of that re-launch will be branded ‘stuff’. In the meanwhile, you can get the first piece of merchandise from my new cafepress store — it’s even unbranded for now.

Posted on June 28, 2007 in Quality, Video by adam

Guice is a dependency injection framework for Java 5 developed by a couple of people at Google. From their page…

  • Guice empowers dependency injection.
  • Guice cures tight coupling.
  • Guice enables simpler and faster testing at all levels.
  • Guice reduces boilerplate code.
  • Guice is type safe.
  • Guice externalizes configuration when appropriate.
  • Guice lets you compose your application of components which are truly independent.
  • Guice reports error messages as if they will be read by human beings.
  • Guice is the anti-static.
  • Guice is small and very fast.

Now, I can read Java, and if forced to I can write some simple beans or whatnot to assist my testing of something, but by no means do I call myself Java proficient. So take my notes with a grain of salt, but my initial impression is that this is darn useful and should be explored more deeply — at least with the projects I need to work on.

  • Using static references comes with baggage; leads to ‘all or nothing’ testing and causes ‘static cling’
  • Guice lets you not use static
  • 3 types of injection Guice uses
    • Field – not really nice as it can lead to non-final public fields
    • Method
    • Constructor – Apparently this is the one we want to use most
  • The GC is where things go to die; Guice creates those things
  • Integrates with Struts 2 so actions are automagically injected
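To make the ‘static cling’ point concrete, here is a minimal plain-Java sketch of the constructor-injection pattern that Guice automates with its @Inject bindings. All names are invented for illustration, and no Guice dependency is used so the sketch stays self-contained:

```java
// Hypothetical example: constructor injection by hand. With a static
// reference to a real billing gateway, testing OrderProcessor would be
// "all or nothing"; injecting the dependency lets a test swap in a fake.
interface BillingService {
    boolean charge(int cents);
}

class OrderProcessor {
    private final BillingService billing;

    // Constructor injection: the dependency is handed in, final, and
    // swappable — this is the style the talk recommends using most.
    OrderProcessor(BillingService billing) {
        this.billing = billing;
    }

    boolean placeOrder(int cents) {
        return billing.charge(cents);
    }
}

public class InjectionDemo {
    public static void main(String[] args) {
        // In a test we inject a fake instead of a real gateway.
        BillingService fake = cents -> cents > 0;
        OrderProcessor processor = new OrderProcessor(fake);
        System.out.println(processor.placeOrder(500)); // true
    }
}
```

Guice’s contribution is wiring the constructor arguments for you (via an Injector and bindings declared in a Module) instead of you calling `new OrderProcessor(...)` everywhere, but the testability win comes from the injection pattern itself.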

The slides and the rest of the support material appear to be on their main site, but you should watch the video as the presenters are pretty entertaining, and informative, in it.

Direct link here.

And while we’re talking Guice, there is another video entitled Becoming More Guicy: Guice Intermediate Topics which was even more over my head, but for people who know Java it should be pretty straightforward (I gave up halfway through).

Posted on June 28, 2007 in Podcasts, Quality by adam

In a mailing list (not sure which) this week someone mentioned that they sometimes use ‘distraction’ as a test method. The theory is that when IT folks are installing their product they are not sitting alone in a corner doing it, but are instead being distracted by the rest of their other work. Seems like a pretty good theory. I accidentally applied it to this week’s Technometria, which has Robert Glushko, who runs the Center for Document Engineering at UCB, as the guest. Here are the choice bits that floated to the top.

  • Document engineering these days is about designing the payload of web services
  • Interoperability in the context of documents is understanding a document the way I intended it to be understood
  • Different classes of web service (p2p, p2b, b2b, b2p) require different types of documents
  • Need to understand and communicate design patterns for documents

Not a lot, but I had not given much thought to how I build (engineer) the documents I create. They tend instead to be organic and grow limbs as I need them to. If you work somewhere where you (individually, or as an organization) are creating new document structures, there is apparently a wealth of information available on this topic. Starting of course with Robert’s book.

You can listen to it yourself over here.

Posted on June 28, 2007 in Quality by adam

Just some little items that have been kicking around for awhile but are not big enough to really warrant their own post.

Posted on June 26, 2007 in Quality by adam

I just received an e-flyer from Bell which Hotmail (Windows Live Mail) marked as potential phishing spam. This triggered the memory of when I was at Points and testing a new product for American Airlines. At the end of the process, a confirmation email was sent to the users, but it would get flagged by AOL as spam and eventually it got put into their blacklist.

So here is another thing that you need to check on when wearing your QA hat (which is a bigger hat than the Tester one): make sure that your organization has contacted the major email providers and that your sending addresses are in their whitelists, or that they conform to the mail standards in such a way as to not trigger their filters.

The postmaster site provides an overview of the services businesses can make use of when interacting with Windows Live Mail. Specifically, have a look at the Safe List Information and Sender Solutions sections. I’m willing to bet all the other big mail providers offer similar resources as well.

Posted on June 26, 2007 in Quality by adam

TWST3 was the second time I had seen Cem present. I talked about the first time over here, and I think that of the two talks, the one he brought this time was the weaker content-wise. Ironically though, I got more notes from it. Before I get to those notes, everyone should go over to his blog and read the most recent post regarding the Principles of the Law of Software Contracts. The impact of this is going to be very large for anyone who has customers in the US.

And now the notes

  • There are two types of risk; product and project
  • Bug catalogs can (and do) go out of date
  • Some uses for bug catalogs
    • Test idea generation
    • Test plan auditing
    • Getting unstuck
    • Training new staff
  • Relatively new testers fail when using catalogs (in Cem’s experience of training testers)
  • To audit a test plan
    • Make a list of risks/failures
    • Find each in the test plan
  • heuristic – if a bug can be imagined in more than one place in your product, include it in the list
  • If you don’t know if there is a risk, the risk is your lack of knowledge of the potential risk
  • Mindjet Mindmanager (mindmaps are something I’m becoming more and more interested in)
  • A generative taxonomy is a structure to make classifications
  • look for classes of bugs that were missed (fundamental blindness risk)
  • Too often people get bogged down in the obsession to categorize risks
  • Taxonomy generation should be iterative
Posted on June 25, 2007 in Quality by adam

Yet another post regarding the scribbles I took during people’s presentations at TWST3. This time on Morven Gentleman’s talk about ‘How much testing is enough.’

  • A fundamental assumption of risk analysis is that everything can be reduced to a comparable value. But how do you compare the two risks of ‘being mentioned in parliament’ and a ‘specific dollar loss’? There is no common comparable value
  • Here’s a heuristic; father knows best
  • The reality of the world is that business risk trumps technical risk
  • Putting risks into bins doesn’t really help (due to the comparable value problem)
  • When doing any sort of risk assessment, you need to have access to all risk holders
  • Risk is dependent on whatever else is going on in the world at the time and by extension is constantly changing
  • The cost of a test based upon its risk can be substantially altered depending on where in the sequence of all tests it is conducted
  • Testing based on risk with test data that does not accurately reflect field data is a waste of a test
  • After the initial rush of bugs when a new version of software is released, organizations are lulled into a false sense of confidence because the number of bugs reported decreases (software reliability growth). That does not mean that new bugs are not being found, but that customers have stopped reporting them or are learning to work around those limitations.
  • The original purpose of requirements documents was to be able to pay for partially completed work
  • What is in the budget for a loss based upon a missed risk?
  • Something is better than nothing, but making sufficiently wrong decisions can be worse. Just do the best you can
  • There are two types of positive tests
    1. Tests that give you the same result every time
    2. Tests that give you a different result each time
  • Reliability is a perception of robustness
  • Robustness is how the software behaves when faced with bad things
Posted on June 22, 2007 in Podcasts, Quality by adam

Andrew Keen is making the rounds promoting his new book The Cult of the Amateur, whose main thesis appears to be ‘Traditional Media Good, Web 2.0 Bad.’ Regardless of how you react to that, he does make a couple of noteworthy points.

  • Web 1.0 was an attempt by established media companies to use the internet as a distribution method. Web 2.0 is an attempt to make new companies based around user generated content
  • The biggest problem with Web 2.0 is the associated culture of anonymity, in which anyone can claim to be anything and can issue death threats with impunity, as that is a police matter
  • Believes there is rampant media illiteracy; that is, we as consumers of user generated content do not know how to see through the agendas and conflicts of interest of the various authors, and a lot of what is put on the internet is taken as gospel. Is Wikipedia gospel? I know my daughter thinks so, so perhaps there is something to this argument, but I don’t think it is as bad as he makes it sound — but then again, having a wishy-washy stance on something is not going to make an interesting book.
  • Why Traditional Media is better than new media
    • They reveal their sources (sometimes begrudgingly)
    • They have a rich history from which to gain understanding of their biases and positions. For example, I won’t touch Now or Eye because I know their political leanings and I think it ruins the paper
  • There used to be a clear distinction between content and advertising, but that is not so clear anymore. His position is that bloggers can’t survive on blogging alone, so they need to take external money to pay for things like their mortgage. And who has the money? The advertisers of course. Also, the top YouTube videos are often commercially created, not user created.
  • It appears that Kevin Kelly is his intellectual nemesis (he mentions this interview)

You can listen to the podcast here.

Posted on June 21, 2007 in Podcasts, Quality by adam

Sara Ulius-Sabel, Metrics Manager for Whirlpool, spoke at Managing Expectations Week about their use of metrics in driving user design that is Useful, Usable and Desirable. Unfortunately, when asked about the specific metrics they use, she threw out the classic ‘it’s proprietary, but they are user facing.’ That’s great, but it doesn’t help me at all. Anyhow, here are my notes.

  • Do you compete with yourself? If so, what is the differentiator between brands?
  • Is the mix of Usefulness, Usability and Desirability proportionate? Does it have to be?
  • The way we have been doing things is not necessarily creating the right thing. Do we even know what the user wants? Does the user?
  • More features does not equal better, sometimes the opposite is true
  • When a person/group in the organization wants to add something to the design, ask them why
  • Think holistically about how your product is going to be used
  • Don’t fake knowledge; go to the people who actually have it
  • Good design should be intuitive to the user. Unnoticed, in fact, until the user is explicitly pointed to it
  • Features and Aesthetics contribute to the desirability
  • Desirability is a moving target and constantly changes. Can you react?
  • You don’t have to be the same thing to all people, but you have to be the right thing for your people
  • Can you tell a story / user history about your customer?
  • Use metrics to answer the question ‘here is where we are now, where do we want to be?’ The answer will drive the decision making process

Listen to it yourself here. Other speakers from the same event have been recorded here.
