Posted on November 27, 2007 in Quality by adam

Michael Hunter’s most recent 5 Questions With column is with yours truly. See 5 questions with Adam Goucher for the full interview, where I get on my soapbox for a bit and provide a peek into how I view the world.

Posted on November 27, 2007 in Quality by adam

Building software, and by building I mean the actual compilation aspect, is not that complicated. A couple of commands and out pops a .class or .exe. But when you want to automatically grab the code out of version control, build, package, deploy and test it, things can get a little hairy.

We’ve recently resurrected one of these systems, which has Ant at its core. It’s certainly not efficient, with tonnes of environment assumptions and no safety nets to speak of.

Here is the first of those safety nets: a little batch script which wraps the Ant task that launches the whole shebang and

  • ensures that changes to the environment stay within the confines of the script
  • checks that you have java in your path
  • checks that the java version is the appropriate one
  • checks that ANT_HOME is set
  • sets the classpath appropriately so the listener works

What it doesn’t do is always clean up after itself if something goes wrong, but I hope to catch and handle those cases inside the actual Ant script.

Anyhow, the actual script is after the cut because it is rather large and would bump things down artificially.
(more…)

Posted on November 24, 2007 in Quality by adam

I’ve talked about FindBugs a couple of times: here and here. What I haven’t talked about is how you remove “bugs” that it finds that are not relevant in your context. So I’ll do that now, from within Maven 2 for extra points.

Tuning FindBugs is done through a specially formatted XML file (root element FindBugsFilter). This page of the FindBugs documentation explains it in pretty good detail. Of the methods it describes, I prefer the following:

<!-- A method with an open stream false positive. -->
<Match>
    <Class name="com.foobar.MyClass" />
    <Method name="writeDataToFile" />
    <Bug pattern="OS_OPEN_STREAM" />
</Match>

The upside of this is that it is very narrow. The last thing you want to do is accidentally tune out a relevant bug that someone introduces later in the release. The downside is that creating this file might take a bit of time in a large, legacy project.

We’re using the FindBugs Maven Plugin to integrate FindBugs with Maven. I had originally thought of putting the exclude file into ${basedir}/src/test/static/resources, but everything there gets copied during the test goal and put onto the classpath, which we don’t need. Instead, I added the Maven-ish

<excludeFilterFile>${basedir}/src/test/static/findbugs.exclude</excludeFilterFile>

to my plugin’s configuration. As I tune the other static analysis tools, I’ll be putting their exclude files in the same dir.

One final thing about tuning out results. Even though this tunes bugs out at the specific method level, the possibility of accidentally tuning out something important increases with the size of the method (which is a good argument for keeping methods small). I would recommend that you comment out the excludeFilterFile element every couple of weeks for a bit, just to be sure the net hasn’t been cast too wide.

Posted on November 22, 2007 in Quality by adam

GLSEC 2007 was my first appearance as a speaker at an industry event (I presented at an early DemoCamp, but I don’t count that in the same category).

I figured a regional conference run by people I knew was a safe forum to start building out the larger Brand Of Adam. My talk was in the developer track rather than the testing one, which was the correct placement given who was in attendance. There were about 25 people in the room, and two people came up to me afterwards to say they took lots of notes and think it will help them solve some of their problems. So even though I had a major case of the butterflies, I think the presentation went okay, all things considered.

The key thing I wanted to drive home was that i18n and l10n are not linguistic problems; they are technical ones. Leave the linguistic issues to the translators.

Here are the slides I used (two weeks later than I had planned).

Posted on November 21, 2007 in Quality by adam

When we are testing a web application, we are always trying to keep in mind the goals of the site. Information? Ad revenue? Some sort of service? One factor in achieving those goals is the site’s layout.

In Scientific Web Design: 23 Actionable Lessons from Eye-Tracking Studies we are given the following list (taken from Seth Godin’s summary) of things to keep in mind when evaluating layout and design decisions.

  • Ads in the top and left portions of a page will receive the most eye fixation.
  • Ads placed next to the best content are seen more often.
  • Bigger images get more attention.
  • Clean, clear faces in images attract more eye fixation.
  • Fancy formatting and fonts are ignored.
  • Formatting can draw attention.
  • Headings draw the eye.
  • Initial eye movement focuses on the upper left corner of the page.
  • Large blocks of text are avoided.
  • Lists hold reader attention longer.
  • Navigation tools work better when placed at the top of the page.
  • One-column formats perform better in eye-fixation than multi-column formats.
  • People generally scan lower portions of the page.
  • Readers ignore banners.
  • Shorter paragraphs perform better than long ones.
  • Show numbers as numerals.
  • Text ads were viewed most intently of all the types tested.
  • Text attracts attention before graphics.
  • Type size influences viewing behavior.
  • Users initially look at the top left and upper portion of the page before moving down and to the right.
  • Users only look at a sub headline if it interests them.
  • Users spend a lot of time looking at buttons and menus.
  • White space is good.

While not directly related, I’ll link back to Test Design Pitfalls and Aesthetic Science: Understanding Preferences for Color and Spatial Composition for more things to consider.

Posted on November 21, 2007 in Quality by adam

Three things of note from the November 15, 2007 issue of SDTimes.

  • SAFECode (Software Assurance Forum for Excellence in Code) – A vehicle for communicating best practices. (Insert standard ‘best practices’ disclaimer).
  • Forrester has a new survey out called ‘Problem Resolution Survey Results and Analysis’ (anyone want to send me a copy?)
    • Almost half of the respondents require more than an hour to document a problem, and a problem report uses six types of media on average
    • On average, a problem takes six days or more to resolve, and one in four of the problems reported by a QA or test group are returned as irreproducible
    • One solution is test automation, because automation solutions record everything that happened in the environment during a test. This could slash the time needed to gather information. “80 [percent] to 90 percent of people are still hitting the keyboard” to gather data…
  • Empirix Hammer G5 – VOIP functional and load testing

Posted on November 14, 2007 in Python, Quality by adam

As part of a generification of some of my core metaframework code, I found myself wanting to do a bit of introspection on the contents of the modules I was importing in as tests. Somehow I ended up running across the pyclbr (Python Class Browser) module, which can return to me a list of the classes contained in a module. (I can then check each class to see if unittest.TestCase is its superclass and treat it accordingly.)
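
Here is a minimal sketch of that idea. Since pyclbr only returns lightweight class descriptors, the sketch imports the module to do the actual issubclass() check; the module name in the trailing comment is hypothetical.

import importlib
import pyclbr
import unittest

def find_test_cases(module_name):
    """Return the unittest.TestCase subclasses defined in a module."""
    # pyclbr parses the source without executing it and returns a
    # dict of {class name: class descriptor}.
    descriptors = pyclbr.readmodule(module_name)
    module = importlib.import_module(module_name)
    cases = []
    for name in descriptors:
        obj = getattr(module, name, None)
        if isinstance(obj, type) and issubclass(obj, unittest.TestCase):
            cases.append(obj)
    return cases

# e.g. find_test_cases("test_login") -> [<class 'test_login.LoginTests'>]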

Because nothing is as easy as it could (should?) be, the pyclbr module doesn’t work in Jython, which is what I write this sort of thing in. Turns out that it is smart (?) enough to check whether the file it is to look at is flat text (good) or some other format (not good). Because Jython runs inside the Java VM, it deals with .class files, which are certainly not string-parsable. Consequently, pyclbr.readmodule() would always return an empty dictionary (and no helpful debugging messaging, btw).

Flash forward a bit and I’ve submitted a patch to Jython which will restore functionality to this cool little function.

I’ve got a small but questionable track record at getting patches applied (CPython: 0/1, Selenium IDE: 1/2, Jython 0?/1), so I’m putting the diff below the cut.

(more…)

Posted on November 11, 2007 in Quality by adam

Thursday was the program day for GLSEC 2007. While not really live blogging, I did take notes.

Before we get to the session notes proper, here are the general ones around the conference itself.

  • For a regional conference, it was really well organized and had high caliber speakers
  • The reception with the students afterwards was pretty sparsely attended by students; you would think they would want to make local industry contacts
  • While I don’t have access to the official statistics, I counted the women in attendance at the morning keynote who were not presenters or organizers and saw around a dozen. That is horribly skewed for a conference with ~140 attendees.
  • Unlike DemoCamp or CAST, there were very few laptops in attendance. At one track session only two people other than myself had one open. Of course, there was a lack of power available, but…
  • I sometimes feel bad when I’m teaching and I say ‘the application blew up’, but I heard that expression a lot so I resolve to not feel guilty for using it anymore.

Opening Keynote – Craftsmanship and Ethics

  • Robert Martin
  • Software development is a craft; not yet a profession — but we’re getting close
  • Disciplines that will make us professional (he talked about each)
    • Short Iteration
    • Don’t wait for definition
    • Abstract away volatility
      • Separate the parts of the system that change from those that don’t change
    • Commission > Omission
      • Do something rather than do nothing
      • Even if you do something the business doesn’t want, you have learned something important
    • Decouple from others (mocks, simulators)
    • Never be Blocked
    • Avoid turgid viscous architectures
    • Incremental Improvements
      • Boy Scout rule: every time you check in a module it is in a slightly better state
    • No grand redesigns
    • Progressive widening
      • Add the feature through the stack, not ‘the entire database’ vs ‘the entire presentation layer’
    • Progressive Deepening
      • You do not have to follow the architecture when trying to get a test to pass; you can then fit it back into the architecture later
    • Don’t write bad code
      • Why does something that slows you down make you go fast?
    • Go Fast. Go Well.
    • Clean Code
    • TDD (a minimal sketch follows this list)
      • No production code until you write a failing unit test
      • You are not allowed to write more of a unit test than is sufficient to fail
      • You are not allowed to write more production code than is sufficient to pass the test
      • 30s – 1m cycle around the loop
      • Great TDD methodology metaphor is a traffic circle
      • How much debugging does a TDD developer do? If you introduced a bug, it was only a minute ago so no context shift
    • QA Should find nothing
    • 100% Code Coverage
      • Aim for 100% coverage
      • Well, not really 100…
    • Avoid debugging
    • Manual test scripts are immoral
      • Any tests that could be scriptable should be
    • Definition of Done
    • Test through the right interface
      • Decouple the business rules tests from the gui
      • That way only one set of tests break, not a massive failure
      • Tests are written by humans; humans have intent
    • And a couple others he didn’t have time for
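
Not from the talk itself, but here is a minimal sketch of one turn around that TDD loop, using Python’s stdlib unittest; the add() function is a hypothetical example.

import unittest

def add(a, b):
    # Step 2: just enough production code to make the test pass.
    # (Before this function existed, the test below failed to run;
    # that failure was step 1.)
    return a + b

class AddTests(unittest.TestCase):
    def test_two_plus_two(self):
        self.assertEqual(add(2, 2), 4)

if __name__ == "__main__":
    unittest.main()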

People, Monkeys and Models

  • Ben Simo
  • Had a great sign saying: Be Careful – This machine has no brain of its own
  • Should data-driven testing really be called data-reading testing, as it is reading data in from an external source?
  • The woodpeckers in Weinberg’s Second Law are there because they pound on stuff. Duh. But I only just got it.
  • Trained Monkeys… (a minimal sketch follows this list)
    • Randomly generate input based on displayed input options
    • Monitor for major malfunctions
      • If you log everything, too much data
      • If you log only the error, you don’t get what led up to them
  • Automation’s usefulness is often only a short-term benefit
  • Model based testing
    • Takes automation out of just execution into the design as well
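
Not from the talk, but here is a minimal sketch of the trained-monkey idea in Python. The action names and the driver/health-check calls are hypothetical; the interesting part is the bounded ring buffer, which splits the difference on the logging dilemma above by keeping only the actions that led up to a failure.

import random
from collections import deque

# Hypothetical inventory of the displayed input options.
ACTIONS = ["click_save", "click_delete", "type_garbage", "resize_window"]

def monkey(app, iterations=10000, context=50):
    # Ring buffer: don't log everything, but don't log only the error
    # either; keep the last `context` actions that preceded a failure.
    recent = deque(maxlen=context)
    for _ in range(iterations):
        action = random.choice(ACTIONS)
        recent.append(action)
        app.do(action)                   # hypothetical driver call
        if app.has_major_malfunction():  # hypothetical health check
            print("Malfunction after:", list(recent))
            break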

How to build a world-class QA team

  • Iris Trout
  • The role of managers / leads is to run defense for your team. Gotta love being right; this has been my philosophy for a number of years and is part of my QA 101 course
  • In large organizations, a Quality Center of Excellence is a good idea. Another thing I got right. I had hoped to help lead HP down this route before I left. Not sure how they are doing towards that…
  • When joining an organization, phase in changes to QA. QA and the general commitment to quality is a long term arrangement.
  • Hire the right people for the job. Automators for automation for instance.

Avoid the Unexpected: Identifying Project Risks

  • Louise Tamres
  • The goal is of course to try not to be surprised during the project
  • Risk has 3 audiences
    • Project managers who care about risk mitigation
    • Developers who need to think of plan b
    • Testers who need to know where to focus their testing
  • Risk is the ‘heartburn factor’
  • Ranking is based on rational, non-arbitrary criteria. This makes magic numbers hard. Not impossible, but hard.
  • When prioritizing, ask yourself:
    • Is test x more important than test y? Why?
    • What is important to the customer? And how do we know that?
    • What must be demonstrated to a customer (new or existing)? And when?
    • Does this risk affect whether we can sell it?
  • Ask the developers what worries them about a feature. And since they will lie, search the code for the notes they left for themselves.
  • An important thing about risks is that their relative importance must have consensus among the stakeholders. Not the people in the risk meeting, but the stakeholders. Those can be two very separate groups of people.

Lessons Learned from Performance Testing

  • Ben Simo (again)
  • Don’t script too much too soon – applications change
  • Bad assumptions waste time and effort – so ask more questions
  • Get to know the people in your ‘neighborhood’
  • Know your test environment
  • Data randomization might not be good – as it can make result comparison and investigation (more) difficult
  • Different processes have different impacts – while 80% of the usage is often in 20% of the processes, 80% of the performance issues might not be
  • Modularize your scripts – think about load, not user workflow (if possible)
  • Think about code error detection and handling
  • It is likely a software problem – so throwing hardware at the problem is a hack at best
  • Result summaries can mislead
    • Summaries get summarized
    • A positively skewed distribution can give you acceptable numbers, but really is out of whack with the desired profile when summarized
    • So look at the distribution to know which number is the one you care about (see the sketch after this list)
  • Ask the developers to add transaction counters to get performance history of the app as testers do their regular testing
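
As a minimal illustration of how summaries can mislead (the response times here are invented):

import statistics

# Hypothetical response times in seconds: 95 fast requests, 5 awful ones.
times = [0.2] * 95 + [30.0] * 5

print(statistics.mean(times))    # 1.69: the summary looks tolerable
print(statistics.median(times))  # 0.2: even rosier
print(max(times))                # 30.0: the number your users actually feel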

I then floated in and out of the afternoon sessions and didn’t really take any notes other than HDD (Hope Driven Development) which is what came before TDD. I did however attend the closing keynote.

Closing Keynote – Beautiful Software

  • Patrick Foley
  • Started with a picture of a waterfall, which ‘could be beautiful’ — to a round of chuckles.
  • What makes software beautiful?
    • It has to work
    • It has to look good
    • It has to have a great user experience
  • Software is ultimately about accomplishing goals
  • We’ve managed to get testers (more or less) as first class citizens in the development process. The next step is to then get the designers into the mix as well.
  • Software is hard. Beautiful software is harder
  • Steps to Beauty – Developers
    • Separate the UI from the ‘purely functional code’, which you need to do in order to properly get the designers involved.
    • Consider the user experience in the overall experience
    • Treat the user experience as a top-line requirement
    • Get help (from the professionals)
  • Steps to Beauty – Designers
    • Treat designs as actual software assets in source control
    • Much like developers have to eat their own dogfood, designers have to learn to drink their own champagne
  • Steps to Beauty – Together
    • Get the right people physically together. Again, we’ve done this with development and test, why not design too
    • Consider paired design / development
    • Solve the tools problem for your environment
    • Focus on minimal, ‘skinnable’ UI which is a good way to align the assets between developers and designers

Posted on November 11, 2007 in Quality by adam

I attended the GLSEC pre-conference tutorial on Ruby on Rails on Wednesday. The host was Chad Fowler and, based upon the one day, I think I can recommend that anyone take the four-day version of his course. There is a tonne of clue locked up in his brain and he is good at teaching, which is a rare combination. I’ve got literally pages of notes; they likely won’t make too much sense if you were not there, but here are the ones I think will help others who are testing RoR apps.

  • To ‘merge’ all existing migration files into a single one, use
    rake db:schema:dump

    which will dump the database schema. You would then reset the schema version in the db. This sort of thing was a big problem at Points.

  • Rails (developers) tend to be fast and loose with db rules, so make sure they are properly constrained in the Models or Controllers
  • script/console will give you an interactive session (a la the Python interactive console) but with the entire Rails environment loaded, so you can play with ActiveRecord directly
  • There doesn’t seem to be a ‘Ruby Lint’ type thing aside from CheckR, but it’s not active and has seen no real progress, it seems. Anyone know of a static analysis tool for Ruby?
  • Raw SQL is given to an ActiveRecord object through the :conditions symbol. So if you find a #{blah} (like %s in Python) in the conditions value, you have yourself a SQL injection issue. There is a proper, safe way to use :conditions; that is not it. (See the sketch after this list.)
  • Password hashing for storage in the database belongs in the model
  • Pretty URLs are nicer than Ugly URLs (and easy to do with Routes)
  • In Rails-speak
    • Unit Tests – are for models
    • Functional Tests – are for controllers
    • Integration tests – are multistep tests that involve more than one controller, but no one uses them because there is no scaffolding done for you (and there shouldn’t be)
  • ActiveRecord has a number of built-in validations; use them
  • The default rake task is to run all the tests. Yay!
  • Tests don’t actually commit anything to the database, as they run inside a transaction which is rolled back. So if you have 2 records in a table and you run a test that inserts 100 rows, at the end of the test there are still only 2 records in the table.
  • If you have to do some really ugly SQL, it should be put in the appropriate model with a nice name which is then referenced by the controller
  • Some built-in helpers
    • Mocks – are really stubs; used for removing calls to things like the technorati ping service
    • Fixtures – help you data drive your tests
  • The components stuff is deprecated, so just delete the directory in new projects to prevent its usage
  • Including the default JS libraries is ridiculously easy, so make sure that the app is only actually importing the ones it needs (in the views/application.rhtml)
  • In Rails 2.0, helpers will become first class unit testing objects. Unit testing them now is possible, but not easy.
  • Sessions should be as dumb as possible in Rails
  • Assume that your session could go away at any time. Can the application handle that? Gracefully?
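
Since I made the %s comparison above, here is a minimal sketch of that injection issue in Python terms (the table, column, and input are hypothetical); the #{blah} interpolation in :conditions is the same mistake as the first query below.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")  # hypothetical schema

name = "x' OR '1'='1"  # attacker-controlled input

# Unsafe: string interpolation splices the input into the SQL text,
# the Python cousin of #{blah} inside a :conditions value.
# Executing this query would match every row in the table.
unsafe = "SELECT * FROM users WHERE name = '%s'" % name

# Safe: a parameterized query keeps the data out of the SQL text.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()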

Posted on November 11, 2007 in Quality, Video by adam

A while back I posted a link to all the DefCon 15 videos, and I’ve now started to go through them. There are waaaay too many of them, but I’m game.

Broward Horne talks about click fraud and how simple it is to perpetrate in Click Fraud Detection with Practical Memetrics. He uses a number of heuristics to detect the likelihood of click fraud. One cool thing mentioned is a paper by Google called The Anatomy of Clickbot.A.

Dan Kaminsky, Director of Penetration Testing Services at IOActive, talks about exploiting some of the core technologies and assumptions that the internet is based around in Design Reviewing the Web. He also gets points for the best quote in a long while: Design bugs are like zombies; they come back from the dead. Essentially, he turns the Same Origin Policy against itself, using bugs in Flash that were fixed in Java, etc. back in 1996. Two other notable items:

  • A hacker’s career goal should be to be the hacker in the room when dumb decisions get made
  • Imagine how much money you could make if you could sell the top link result in Google

To solve the second point, he suggests that we are all going to have to have all our content running over verifiable, secure connections.

How To Be A WiFi Ninja is a wonderfully named talk by Matthew L Shuchman (pilgrim) of WarDrivingWorld. Naturally, he had to define what the Ninja Code is, which I’ve reproduced:

  • Determine needs and objectives
  • Never trust the manufacturer’s limitations
  • Make changes to existing setup(s)
  • Access wifi at extended range and with greater speed

Hopefully it is obvious to the reader how we can twist this around into a set of steps for doing better testing. The key thing the talk illustrates is how important it is to know the tech you are working with. Sure, having domain knowledge is important, but knowing how it is built is also critical!
