Posted on October 28, 2009 in Uncategorized by adam

My session at this year’s Agile Tour Toronto had the catchy title of ‘From Start to Success with Web Automation’. I think it went well and the feedback cards and post-talk hallway conversations seem to back that up. It was recorded and should show up on DZone at some point so you too can hear my umms, ahhs and other verbal stumbles. Normally I would just post my slides and be done with it, but they are rather Lessig-ish and are just pretty pictures without some accompanying text. So here are both.

Elisabeth Hendrickson was in Europe recently and gave a keynote in which she identified a number of Key Practices of Agile Testing. While I wasn’t there, she did list them off on Twitter.

  • Collective Test Ownership
  • Continuous Integration
  • Rehearsed Release
  • Automated Technical (code-level) Tests
  • Test Driven Development
  • Exploratory Testing
  • Automated Business (functional) Tests
  • Acceptance Test Driven Development
      Of these Key Practices, three of them directly involve automation at the web level: ATDD, ET and Automated Business (functional) Tests. If you want to get really specific, they also touch on CI and Collective Test Ownership too, so clearly we need to be able to succeed in our web automation efforts if we want to succeed in Agile Testing.

      The problem is that teams rarely come close to succeeding with them. So much so that automating the front-end of applications is considered a flagrant waste of time and effort in some circles. I’ve had success though. And I think the reasons for it are applicable outside of just the projects I have worked on.

      If we are aiming for successful automation, we should look at some of the attributes a successful script commonly has. In other words, successful automation is…

      • Paranoid – Never ever, ever, ever, ever, ever, ever, ever trust the client. Just because something says something happened does not mean it has. Account information claimed to be updated? Check the database! As soon as you find yourself trusting things, you will get burned. Get burned too publicly too many times and you will find your time to work on automation being removed and / or the results just shrugged off as inaccurate. Perhaps ‘never trust’ is too strong. How about ‘trust, but verify’ instead.
      • Efficient and Effective – A script exists to exercise a single piece of functionality. Just because you can test your whole freakin’ app in one script does not mean it is a good idea. True, there will likely be a huge number of ways in which your script could not complete successfully, but those specific failure modes should be pointedly exercised by some other script. For example, in order to get to the page you want you have to log in to the system. Don’t check the whole login process, just take for granted that it will work. Your login scripts will verify the functionality of the login system. Don’t worry about it anywhere else.
      • A Student of History and Linguistics – Automation isn’t new. A lot of research and experience has been generated in the area over the last 10 – 15 years. Sometimes you have to invent a better wheel, but you should know about the design of earlier wheels as well. Do your research and build up a library. Something like xUnit Test Patterns should be on the bookshelf of anyone doing automation, for instance. Also know which language to use when. Don’t fall into the trap of thinking you need to automate in Java just because that is what your application is written in. I, for example, usually automate in either Ruby or Jython, regardless of what the underlying application is written in.
      • Intelligent and Wise – In Dungeons & Dragons, Intelligence is your intellect or smarts and Wisdom is how you apply it to the task at hand. Your scripts should be strutting about with 19s or higher in both categories. They should be able to interact with their environment to build their own data and even decision trees. They should also include their own oracles to determine whether the right thing happened, at the right time, in the right manner.
      • Modest – I kept switching between ‘modest’ and ‘humble’ for this slide. The concept here is that when your script breaks, it doesn’t try to hide the fact. It says very obviously ‘I couldn’t do what you asked here because of this’. When this happens it should clean up the mess it left behind so your environment is not left in a completely untrustworthy state.
      • Automates Checks and Facilitates Testing – Michael Bolton has a meme running right now with the difference between Testing and Checking. The key difference between the two is whether the oracle is automated or not. Automation can include the oracle as a check or it can zoom through your application to a specific page to help facilitate a human to do testing. Don’t confuse the purpose of your script when creating it. Though of course your facilitation script could have a number of checks so the tester knows the environment is, at minimum, ready for their sapience.
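One way to picture the ‘Paranoid’ attribute in code: after a script performs an action, it confirms the outcome against an independent oracle (a database query, say) instead of believing the UI’s claim. This is only a sketch; `verify!`, `ui_claim` and `oracle` are names invented for illustration.

```ruby
# 'Trust, but verify': never believe the UI's claim alone; confirm it
# against an independent oracle (a database query, an API, a log file).
# ui_claim and oracle here are stand-ins for illustration.
def verify!(description, ui_claim, oracle)
  claimed = ui_claim.call
  actual  = oracle.call
  unless claimed == actual
    raise "#{description}: UI claims #{claimed.inspect} " \
          "but the oracle says #{actual.inspect}"
  end
  actual
end

# Pretend the UI said the account balance was updated to 150...
ui_claim = lambda { 150 }
# ...and the database (the oracle) agrees, so this passes quietly.
oracle = lambda { 150 }
verify!("account balance updated", ui_claim, oracle)
```

When the two disagree, the script fails loudly with both values in the message, which is exactly the kind of public burn-avoidance the slide is about.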

      Now that you know what your end scripts will look like, you are ready to start writing stuff. Right? Well, not quite yet. Before we do that we need to know a bit about who the creators, consumers and maintainers of them are going to be. I broadly break these categories into two divisions: those who can code and those who can’t. It is my experience that you have a far greater chance of success if you target the geeks of the organization first.

      Geeks know how to code. They are comfortable inside a text editor and seeing xUnit style automation with all its quirks is not going to faze them in the least. They also know how to Build their own Lightsaber and by applying the DRY principle will abstract out common sequences of commands into helper methods/fixtures. These helper methods will form the basis of your organization’s DSL.

      Non-geeks are not second class citizens in successful automation by any means; they just have a different skill set that they bring to the table. Often this is in the form of Subject Matter Expertise, and what they need is a way to use it efficiently and effectively within the framework. This is best achieved through a DSL that abstracts the technical details away from the business details. And because you had the geeks work on automating stuff first, you already have the beginnings of that. The non-technical tester doesn’t care that there are 250 steps behind the scenes for buying 50 shares of AAPL; they just want to be able to call ‘buy_stock(AAPL, 50)’ and have it magically work.
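The layering can be sketched in a few lines of Ruby. The trading app, the locators and the method names here are all invented for illustration; the point is the shape, not the details: technical plumbing lives in helper methods, and the business-facing DSL is a thin method that composes them.

```ruby
# A hypothetical DSL layer: the non-technical tester calls buy_stock;
# the helpers hide the low-level browser steps behind it.
class TradingDsl
  def initialize(browser)
    @browser = browser  # any object responding to open/type/click
  end

  # Business-level vocabulary: what the Subject Matter Expert writes.
  def buy_stock(symbol, quantity)
    open_order_form
    @browser.type "symbol", symbol
    @browser.type "quantity", quantity.to_s
    submit_order
  end

  private

  # Technical plumbing, factored out once (DRY) and reused everywhere.
  def open_order_form
    @browser.open "/orders/new"
  end

  def submit_order
    @browser.click "place_order"
  end
end

# A stand-in browser that just records the steps, for illustration.
class RecordingBrowser
  attr_reader :steps
  def initialize; @steps = []; end
  def open(path);        @steps << [:open, path];        end
  def type(field, text); @steps << [:type, field, text]; end
  def click(locator);    @steps << [:click, locator];    end
end

browser = RecordingBrowser.new
TradingDsl.new(browser).buy_stock("AAPL", 50)
# browser.steps now holds the low-level sequence the DSL hid
```

Swap the recorder for a real driver and nothing in the business-level script changes, which is what keeps the non-geeks productive.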

      Yes, now you can start actually creating an automated script.


      The first step in a successful automated script is to record the basic skeleton by some means. With the Selenium suite, you use the Selenium-IDE extension for Firefox. In the actual talk I did a recording of a search operation on a local WordPress installation in the IDE. Don’t forget to add your verify and assert statements in. If there is no way for the script to fail, it is not testing (or even checking) in my opinion.

      I almost never save a script within Se-IDE. Instead I’ll export it out to a real language and run it from within Se-RC. Yes, even if it is just a ‘simple’ script. ‘Simple’ scripts have a tendency to increase in complexity over time.

      Add Power

      The real power of Automation is unlocked when you have a real language, not just vendor-script. I’ve argued before that Se-IDE is powerful, but not powerful enough to really get stuff done.

      Here is the script as exported from Se-IDE into Ruby.

      require "selenium"
      require "test/unit"

      class DirectExport < Test::Unit::TestCase
        def setup
          @verification_errors = []
          if $selenium
            @selenium = $selenium
          else
            @selenium ="localhost", 4444, "*chrome", "http://change-this-to-the-site-you-are-testing/", 10000)
            @selenium.start
          end
        end

        def teardown
          @selenium.stop unless $selenium
          assert_equal [], @verification_errors
        end

        def test_direct_export
          @selenium.open "/"
          @selenium.type "s", "pirate"
          @selenium.click "searchsubmit"
          @selenium.wait_for_page_to_load "30000"
          begin
            assert_equal "pirates are way better than ninjas", @selenium.get_text("link=pirates are way better than ninjas")
          rescue Test::Unit::AssertionFailedError
            @verification_errors << $!
          end
        end
      end

      But like most things in the open source world (especially the Ruby part of it), the code that the IDE produces isn’t up to the latest coolness. So with a bit of modification to use the selenium-client gem, we have this code.

      require 'rubygems'
      require "test/unit"
      gem "selenium-client", ">=1.2.15"
      require "selenium/client"

      class CategoriesTest < Test::Unit::TestCase
        def setup
          @verification_errors = []
          @browser = "localhost", 4444, "*firefox", "http://localhost:5500", 10000
          @browser.start_new_browser_session
        end

        def teardown
          @browser.close_current_browser_session
          assert_equal [], @verification_errors
        end

        def test_search_for_pirates
          @browser.open "/"
          @browser.type "s", "pirate"
          @browser.click "searchsubmit", :wait_for => :page
          assert_equal "pirates are way better than ninjas", @browser.get_text("link=pirates are way better than ninjas")
        end
      end


      Or at least the potential for power. The first step for adding power is often to data-drive your script. This is the process of abstracting your script to read data from an external source, which is something Se-IDE can’t do (easily). In an environment of geeks and non-geeks this is often accomplished through the use of CSV files. The chief advantage of this is that you can add scenarios without actually having to change the commands that are executed; just the inputs change. Here is the same script modified to be data driven through CSV.

      require 'rubygems'
      require "test/unit"
      gem "selenium-client", ">=1.2.15"
      require "selenium/client"
      require 'fastercsv'

      class CategoriesTest < Test::Unit::TestCase
        def setup
          @verification_errors = []
          @browser = "localhost", 4444, "*firefox", "http://localhost:5500", 10000
          @browser.start_new_browser_session
        end

        def teardown
          @browser.close_current_browser_session
          assert_equal [], @verification_errors
        end

        def test_search_by_word
          FasterCSV.foreach('data_driven.csv') do |word, link_text|
            @browser.open "/"
            @browser.type "s", word
            @browser.click "searchsubmit", :wait_for => :page
            assert_equal link_text, @browser.get_text("link=#{link_text}")
          end
        end
      end
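For completeness, the data_driven.csv referenced above would just hold one scenario per row: a search term and the link text expected in the results. Something like this (these rows are invented):

```
pirate,pirates are way better than ninjas
zombie,zombies are slow but persistent
```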

      More Power!

      We can go even further though. We want scripts that are Intelligent enough to data-drive themselves. Because we have the full power of a real language at our disposal we can hook into the database directly and let the script do its thing. Of course, in order to do this you need to understand at a very deep level what is going on in your application. That knowledge acquisition is a Good Thing though. The more you understand the system, the more complete your mental model becomes, and the better the testing and more thorough the checking you can accomplish.

      Again, same script, but driven from the database.

      require 'rubygems'
      require "test/unit"
      gem "selenium-client", ">=1.2.15"
      require "selenium/client"
      require 'mysql'

      class CategoriesTest < Test::Unit::TestCase
        def setup
          @verification_errors = []
          @browser = "localhost", 4444, "*firefox", "http://localhost:5500", 10000
          @browser.start_new_browser_session
        end

        def teardown
          @browser.close_current_browser_session
          assert_equal [], @verification_errors
        end

        def test_search_by_word
          dbh = Mysql.real_connect("localhost", "root", "", "selenium")
          res = dbh.query("select post_title from wp_posts order by rand() limit 1")
          while row = res.fetch_row do
            words = row[0].split(" ")
            @browser.open "/"
            @browser.type "s", words[rand(words.size)]
            @browser.click "searchsubmit", :wait_for => :page
            assert_equal row[0], @browser.get_text("link=#{row[0]}")
          end
          res.free
          dbh.close if dbh
        end
      end

      OMG! Cosmic Spider-man-esque Power!

      This is the level you want your scripts functioning at for true success. Once you are here, you can have your scripts run continuously if you have a smart runner that notices when new scripts are added into the mix.
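A ‘smart runner’ of that sort does not need to be fancy. At its core it is just a rescan of a directory, noticing scripts that were not there on the previous pass. Here is a toy sketch; the class and the test_*.rb naming convention are my own inventions:

```ruby
# Toy test-discovery loop: rescan a directory for scripts matching a
# pattern and report any that have appeared since the last scan.
class DiscoveringRunner
  def initialize(dir, pattern = "test_*.rb")
    @dir, @pattern = dir, pattern
    @seen = {}
  end

  # One pass: returns the scripts that are new since the previous pass.
  # A real runner would call this inside a sleep loop and execute each
  # new script as it shows up.
  def discover
    fresh = Dir.glob(File.join(@dir, @pattern)).reject { |f| @seen[f] }
    fresh.each { |f| @seen[f] = true }
    fresh
  end
end
```

Drop a new script into the directory and the next pass picks it up; nothing already running has to be restarted.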

      One problem this doesn’t solve is the Permutation Madness problem. This comes from there being a boatload of browser and OS configurations we need to care about in today’s environment. Selenium Grid is designed to solve this problem. Using a centralized machine you can farm out your script execution to various slave machines, each of which could have a different configuration.

      This solution doesn’t scale so well though. Suddenly you need a whole farm of machines to run your tests, and that will suck up scarce resources to make sure they are all patched, etc. Virtualization helps, but you still need to manage the VMs. Companies like Sauce Labs and Browser Mob exist to remove that maintenance burden from you (and other value-add stuff too).

      (Had I been thinking I would have run my script in Sauce Labs’ OnDemand cloud, but I wasn’t, and I didn’t know how sketchy the wireless would be at the venue.)

      Thus far we have covered what a successful script will look like and a recipe for achieving it. But how do you know when you are at risk of running off the rails? In programming terms, you check for smells. Here are the ones I mentioned in the talk. There are assuredly more.

      • I need to re-record – This hints that you are staying in the Record phase too long and have not added things like error handling and the robustification that is possible in a real language. If the application has changed significantly (intentionally) enough that your script no longer operates at all, throw it out and start with a clean slate. Don’t try and fix the existing one.
      • Number of Steps – Various companies are learning and publishing the optimum length of script for ease of maintainability and readability, and it seems to average around the 200 step / action mark. If you have 1000 steps in a script then you really need to examine what it is doing. Odds are it is really a couple scripts that have organically grown as a single one.
      • Automate Everything – The Cult of Automation is alive and well in the Agile community. Some things should be automated, some things should not be. Learn the difference.
      • Staying too long at a phase of maturity – Similar to the first one; don’t get stuck in Se-IDE, Se-RC or Se-Grid. Just because it was the right level before does not mean it is the same one now: new problems surface, new variations of old problems occur. If your Se-RC scripts take 9 hours to run because they run synchronously, then you likely are (well) overdue for the move to Se-Grid or one of the commercial offerings.
      • Trust – Again, never trust in automation. Verify that everything isn’t a lie, even a well-intentioned one.

      The last thing I talked about was Patterns for Success. I put them last because I didn’t know if I was going to run out of time and figured it was more important to get the Smells (Danger!) in than the Patterns. I didn’t run out of time so the decision ended up being irrelevant.

      • Build a web – Approach your application from multiple angles in order to build a web across it. Just as a spider catches bugs in its web, your automation can catch them in its. (This was one of the key points of Chris McMahon‘s Agile 2009 talk.)
      • Tags – I was taught to organize my scripts by functionality (in Mercury training circa 2000), but the rise of User Stories has people also grouping by User as well. Both systems can work to great success, but there are inherent issues of overlap in them. I’ve been messing with the idea of ‘tagging’ scripts in addition to these structures. This sidesteps the overlap problem by making the overlap itself useful: scripts are organized on disk by functionality, but tagged with the User(s) they affect. Runners need some modification though. As do Test Management Systems (if you are stuck using them).
      • Metaframeworks – I publicly demonstrated for the first time the Metaframework I am working on in my ‘spare time’ (heh, no wonder it is taking forever). A Metaframework will run and aggregate the results of scripts written in a number of different languages, which lets people write them in whatever they are most comfortable with. The point is to exercise the application, not your power in dictating the language tests must be in.
      • Sync ‘n Run – See this post for a larger discussion, but essentially, it is ‘check everything into version control so deployment is just a sync operation’.
      • Design for Parallelism – It is better to design your scripts initially in such a manner that they can be run (massively) parallel from the get go rather than having to hack it in later. Things like file and database row contention become issues. The same techniques application developers use to deal with these problems apply just as well to your automation.
      • Data Doesn’t Have to be Real – Input data has rules and as long as it adheres to those rules then you are golden. It doesn’t matter that you cannot pronounce the First Name the script generated, because, well, you don’t have to. It just needs to be accepted, processed and returned correctly by the system.
      • Test Discovery – Mentioned before, but having a runner which can automatically detect a new script added into the available scripts pool is powerful. This means that you never have to turn off your scripts.
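The last few patterns reinforce each other. As a sketch of both ‘Design for Parallelism’ and ‘Data Doesn’t Have to be Real’, here is input that follows (invented) formatting rules without being real data, plus a run-unique identifier so massively parallel runs don’t contend for the same rows:

```ruby
# Generated test data: pronounceable-ish and rule-abiding without being
# real. The 'rules' here (capitalized, letters only, fixed length) are
# invented for illustration; real systems have their own.
def fake_first_name(length = 8)
  vowels     = %w[a e i o u]
  consonants = ("a".."z").to_a - vowels
  name = ""
  length.times do |i|
    # Alternate consonant/vowel so the result is vaguely pronounceable.
    name << (i.even? ? consonants.sample : vowels.sample)
  end
  name.capitalize
end

# A run-unique suffix keeps parallel executions out of each other's
# way: no two runs ever create or fight over the same account row.
def unique_username(prefix = "tester")
  "#{prefix}_#{Process.pid}_#{Time.now.to_i}_#{rand(10_000)}"
end
```

You can’t pronounce the names it produces, but as the slide says, you don’t have to; they just need to be accepted, processed and returned correctly.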

      And after a couple clarifying questions, that was it.

Posted on October 28, 2009 in Uncategorized by adam

Declan Whelan is a local-ish Agile Coach who has started to appear on the speaking circuit (which is a Good Thing). One of his talks is ‘Building a Learning Culture on Your Agile Team’, which he has done at least a couple times before Agile Tour landed in Toronto. That practice showed, as he was confident at the front of the room and knew his content; unlike my umms and aaahs. His main argument is that high functioning Agile teams are ones that are good at learning. This makes sense with the traditional doctrine of spikes to learn, releasing quickly, etc. While learning isn’t the only attribute that is important, it is, I think, an important one.

  • Learning is the bottleneck on teams (JB Rainsberger)
  • We need to find effective ways to respond to change. That’s learning.
  • There is no short circuiting the learning process
  • If we get really good at learning, we can reduce the learning cycles. And thus deliver value quicker.
  • Learning and Long Term Value are two sides of the same thing. Think Yin and Yang.
  • Need to make the environment safe for learning to happen
  • Book – Overcoming the Five Dysfunctions of a Team
  • Learning is the acquisition of knowledge and the continuous practice of it
  • Book – The Fifth Discipline
  • We learn in three different ways: auditory, kinesthetic, visual
  • Visual learning needs to be an actual image; just ‘seeing’ words is not enough
  • Book – Pragmatic Thinking and Learning: Refactor Your Wetware
  • Shu Ha Ri
    • Wow, I am sick of this. In most cases it comes off as ‘Hey! a cool idea from Japan! Let’s take it!’, but it made sense here. I’m still sick of it though…
    • The way you learn and the way you coach are different at each level
    • The expert may not be the best teacher
    • The Curse of Knowledge
  • In the beginner’s mind there are many possibilities, but in the expert’s there are few. – Shunryu Suzuki
  • The opportunity for learning is deeper in the beginner than in the expert
  • Paper = Promiscuous Pairing and Beginner’s Mind: Embrace Inexperience
  • Stuff from the Fifth Discipline
    • Personal Mastery – know yourself, biases, models
    • Mental Models – be able to articulate and put up for critique
    • Shared Vision – goal and purpose
    • Team Learning – align learning around the bottleneck in the system
    • Systems Thinking – end-to-end thinking is better than thinking in terms of silos
  • Tinkering School – go watch the video.
  • Agile leadership is not about directing, but about steering learning in the right direction
  • Learning needs…
    • to be intentional
    • an infrastructure
    • to be incremental
    • to be safe
  • The single most important thing in Agile? Honest retrospectives
  • James Shore’s Etudes
  • Book – Fearless Change
  • Tribes Learning Communities – “After years of “fix-it” programs focused on reducing student violence, conflict, drug and alcohol, absenteeism, poor achievement, etc., educators and parents now agree, creating a positive school or classroom environment is the most effective way to improve behavior and learning”
  • Functional teams are like a nurturing family
  • Our tendency is to be interested in something that is growing in the garden, not in the bare soil itself. But if you want to have a good harvest, the most important thing is to make the soil rich and cultivate it well. – Shunryu Suzuki
Posted on October 27, 2009 in Uncategorized by adam

I spent a bit of time today looking into the redistribution requirements for things that ship with Visual Studio for someone. (Part of testing is of course to see that we stay in compliance with the various licenses of tools we use to produce software after all.) Here is the verdict.

According to the Microsoft Software License Terms for Microsoft Visual Studio 2008 Standard Edition, section 3ci, you can distribute:

REDIST.TXT Files. You may copy and distribute the object code form of code listed in REDIST.TXT files, plus any files listed on the REDIST list located at:

I don’t have Visual Studio installed on my machine, but John D. Cook sent me the copy of REDIST.TXT from his install which I have put online to view here.

I last had to look up this sort of thing 5 or 6 years ago and seem to recall hoops of extra licensing being required. It’s nice to see that no longer being the case.

Posted on October 22, 2009 in Uncategorized by adam

Stelios Pantazopoulos delivered a nice presentation on useful metrics for project health. Because of the health idea, his metaphor was a patient in a hospital, and the graphs and metrics were the output of the machines. It worked really well, but it almost starts from the idea that your project is already in the hospital. While not as grounded in science, I think a better idea would be some sort of holistic, preventative-medicine metaphor. The same things can be tracked, but the perspective and positioning change completely.

  • Has a chapter on the subject in The Thoughtworks Anthology
  • Metrics need to show how and why a project is on track
  • What you want is simple, quantitative, near real-time metrics
  • PMBOK (apparently) has 4 ‘levers’ which affect a project. I of course didn’t write them down…
  • He adds a 5th: Team
  • The goal of vital sign checks is to bring visibility to all 5 levers
  • The Vital Signs
    • Scope Build Up
    • Current State of Delivery
    • Budget Burn Down
    • Delivery Quality
    • Team Dynamics
  • Make all these public! The last place you want stats like this is buried in a status report. Transparency builds shared team responsibility and trust
  • These metrics (once the framework is in place) should only take 2 – 3 hours per week

Edit: The PMBOK levers are: Budget, Scope, Schedule, Quality. Thanks to Michael Glenn who apparently took better notes than me.

Posted on October 21, 2009 in Uncategorized by adam

Thanou Thirakul did a nice experience report on a project he worked on for a while that was having trouble continuing to use some of the Agile toolkit. Specifically, their automated build and test infrastructure was failing them.

That was actually the illness; the symptoms, and more importantly how they addressed them, were as follows:

  • Team lost faith in the build – The team, having lost faith in the build system, would just ignore the results and, even more damaging, would sometimes just skirt around it entirely. To address this they formed a specific team to restore faith in the build by tackling the other problems. One thing they did intentionally was to add people not currently on the project team to help figure out the solutions. This meant fresh eyes and ideas for a problem people might have tunnel vision towards.
  • No one wants to touch the build script – Being a six-year-old Java project, their build was a series of cobbled-together ant scripts that had grown organically. To fix this they maven-ized their build and got it running inside Hudson. In my experience, this is exactly what you want to be doing, though Ant + Ivy would likely work as well. Sometimes starting over really is the correct idea.
  • Too many false build failures – An analysis of their build failures showed they were largely environment related. To solve this they rebuilt their test machines as virtual machine snapshots. This meant they could quickly return things back to a known clean state, removing the problem of environment degradation.
  • Integration tests taking too long – Since their environment was now virtual, they solved part of the length problem by just spawning more VMs to test with. That just moves the problem around though really. Now you need a runner which will distribute your tests. They solved that problem by building their own test distribution server. It was slick!

Here are some other notes I took:

  • Don’t contort your tools; use the right tool
  • Use the length of past test runs to estimate how long your next one will take.
  • When using VMs, for maximum performance assign each its own disk, as I/O rapidly becomes the bottleneck.
  • More times than not, the first (correct) solution is not to build yet another tool
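That estimation tip is simple to act on. A sketch, assuming you log run durations somewhere; the window size and plain averaging are arbitrary choices of mine:

```ruby
# Estimate the next test run's duration from the last few runs.
# A plain average over a small window; smarter weighting is possible,
# but even this beats guessing.
def estimate_next_run(past_durations, window = 5)
  recent = past_durations.last(window)
  return nil if recent.empty?
  recent.reduce(:+) / recent.size.to_f
end

# e.g. runs of 100, 110 and 120 seconds suggest the next one will take
# about 110 seconds.
estimate_next_run([100, 110, 120])
```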

My major complaint about this talk was his commentary a couple times that ‘the best part about writing something cool like the test server is that you get to release it as open source’. When I asked where I could grab it I got something that sounded like a sales pitch. Even after talking to him afterward I’m not sure whether I would have to part with money to use it. Just release it! The community will find it and try it out in ways you can’t begin to think about.

Aside from that, it was a well done talk.

Posted on October 21, 2009 in Uncategorized by adam

Scott Ambler delivered the keynote at Agile Tour Toronto 2009. I could have sworn I have met him before, but I’m not sure anymore. He is, umm, unique it seems and I think I would have remembered it clearer. Scott is the Practice Leader for Agile Development at IBM and it would seem that his views are, not surprisingly, widely tainted by the size of customer IBM Consulting attracts. I’m also not sure where he fits into the political structure of Agile; he seemed rather anti / bitter / pissed at a number of people in the community at large. But I might be the only one who interpreted it that way and perhaps he was striking that tone to help reinforce the points of his talk.

  • We don’t know what the laws of IT physics are, but they haven’t changed
  • Agile Scaling Model
    1. Core Agile Development – construction focused, small teams with straightforward systems
    2. Disciplined Agile Development – full lifecycle from conception to release, retire risks early, self-organization with adult supervision
    3. Agility at Scale – one or more scaling factors are being controlled
  • That stack of requirements has come from somewhere — up front modeling and planning
  • Scaling Factors (All are a scale, not binary. Easy to hard.)
    • Geographic distribution
    • Team size
    • Compliance requirements
    • Domain complexity
    • Organization distribution
    • Technical complexity
    • Organizational complexity
    • Enterprise discipline
  • Agile teams are good at delivering high-quality siloed applications
  • Traditional teams are good at delivering low-quality siloed applications
  • Different teams/projects are at different places, so how can you have a repeatable process across all of them.
  • Repeatable results beat repeatable process every time
  • Scott’s theory: Agile scales better than traditional and we are only 5 – 10 years away from having hard evidence to back it up
  • In distributed Agile, don’t use colocated team tools
  • Agile needs to ‘grow up’ – his survey numbers show less than 50% of teams are working with legacy data. The implication being that they are building new silos
  • In meetings, focus on the value aspect rather than the status
  • If there are technical dependencies, there are functional dependencies
  • Our tools should generate the metrics we need
  • Bureaucracy is bureaucracy. Automate the heck out of it to get accurate and timely results
  • Part of failing early is to do the risky parts first
Posted on October 21, 2009 in Uncategorized by adam
Posted on October 14, 2009 in Uncategorized by adam

If we follow along with the theory that all testing is heuristic, then when we are teaching testing we need to teach about heuristics. I often start with giving the ‘book’ definition:

A fallible means of solving a problem or making a decision, conducive to learning; presumed to usually work but might fail.

That usually draws a lot of blank looks and furrowed brows. So the next tack I try is to call it a rule of thumb and give a couple examples.

  • Never chase after a bus, there is always another one coming. Well, except when there isn’t or the next one is a really long time coming
  • Father always knows best. I really like this one, but it typically fails in large degrees when my wife is around.
  I don’t know what got me thinking about it on the walk into the office, but it occurred to me that we all carry around with us a vast array of heuristics we apply to everything we do in life. The trick is to be able to recognize which we are applying in what situation. So that is what I did. Here are my (current) heuristics of getting from Place A to Place B.

    • Always go towards the goal, never backwards
    • Arrive no earlier than 5 minutes before planned arrival
    • Arrive no later than 3 minutes after planned arrival
    • Have a planned arrival time
    • Don’t stop moving. Rather than wait for a light, continue along the street until the next opportunity to cross
    • Observe and understand traffic pattern and flow
    • Know where the closest washroom is. (Don’t ask)
    • Know ish where you are going to
    • Know where you are coming from
    • Have a mental model of the area you are in. (The first thing I buy when visiting a new city is a street map)
    • Back streets have some of the more interesting things to see
    • Know how you would pull the emergency cord if necessary
    • Have landmarks that work. (Buildings in hilly cities like San Francisco don’t work.)
    • Landmarks are relative to your point of reference. (The lake is to the south in Toronto, but the East in Chicago — that confused me to no end)

    As testers, we can likely rattle off similarly long lists for ‘forms’, ‘installers’, ‘windows’ and other archetypal testing situations. But just recording those lists for distribution isn’t necessarily useful or desirable. Like my traveling heuristics above, they develop based on your own biases and experiences. And that is something that cannot be transferred. Even if two people witness an event simultaneously they will experience it differently.

Posted on October 14, 2009 in Uncategorized by adam

The kick-drum of the TDD rhythm is Red, Green, Refactor.

At its core, it is just a tight loop of writing a failing test, writing code to make it pass, then making it better without (silent) breakage, thanks to the test.

On this week’s Writing Excuses they used the term Discover, Decision, Action and it dawned on me that this is the rhythm of exploratory testing.

  • Discover – Something happened! How did we notice it? Using our senses and trained intuition to reference our catalog of heuristics
  • Decide – Does our new Discovery matter? Alter what we are doing? Change our tactic choice or course? Lead us further into the rabbit hole or out of the brambles entirely?
  • Action – Once we have come to a Decision (or number of Decisions) we act on one, some or all of them. This action causes us to return to the Discovery state.

And the process continues until we decide to take a ‘Stop Testing’ Action and pop out of the loop.

(Now all I need is a nice graphic showing this cycle and we’re off to the races)
