Posted on October 19, 2010 in Star West 2010 by adam

As part of my ongoing experiment of going to conferences to increase my visibility and promote my company, I attended my first Star West conference this year. So far the experiment isn’t going well: I lose money on every conference, which is apparently not uncommon, as the payoff from the speaker circuit takes a couple of years to materialize. Now they tell me…

I had missed the submission deadline for Star West and didn’t want to pay full price for admission, so I volunteered to host the Test Automation track for SQE. Insider secret: conferences are always looking for people to do this, and you will often get your admission comped in exchange. It meant that I didn’t get to choose which sessions to attend, and that I had to stay in them for their entire duration.

Not being able to leave meant I learned a valuable lesson: don’t judge a session by its first two minutes. I tend to give up on sessions too early, opting instead to sit in the hallways and socialize with people I only see at conferences.

So here are write-ups on every session from the Star West 2010 Test Automation track. It would have been nice to embed the slide decks in each via something like SlideShare, but the decks cost money to access and certainly are not embeddable.

The thing that most impressed me about my Star experience is the program book they give you. Each session gets a page with the topic blurb at the top and ruled paper for the rest, so notes stay nice and tidy and in one spot. CAST and other similarly sized conferences should take note of this, though I don’t see it scaling to Agile size without massive deforestation.

My experience was, however, blighted by some of the HP booth staff, who I think owe me an apology. And I suspect that as a result I’ll be persona non grata at SQE events for a while. Which is fine with me for the moment, as I don’t think I’ll be doing the conference thing in 2011 at least.

Posted on October 19, 2010 in Star West 2010 by adam

So a curious thing happened to me while in the Dialogues session at Star West this year. As mentioned in the write-up, I was in the Getting Started group, and by virtue of being seated next to the easel I took the marker and became the secretary for the group.

I think there were about a dozen people in this particular group and, at the risk of sounding overly pompous, I was the only person who had succeeded with automation. Which makes sense if you think about it, as the groups were self-organizing — those who didn’t need to talk about getting started went to more advanced or specific topics. I only went to this group because Star West was part of a year-long experiment on going to conferences as marketing. (I’m actually writing this on the plane to STPCON.)

The latter half of the conversation focused on tools and tooling. And as I remarked on Twitter later, Selenium doesn’t have to do much marketing to keep growing, as there is a lot of dissatisfaction in the market around QTP, if this sample is any indication.

With full disclosure given that I work on and with Selenium, I gave general consultant-y sounding answers around tools, some of which made it into The Relationship of Testing Tools to Economics & Freedom. To me, using QTP to test web applications is a darn near unethical waste of company resources (cash). Yes, there are times when you might need to use it. For instance, when I was in the WinRunner world there were plugins for driving terminal emulators and PowerBuilder applications, and I wouldn’t be shocked if QTP were the best option in those categories.

My argument against QTP aside from the cost goes something like this…

  • It’s closed source (so you can’t build your own lightsaber)
  • It doesn’t have a ‘real’ scripting language, where ‘real’ means cross-platform and where skills learned in it can be transferred elsewhere. VBScript might as well be Vendor Script in automation.
  • It is Windows only — and there are more and more non-Windows consumers of web applications every day. If you have a public web application that doesn’t care about the growing Mac (or even Linux) market, let me know and I’ll create a clone to target it.

All three of these were brought up during the session, and even though I repeatedly said “I could take this over quite easily, but don’t want to”, it became somewhat of a Q & A.

Unbeknownst to me, though, there was a ‘mole’ [used tongue-in-cheek] in the group who did not enjoy my providing an alternate view to QTP being ‘all that’. This person did two things after the session that significantly changed my Star West experience.

First, they wrote a long and scathing comment on the back of the feedback form about my ‘bashing’ of QTP. Feedback forms are very important to the Star conferences and are used as input for whether speakers are invited back; using one to comment on another session participant is an unnecessary blight against the actual session coordinators, which is unfair. And frankly cowardly. Whoever you are, you’ve lost my respect.

I’m extrapolating that the second event is related to the same person, but during the break I was still in the room talking to a few remaining people when two people from HP’s booth approached and introduced themselves. Someone had come up to their booth and told them that I was ‘bashing’ (the same word as on the feedback form) QTP, and I was informed that HP, as a sponsor, ‘was not allowed to bash Selenium so I should stop bashing QTP’.

Let’s pause for a second for context.

I was not at Star West as a sponsor.

I was not at Star West as a speaker.

I was there as a volunteer.

My badge said ‘Delegate’.

Now, I did get comped entrance into the conference by volunteering as a track host. This meant I didn’t get to pick which sessions I went to, as I had to be in all the ones I was hosting — so though no money was exchanged, a price was paid. The job of the track host is to help the speakers get their laptops working on the screen and get them miked up, introduce them, thank them at the end, and be a gopher if needed. Other than that you are just another session participant. It is not a paid position, and the person does not represent SQE (the organizer of the Star events) in any capacity.

So I, a delegate, was told to be quiet by a vendor/sponsor. Ummm, ya, HP — your money didn’t buy you that. Or shouldn’t have at any rate.

Now, if I were either of those things, the rules would be different. Star prides itself on not having marketing sessions, and playing fair-ish on the trade show floor could be seen as just good etiquette. Remember, though, that I was a delegate.

I’m pretty pissed that those two from the HP booth had the nerve to try to censor me. The proper thing to do would have been to ask to talk to me and see if they could address my concerns about their product. They’re not going to win me as a convert, but at least they would know what I was saying and the reasons for it. Instead they sent down the goon squad.

And for that, I think I’m owed an apology — from HP.

Three final points.

First, when I recounted this to Rob Sabourin (whose session I missed because I was doing my volunteer duties) later, he suggested that listening to a recording of what I actually said, and how I said it, might be useful. There are a couple of people in the testing community who, when they get onto subjects they are passionate about, can come across stronger than intended. I concede that this might be such a situation. Alas, no such recording exists (to my knowledge).

Second, volunteers at Agile all wear shirts emblazoned with ‘VOLUNTEER’ so no one can accidentally assume they are anything but a volunteer during their shifts. All conferences, including Star, should pick up on this model, if not with shirts then with buttons or ribbons or something else. I fear that because I was seen in all the automation sessions, more was assumed about my status than was really the case. (Though ironically my session hosting duties for this particular one were limited to cleaning the room in between and after the session.)

And finally, a sincere apology to Dot and Mieke for any possible repercussions from the feedback you might have received. And to Lee Copeland (program chair of Star West), whose morning was ruined by having HP track him down so that he had to track me down for a chat (though I think the right answer would have been to tell HP that I was neither a speaker nor a sponsor, so HP had no grounds to complain).

Posted on October 19, 2010 in Star West 2010 by adam

Alex Imrie’s talk has the distinction of being the one I took the most notes in. In a nutshell, it was about when and how to handle things blowing up in your scripts. And they will. In her intro she said that she “shouldn’t be here” since she comes from a linguistics background. I would argue that makes her more qualified than most to be there; we need an even more diverse set of backgrounds thinking about stuff in test.

  • We want our automation results to be consistent
  • Some definitions
    • An ‘event’ is a deviation from plan
    • An ‘exception’ can be handled in the script
    • An ‘error’ can’t be dealt with in the automation
  • I think I have a problem with the chosen terminology, since these words already mean one thing in a programming context and now we’re overloading them with something else. The distinction, and the need for a name, is valuable though
  • Events can happen from the development process, the test environment or the test platform. Ideally we want them to be in the platform as that is where the interesting things are from a testing perspective.
  • The big solution to these is really better communication (we knew that) but also one can try to build in some manual tester intelligence
  • React to the real issue, not the inherited problem
  • Use state validation before and after an action — it can clutter the script, but it is valuable, especially with tools like Selenium which will happily click on a hidden field (see the sketch after this list)
  • But of course that assumes state hooks were added for you.
  • If you can’t eliminate dependencies, at least know them
  • Setup and teardown are often the most complex parts of automation
  • ‘Local’ event resolution has the workaround in the test itself
  • ‘Global’ event resolution is more like a high level exception handler that logs a message and then bails out of the script
  • Can your teardown work in both happy and sad test situations?
  • Start with a Global handler and add Local ones as needed
  • Continuous Integration + Automated Testing = Merciless Transparency
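
To make the local/global distinction concrete, here is a minimal sketch of how I might structure it in Python with the Selenium WebDriver bindings and unittest. This is my own illustration, not something Alex showed; the URL, element id, and helper name are hypothetical.

```python
import logging
import unittest

from selenium import webdriver
from selenium.webdriver.common.by import By

log = logging.getLogger("automation")  # arbitrary logger name


class CheckoutTest(unittest.TestCase):
    """unittest plays the 'global' handler role here: any event the test
    does not resolve locally bubbles up, gets recorded as an error, and the
    script bails out -- while tearDown still runs."""

    def setUp(self):
        self.driver = webdriver.Firefox()

    def tearDown(self):
        # Teardown has to work in both happy and sad situations, so it
        # assumes nothing about where the test stopped.
        try:
            self.driver.quit()
        except Exception:
            log.exception("teardown could not close the browser")

    def click_when_visible(self, element_id):
        # 'Local' resolution plus state validation: Selenium will happily
        # click a hidden element, so check its state before acting.
        element = self.driver.find_element(By.ID, element_id)
        if not element.is_displayed():
            self.fail("element %s is present but not visible" % element_id)
        element.click()

    def test_checkout(self):
        self.driver.get("http://example.com/cart")   # hypothetical URL
        self.click_when_visible("checkout-button")   # hypothetical id
```

Starting with the framework-level handling and only adding local checks like click_when_visible where experience shows they are needed matches the “start global, add local” advice above.
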
Posted on October 19, 2010 in Star West 2010 by adam

Clint Sprauve did a pretty slick talk at Star West in the ‘do these things and you will fail’ format that is popular these days. You know you are attending a talk in the US when it starts with a football metaphor; it kinda worked, but likely would have gone over better with a different audience. Ah well.

  • Relying on unit tests alone is like only spellchecking a novel — everything may be spelled correctly, but does anyone want to read it?
  • The techniques…
    • Record and Playback only
    • Dependencies between Tests
    • Elaborate Test Frameworks
    • Substantial Duplication of Tests
    • Happy Path only
    • No (or not enough) Unit Tests
    • No (or not enough) GUI Tests
    • Poor readability of test cases
    • Go Agile and force BAs and SMEs into automation
    • Keyword Driven Testing
  • Manual testing didn’t die because we created automated testing
  • A Pragmatic Approach
    • Don’t let tools drive your quality efforts
    • Use DSLs (either Cucumber-ish or Keyword); a keyword-style sketch follows this list
    • and a couple other things I didn’t catch
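
To show what the keyword flavour of that advice can look like, here is a tiny, hypothetical sketch in Python: test cases are plain data rows that BAs and SMEs can read, and a small interpreter maps each keyword onto an implementation function. The keywords, the URL, and the interpreter itself are all invented for illustration and are not from Clint’s talk.

```python
# Hypothetical keyword implementations; a real suite would drive a browser
# or an API here instead of printing.
def open_page(url):
    print("opening", url)

def enter_text(field, value):
    print("typing", value, "into field", field)

def verify_title(expected):
    print("checking that the page title is", expected)

# The vocabulary of the DSL: keyword name -> implementation.
KEYWORDS = {
    "open page": open_page,
    "enter text": enter_text,
    "verify title": verify_title,
}

# A test case is just readable data.
LOGIN_TEST = [
    ("open page", "http://example.com/login"),
    ("enter text", "username", "adam"),
    ("verify title", "Welcome"),
]

def run(test):
    # The interpreter: look up each keyword and call it with its arguments.
    for keyword, *args in test:
        KEYWORDS[keyword](*args)

if __name__ == "__main__":
    run(LOGIN_TEST)
```

A Cucumber-ish DSL has the same shape, with Gherkin sentences standing in for the keyword rows and step definitions standing in for the functions.
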
Posted on October 19, 2010 in Star West 2010 by adam

Thursday morning at Star West found me in a workshop held by Dorothy Graham and Mieke Gevers. It was actually two workshops split by the morning break. The attendees self-organized into 7 or 8 groups, each talking around a particular topic.

For the first part I went to the ‘Getting Started’ group as I certainly have no small number of ideas on how to do it. There was lots of angst about having automation thrust on people and in particular QTP (which then inspired The Relationship of Testing Tools to Economics & Freedom). It also resulted in another unfortunate incident which I’ll document later.

The second part was, I think, much better than the first. In it there were only two groups: one on building a framework, which Mieke participated in directly, and one on ROI, which Dot facilitated. This format lent itself to greater depth of dialogue and deeper learning, and I think they should use it again in the future.

Each group took notes on chart paper, art-show style, and the sheets were photographed. Once I get a copy of them I’ll update the post.

Posted on October 19, 2010 in Star West 2010 by adam

Dietmar Strasser talked at Star West 2010 about incorporating state into keyword-driven UI automation tools to limit the number of keywords presented and reduce the use of keywords at the wrong time. It seemed to drift between product demo and cool-idea show-and-tell but ended up not really being either, as it is just an internal tool they are playing with. My notes are very tool-specific and not really relevant to a general audience. A cool experiment, but I think there are far too many layers, and I can see creating and maintaining things within this system becoming a huge task. His internal usage says otherwise, but then again, he wrote the tool. How many times have I fallen into that trap?

  • Keyword growth is a problem.
  • State-driven Testing = Keyword-driven Testing + (UI) State Management
  • The tool has an XML based DSL to model the state
  • That XML is parsed to control the state stack (see the sketch after this list for the general idea)
  • Who does what?
    • DSL – tester, domain specialist
    • Test Design – tester, domain specialist, product owner
    • Implementation (Java only right now) – developer, automation engineer
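
The tool itself isn’t public, so here is a purely hypothetical sketch of the general idea in Python: an XML model describes which keywords are valid in each UI state, and executing a keyword moves you to the next state, so only the keywords that make sense right now are ever offered. The XML schema, state names, and keywords are all invented.

```python
import xml.etree.ElementTree as ET

# Invented state model: each state lists the keywords valid in it and the
# state that executing each keyword leads to.
MODEL_XML = """
<states>
  <state name="login">
    <keyword name="enter credentials" next="dashboard"/>
  </state>
  <state name="dashboard">
    <keyword name="open report" next="report"/>
    <keyword name="log out" next="login"/>
  </state>
</states>
"""

def load_model(xml_text):
    root = ET.fromstring(xml_text)
    return {
        state.get("name"): {
            kw.get("name"): kw.get("next") for kw in state.findall("keyword")
        }
        for state in root.findall("state")
    }

model = load_model(MODEL_XML)
current = "login"
print("keywords offered:", list(model[current]))   # only 'enter credentials'
current = model[current]["enter credentials"]      # executing it changes state
print("keywords offered:", list(model[current]))   # dashboard keywords appear
```

The appeal is that an editor backed by such a model only ever offers ‘open report’ once you are on the dashboard, which is exactly the right-keyword-at-the-right-time problem the talk was addressing.
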
Posted on October 19, 2010 in Star West 2010 by adam

Antti Huima loves algorithms and his talk at Star West 2010 reflected this. And that is actually a good thing.

  • If V-Model is constructing a skyscraper, Agile is working on an oil painting (since oil never really dries and is constantly reworked). This is the best metaphor I think I’ve seen on the old vs. new way of doing things.
  • He did mention crunch time a couple of times when describing agile, though. If you have crunch time, you’re not agile.
  • Automated Test Design (ATD) in this context means producing test cases by running some sort of algorithm or program, such as Pairwise, Finite State Machines or System Model Driven (a small FSM-based sketch follows this list)
  • It’s automated since a machine is doing it
  • BUT humans are involved in every step
  • When choosing your ATD strategy, think about
    • How quickly can you update your model and generate new tests?
    • How small an increment can it handle?
  • You can use agile for implementation and v-model for test design
  • Rules of thumb around requirements
    • Small changes in requirements must lead to small changes in models
    • Must be able to start with a small model rather than the whole thing immediately
    • No significant delays caused by tooling
    • Support for generating directly executable tests
  • Only model what you need right now
  • Helps TDD since the tests are created for you. I don’t think I believe this one. TDD’s big win is not the suite of regression tests (which are nice) but the evolving design and testability that result from building the tests by hand. Having a machine write them misses the point.
  • You can’t ask Deep Blue why it made a [chess] move. The same applies to your generated tests, which is the big weakness. Go ahead, ask your algorithm ‘why’.
  • Models don’t have to be monolithic
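
As an illustration of the finite-state-machine flavour of ATD, here is a small sketch of my own in Python: a tiny, invented login-flow model, and a generator that produces one test case per transition by walking the shortest path to that transition and then taking it. It is deliberately naive (a real tool would optimize the walks), but it shows what “a program produces the test cases” means.

```python
from collections import deque

# Invented model of a login flow: state -> list of (action, next state).
MODEL = {
    "start": [("open login page", "login")],
    "login": [("submit good credentials", "home"),
              ("submit bad credentials", "login")],
    "home":  [("log out", "start")],
}

def path_to(model, start, target):
    """Shortest list of actions that gets from start to target (BFS)."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        for action, nxt in model.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None

def generate_tests(model, start="start"):
    """One generated test per transition: reach its source state, take it."""
    tests = []
    for state, edges in model.items():
        for action, _next_state in edges:
            prefix = path_to(model, start, state)
            if prefix is not None:
                tests.append(prefix + [action])
    return tests

for i, test in enumerate(generate_tests(MODEL), 1):
    print("test %d: %s" % (i, " -> ".join(test)))
```

A small change to the model (say, one new transition) produces a correspondingly small change in the generated suite, which is the “small changes in requirements must lead to small changes in models” rule of thumb above.
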
Posted on October 19, 2010 in Star West 2010 by adam

Karen Rosengren’s Star West 2010 session was about how to build a test strategy. The big thing for me is that I need to learn about Value Stream Mapping.

  • Identify the problem — capture metrics that show you have a problem and the value of solving it. A big thing here is making sure you understand not only that automation can solve a problem, but that the problem should actually be solved.
  • Symptoms are not useful in identification
  • Too often the automation team’s job is to solve your company’s equivalent of world hunger
  • Start from where you are. So where are you?
  • Value Stream Mapping – sit the team down and find out what you are doing now
  • True value is added when I’m running the tests. The setup etc. is necessary time, but not value-add.
  • Big wins are to be had automating build deployment and environment configuration. She uses Tivoli (since she works for IBM), but nicely avoided turning it into a Tivoli sales presentation — which it could have quite easily become.
  • The Star audience is largely non-technical so she pointed out you are going to need programmers.
  • Split metrics into two buckets
    • Business Impact – illustrate the business value and help sell projects
    • Operating – how well the implementation is going
  • Automation is not just test execution
  • Rank scripts by time and effort when deciding what to automate (a toy scoring sketch follows this list)
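
Here is a toy sketch in Python of what “rank by time and effort” could look like. The candidates, the numbers, and the payback-ratio formula (manual hours saved per release divided by hours to automate) are all my own invention, not Karen’s; the point is only that the ranking can be made explicit rather than argued by gut feel.

```python
# Hypothetical candidates: (name, manual minutes per run, runs per release,
# hours to automate).
CANDIDATES = [
    ("smoke test",     30, 40,  8),
    ("checkout flow",  45, 10, 24),
    ("report exports", 90,  2, 40),
]

def payback(manual_minutes, runs_per_release, hours_to_automate):
    """Manual hours saved in one release per hour spent automating."""
    hours_saved = manual_minutes * runs_per_release / 60.0
    return hours_saved / hours_to_automate

# Highest payback first: these are the scripts to automate first.
ranked = sorted(CANDIDATES, key=lambda c: payback(*c[1:]), reverse=True)
for name, minutes, runs, hours in ranked:
    print("%-15s payback ratio %.2f" % (name, payback(minutes, runs, hours)))
```
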