Randy Pausch is a professor at CMU whose work and life are going to be a legacy few will be able to match. This is in the pseudo past tense because he has been diagnosed with terminal cancer. His last lecture was entitled 'Really Achieving Your Childhood Dreams', and if it wasn't for the context in which it was delivered it would have been the best keynote speech I have ever seen (in person or online).
It is, of course, not directly related to testing, but I'll stretch a definition to the breaking point and say that it is about Quality of life and how to live it to the fullest. I had done my usual trick of taking notes on the interesting things that might be worth referencing later, but I've decided not to post them. Instead, I'm going to use my 2007 'watch this' card: if you only watch one video I link to in calendar year 2007, this is the one. Be warned though, if you have kids, a spouse, a mentor or a mentee, there are parts that will rip you apart. But in the end, it is worth it.
(You would think that they would have the screen freeze on a picture of Randy — seems like a bug)
Direct link here, and there are standalone versions floating around too if you look.
Risto Kumpulainen's presentation was one of the more pure-ish 'experience report' presentations of this year's GTAC. I had originally scoffed at the title, figuring there was nothing relevant to me in it, but I think it is one of the more useful ones for people who are actually tasked with creating automation frameworks.
- Be sure you can answer ‘what problems are we solving with automation?’
- They test a lot of different Linux variants and found that 20-40% of bugs are platform specific — which is a pretty damning number if you think about it
- Rather than a big 'write a framework' project, they used an iterative development model where each iteration was about one month long and had clear value expectations
- The first step Risto took when going down this route was to make all existing scripts return the same values for pass / fail
- Different tools connect to the test runner through a simple API — thus making it a metaframework (a rough sketch of the idea follows this list)
- The payoff for automation was in maintenance releases, not the initial one
- Each release of the automation frameworks had specific targets and actions
- When returning responses, colours count (keep the bar green)
- Hourly builds with system level smoke tests fixed the issue of broken builds for them
- The automation framework is integrated with the version control system
- They have an LED message board that displays the build status
- Log build and test results to a database for generating statistics on builds
- The controller doesn't know anything about the tests (and vice versa) – this is something I'm struggling with in my current framework
- Moose Test
- Most testing is done on virtual servers (not real hardware)
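To make the 'simple API' point concrete, here is a minimal sketch of a controller driving tools through a common adapter that normalizes everything to the same pass/fail values. The class names and the script path are mine, not Risto's.

```python
# Hypothetical metaframework adapter; tool names and conventions are invented.
import subprocess

PASS, FAIL = 0, 1  # every adapter normalizes results to these two values

class ShellScriptAdapter:
    """Wraps a legacy test script so the runner only ever sees PASS / FAIL."""
    def __init__(self, script_path):
        self.script_path = script_path

    def run(self):
        result = subprocess.run([self.script_path])
        return PASS if result.returncode == 0 else FAIL

class Runner:
    """The controller only knows the adapter API, not the tools behind it."""
    def __init__(self, adapters):
        self.adapters = adapters

    def run_all(self):
        return [adapter.run() for adapter in self.adapters]

if __name__ == "__main__":
    # "./smoke_test.sh" is a stand-in for any existing script.
    runner = Runner([ShellScriptAdapter("./smoke_test.sh")])
    print(runner.run_all())
```

The payoff is that adding a new tool means writing one more small adapter, not touching the runner.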
Direct link is here.
Douglas Sellers was at GTAC representing CustomInk and was talking about how they implemented a DSL on top of their Selenium tests. The CustomInk application makes heavy use of AJAX for designing your apparel. DSLs seem to be the next big wave of hype to follow the RoR one. I can certainly see their place in testing though, especially having written a bunch of Selenium RC tests recently. My one complaint about the quality of the video is that the code he shows the audience, the code that implements the DSL, does not show up on screen. But that is not too much of a problem, as it appears Martin Fowler's new book will be on DSLs in Ruby (so sayeth the Internet, which is never wrong).
- Writing Selenium tests is hard and you have to build something else on top of it
- Use Selenium RC under the hood to distribute to lots of browsers at once
- Using a DSL you can roll up lots of complexity into single commands (a rough sketch follows this list)
- Make the DSL read as much like sentences as possible
- Emphasis should be on the ease of test writing, not on the ease of maintaining the DSL
- Keep the technical complexity out of the DSL
- When div ids or JavaScript creep into the DSL, it is time for more commands
- Lessons learned from creating a DSL
- Make the language look just like your business
- Write your tests / syntax first, then create the backend stuff
- Structure your logs to generate dsl scripts to help reproduce user errors
- A commentary on Selenium – XPath locators are slow; use CSS selectors for a 40% speed increase (or at least he did)
- waitForElement and waitForCondition (both Selenium commands) are not designed to do async testing
- Remember that the dsl is a living piece of code and needs to be treated as such
- Another Selenium comment (or more accurately a browser one) – The CustomInk tests take 2 hours in IE6 but only 45 minutes in Firefox
- There seems to be some disagreement over whether DSLs are for business people to create tests with, or just to read them and point out the places you are missing
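To make the sentence-like-commands idea concrete, here is a minimal Python sketch written against the old Selenium RC client. The DesignStudio class, its locators and the URL are my own invention, not CustomInk's code.

```python
# Sketch of a small testing DSL layered over Selenium RC.
# The page locators and helper names here are hypothetical.
from selenium import selenium  # old Selenium RC Python client

class DesignStudio:
    """Rolls several low-level browser commands into one readable step."""
    def __init__(self, browser):
        self.browser = browser

    def add_text(self, text):
        # The div ids / CSS selectors stay down here, not in the tests.
        self.browser.click("css=a.add-text")
        self.browser.type("css=textarea.design-text", text)
        self.browser.click("css=button.apply")

def test_reads_like_a_sentence():
    browser = selenium("localhost", 4444, "*firefox", "http://example.com/")
    browser.start()
    studio = DesignStudio(browser)
    studio.add_text("GTAC 2007")
    assert browser.is_text_present("GTAC 2007")
    browser.stop()
```

When selectors start showing up in the test functions instead of the DSL layer, that is the signal to add another command.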
Direct link here
Ryan Gerard has certainly been making the conference rounds this year. GTAC was the third conference this year where he was a speaker (that I know of and have video proof of). His and Ramya Venkataramu's presentation was on applying a reputation / trust scale to testers and their test efforts (sorta like 'karma' on /.). It's a novel idea, but I don't think it has too much real world application. As one person in the crowd pointed out in the Q&A portion, the reputation should be applied to the test, not the tester.
- Test Hygiene refers to how useful and effective a test is
- In this reputation system, what is a Good Test?
- Useful (as judged by the community)
- Easily reproduced and executed by people other than the original author?
- Well commented
- Accomplishes what it states it will accomplish
- They mine the test support system (version control, bug tracker, test case management) plus give credit for certain tester actions. These all have different weights in their algorithm (which is also shown, but didn't interest me; a rough sketch of the weighting idea follows this list)
- Attributes from TCM
- Number of TC written
- Number of TC automated
- Number of TC executed
- Age of test cases (old / less updated have greater chance of being stale)
- Attributes from version control
- Number of checkins
- Fuzzy test Actions that are rewarded
- Execute others’ tests
- Rating others' tests
- Fixing others’ documentation
- Commenting on a test (in order to build community)
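The algorithm itself didn't interest me, but the shape of it is easy to show. Here is a rough sketch of weighting mined attributes into a single score; the weights and attribute names are made up, not the ones from the talk.

```python
# Hypothetical reputation score: a weighted sum of mined attributes.
WEIGHTS = {
    "tests_written": 1.0,
    "tests_automated": 2.0,
    "tests_executed": 0.5,
    "others_tests_executed": 1.5,  # "fuzzy" actions earn credit too
    "comments_left": 0.5,
}

def reputation(attributes):
    """attributes: dict of counts mined from the TCM / version control."""
    return sum(WEIGHTS.get(name, 0) * count
               for name, count in attributes.items())

print(reputation({"tests_written": 40, "tests_automated": 12, "comments_left": 7}))
```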
Direct link here.
Vivek Prahlad is yet another person associated with ThoughtWorks (but seriously, who isn't these days?) and is the brains behind Frankenstein. The best way to describe it is as something like Selenium, but for Java Swing apps. It's been awhile since I've automated thick client stuff, so it wasn't all that relevant to my circumstances, but here are some takeaways I had which can apply to any automation project.
- Automate things to run as fast as possible, but no faster
- One cool feature of Frankenstein is that when there is an assertion failure, it takes a screenshot of the entire desktop
- Functional tests contain the intent of what you want to test
- Have a naming convention for worker threads. Then tools like Frankenstein can watch those threads to see if they are finished instead of relying on timeouts (a rough sketch of the idea follows this list)
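That last point applies outside of Swing too. Here is a rough Python sketch of the idea; the thread name and prefix are arbitrary.

```python
# Sketch: wait on named worker threads rather than sleeping for a fixed timeout.
import threading
import time

def background_job():
    time.sleep(1)  # stand-in for real asynchronous work

threading.Thread(target=background_job, name="worker-refresh").start()

def wait_for_workers(prefix="worker-"):
    """Block until every thread whose name starts with the prefix has finished."""
    for t in threading.enumerate():
        if t.name.startswith(prefix):
            t.join()

wait_for_workers()
print("all workers finished")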
Direct link is here
Alex Aiken is the advisor for the work done by Mayur Naik on Chord, which according to the site is a state-of-the-art static race detection tool for Java. This is cool because only concurrent programs will really make use of the current (and future) generations of multi-core, multi-processor machines. So if you are writing Java code which does any sort of concurrency, you owe it to your customers to spend the hour watching the video and see if you can get any value out of this tool (which as of this moment is not publicly available, but they say 'end of summer 2007', and it is barely that).
Not too many notes on this one because a lot of the discussion focuses on methods / logic and algorithms which don’t really interest me.
- Race – the same location may be accessed by different threads simultaneously and at least one access is a write (a small example follows this list)
- There is around a 20% false positive rate (which seems about the average)
- Not all races are bad (there is that whole ‘context matters’ thing again), sometimes they are intentional
- The number of bugs Chord finds is always less than the number of races because there is by definition more than one way to trigger one
- Because it is a static tool, you can use it on 3rd party libraries that your application might be dependent on
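Chord itself is Java-only, but the definition in the first bullet is easy to demonstrate in any language. Here is a small Python example of a race on a shared counter: two threads write the same location without a lock, so updates can be lost.

```python
# A deliberate data race: both threads read-modify-write the same counter.
import threading

counter = 0

def bump(times):
    global counter
    for _ in range(times):
        counter += 1  # read, add, write: not atomic

threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # can come out less than 200000 because updates get lost
```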
Cedric Beust is the creator of TestNG, which he wrote in response to his experiences using JUnit. He is also now a Googler, which shows that a different way to end up working for them is to invent something cool, useful and scalable.
When I first read the TestNG site, I didn't see what was so cool that you would want to migrate away from the tried-and-true JUnit, but after this talk I think it is certainly worth exploring if you are running Java 5 or greater. If you are still using Java 1.4, it is probably worth just sticking with JUnit, as he admits that TestNG is kind of sketchy in that environment.
There was a lot of stuff that would be of interest to developers (or testers who have a more java-centric slant than I have), but as an influencer, I found the following interesting.
- JUnit's design forces test independence (by re-instantiating the test class for each run)
- This forces certain design considerations
- Are you testing your code, or are you JUnit-testing it?
- Annotations are cool – the closest thing we have in Python 2.x is decorators, but annotations are coming in Python 3 (a decorator sketch follows this list)
- Only available in Java — so in theory one could use this with Jython. Very interesting
- There is a built-in JUnit mode for ease of conversion
- Cedric has a book coming out – Next Generation Java Testing: TestNG and Advanced Concepts
- There is work on a distributed TestNG where slave machines take independent tests and collate the results – very metaframework-ish
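Since the closest Python 2.x analogue to annotations is decorators, here is a rough sketch of what a TestNG-style @Test marker with groups could look like as a decorator. The registry and function names are mine, not anything from TestNG.

```python
# Hypothetical decorator playing the role of TestNG's @Test annotation.
REGISTERED_TESTS = []

def test(groups=()):
    """Mark a function as a test and record which groups it belongs to."""
    def mark(func):
        REGISTERED_TESTS.append((func, tuple(groups)))
        return func
    return mark

@test(groups=("smoke",))
def login_works():
    assert 1 + 1 == 2

def run(group):
    """A tiny runner that only executes tests in the given group."""
    for func, groups in REGISTERED_TESTS:
        if group in groups:
            func()
            print(func.__name__, "passed")

run("smoke")
```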
Direct link here.
I was originally slated to beta test Matt and Sean's GTAC presentation, but due to scheduling difficulties on both ends that didn't happen. So even though this was not on my original list of ones to watch, it was on that list by the time GTAC came around. The original title was Interaction Based Testing, but in posts on the Agile Testing mailing list Matt is referring to it now as Isolation Based Design, though I think I might have renamed it to Mock 101. Aside from some serious volume differences between Matt, Sean and the mic in the audience, the presentation is a really good introduction to the concept of Object Doubles (Mock / Stub / Spy / Dummy).
Notes:
- Any good idea can be implemented poorly
- Balance your test strategy (balanced breakfast pattern) with an appropriate mix of techniques
- When you switch contexts, but don’t switch strategies, things get muddled
- Test subsystems in isolation, not individual classes
- Build facades, then mock out the facades (a minimal sketch follows this list)
- The way to use mocks is dependent on your context (Boeing vs web 2.0 startup)
- Do what is right for you, not just because Matt (or anyone else) said to do it
- Mocks are not about speed; they are about removing relationships from the code
- There is some value to having cascading test failures as it showcases your dependencies
- Ask: How is this returning for me? What is my ROI like? Is this working for me?
- The Humble Dialog Box
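To put 'build facades, then mock out the facades' into code, here is a minimal Python sketch; the PaymentGateway facade and the hand-rolled stub are invented for illustration.

```python
# Sketch: test a subsystem in isolation by substituting a double for its facade.
class PaymentGateway:
    """Facade over the real (slow, external) payment service."""
    def charge(self, amount):
        raise NotImplementedError("talks to the outside world")

class StubGateway(PaymentGateway):
    """Test double: records calls and returns a canned answer."""
    def __init__(self):
        self.charges = []

    def charge(self, amount):
        self.charges.append(amount)
        return "approved"

def checkout(gateway, cart_total):
    return gateway.charge(cart_total) == "approved"

def test_checkout_charges_the_cart_total():
    gateway = StubGateway()
    assert checkout(gateway, 42)
    assert gateway.charges == [42]  # interaction-based assertion

test_checkout_charges_the_cart_total()
```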
Direct link here
Here are links to the lightning talks from GTAC2007 (or at least the ones that landed in my RSS box). It looks like if speakers went over their time limit they got pelted with rubber balls…
- Bob Cotton – Ruby Tools for Building Web Testing Frameworks
- Jeanne Boyarsky – Automated Defect Prevention
- Sean McMillan – Taming the Beast: Getting Ugly Code Under Unit Test. (A True Story)
- Steps he took..
- Make the output safe
- Diff production output file against test output file (a rough sketch of this step appears after the list of talks)
- De-globalize variables
- Parameterize and collapse functions
- Write unit tests
- Guiding principle: Get a test harness that is as cheap as possible and chip away at it until it (the code) doesn’t suck anymore
- Robert Sayre – Don’t Break the Web Automated
- Accept test suites for people's applications
- slides – lots of tools at the bottom
- Steve Freeman – Listening to Code Smells (What Bad Unit Tests are Telling You About your Code.)
- Keep knowledge local
- If it’s explicit you can name it
- More names means more domain information
- Pass behavior, not data
- Ed Keyes – Sufficiently Advanced Monitoring is Indistinguishable from Testing
- Use a random sampling of queries to system and follow it through the system and validate it
- Logs (even at google) are underused testing resource
- 'This happened on 10% of user queries' beats 'this happens to me'
- Every time he has deployed monitoring in this way he has found a nasty bug
- John Thomas – DSL: Not Just a Business Language!
- Use a DSL to abstract out dependencies on test frameworks allowing you to switch between Selenium, WebDriver, next cool thing
- Jason Huggins & Simon Stewart – Selenium-RC Vs WebDriver (see what happens when you give a room full of geeks projectiles)
- Bill Schongar – Making Automated Testing Less Frightening For New Testers, Non-Testers and Developers
- Understand 3 things about your users
- Mindset
- Workflow
- Problems they are facing
- Test via social engineering
- Put users on the slippery slope to automation
- Steve Kosanovich – Code Free, Web Regression Testing
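Sean's 'diff production output against test output' step is essentially a golden-master check; here is a rough sketch of that one step in Python (the file path is made up).

```python
# Hypothetical golden-master check: compare current output to a saved known-good file.
import difflib
from pathlib import Path

def golden_master_check(current_output, golden_path="golden/report.txt"):
    """Raise if the current output has drifted from the recorded golden master."""
    golden = Path(golden_path).read_text().splitlines()
    current = current_output.splitlines()
    diff = list(difflib.unified_diff(golden, current, lineterm=""))
    if diff:
        raise AssertionError("output drifted from the golden master:\n" + "\n".join(diff))
```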
When I was at HP, I'd sometimes be able to core dump our application. Of course, when you are logging the corresponding bug, the developers really like it when you include the stack trace to help them isolate the problem, and the way we would get that was through gdb. This gave me a peek at what the system was doing. Giving me peeks is dangerous. I eventually figured out how to attach to the running process and see what the system was doing.
Now, depending on where you draw the boundary of responsibility of the tester, knowing how to use the debugger might be part of their job description or might just be a fun, geeky thing to do.
Bryan Cantrill works for Sun and appears to be the grand high wizard of DTrace. (He is also a Beautiful Code contributor.) DTrace appears to be a debugger for running processes, but on steroids. I was hoping that this would be a DTrace overview / how-to, but instead it is Bryan showing off his favorite toy. At first I wasn't impressed, but it ends up being really entertaining, with little bits of geek history and insight into the guts of Solaris, which makes it worth watching even if you are not going to learn DTrace from it. (Oh yeah, and he types really well; yes, I have typing envy.) Here are the non-DTrace notes I took:
- Use prstat, which is better than top on Solaris (and might explain why top is not installed on Solaris by default)
- We’ve gotten sloppy in our coding practices because system resources are so cheap and plentiful. Bryan uses the phrase embarrassment of riches
- In the server room, you do work when there is work to do. The end. Desktop apps tend to break this (see previous point)
- A debugging technique: when a tool (mpstat, top, prstat) produces columnar output, work on getting the columns all nicely lined up. Once you get the system in that state, work on the original problem.
- Anonymous memory is ‘cheap’, but performance is not a good reason to use it. malloc is more effective in this context
- There is even instrumentation for Python
Direct link here.