Posted on July 31, 2007 in Quality by adam

I think my first introduction to the idea of software testing was in an old Nintendo Power magazine article in which they talk to the folks who test the games. (And by old I mean ~1990.) For a while I thought it would be a great career path and took meticulous notes while dungeon crawling, etc.

Time passes and I am very glad I didn’t get into testing cartridge-based games; I like the ability to patch things too much. Anyhow, via Jen Kilmer comes an article called Testing Video Games Can’t Possibly Be Harder Than an Afternoon With Xbox, Right?

OUCH! Certainly makes QA seem like the bottom of the totem pole, all right. I’m actually more amazed that that type of testing is even done here. That kind of boring, repetitious testing is exactly the type of thing that can (and should) be outsourced. That way, those of us who actually know what we’re doing and provide higher value to our employers can start to improve the image of the profession.

Posted on July 31, 2007 in Python, Quality by adam

One tool we used to win the CAST2007 testing competition (before we were disqualified…) was log-watch by James Bach. Log-watch is a Perl script which watches a log file for instances of a string or regex, and when it encounters one it plays a sound, allowing you to actively, yet passively, monitor a log file. It does, however, have two hits against it: first, it’s in Perl <grin>, and second, it is Windows-only. Three hours and a couple of false starts later, I can now announce pylog-watch.

Pylog-watch is a straight port of the original log-watch script to Python. While the audio works only on Windows at the moment, there are hooks for audio on Linux, FreeBSD and Solaris; I just don’t have access to any of those machines right now, so I didn’t finish them.
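If you just want the gist without reading the script, the core of the idea is a tail -f style loop with a regex check. Here is a rough sketch (not the actual pylog-watch source; the file name, pattern and polling interval are invented for illustration):

    import re
    import sys
    import time

    def watch(path, pattern):
        """Follow a log file and sound an alert whenever a new line matches."""
        regex = re.compile(pattern)
        log = open(path)
        log.seek(0, 2)  # jump to the end of the file, like tail -f
        while True:
            line = log.readline()
            if not line:
                time.sleep(0.5)  # nothing new yet; poll again shortly
                continue
            if regex.search(line):
                if sys.platform == "win32":
                    import winsound
                    winsound.MessageBeep()  # audible alert on Windows
                else:
                    sys.stdout.write("\a")  # fall back to the terminal bell
                    sys.stdout.flush()

    watch("server.log", r"ERROR|Exception")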

Just as the original does not require you to install Perl to run it, I’ve made a standalone Windows executable using py2exe (which is remarkably easy to use). You can get the zip from over here.

For the more programmatically inclined, you can grab the script out of svn here or view it below the cut.
(more…)

Posted on July 30, 2007 in Housekeeping by adam

Now that the site has landed in its final home, the shirts have been updated. I’ve also added long-sleeve ones for those chilly winter months.

To paraphrase King Apparatus; buy my stuff, make me rich.

Posted on July 28, 2007 in Quality by adam

Karen N. Johnson’s most recent post discusses types of passwords and their strengths. That triggered some memories.

Before being sold to HP, I worked for Baltimore Technologies (the world’s largest certificate authority vendor outside of the US during the crypto export restriction years). During the official training they taught a trick for creating ‘secure’ passwords (ones that require letters, numbers and a symbol), such as in CAs. It was word, then number, then the symbol above that number: password3#, for example. It would be interesting, if not scary, to know how many CAs have their private keys protected by a ‘secure’ password like that, especially in light of the second story.

About 10 to 12 years ago, when Cisco was first rolling out their security products, I attended a sales seminar with one of their penetration team leads. He had at his disposal a rack of machines used for brute-forcing networks, and passwords had only ever thwarted it for 3 or 4 minutes at the most. When going against a Cisco device there was logic in the rack to try ‘frisco’ as the first password, since that was used as the example in the manuals. It worked more than 50% of the time, even on border machines.

I think where I am going with this is that when testing you have to look not only at the password algorithms but at the human element as well. Does a manual give a password example? Is there one on the screen? You might be inadvertently causing users to choose an insecure password.
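To make that concrete, the ‘trained’ pattern from the first story is trivial to flag in a password audit. A minimal sketch (my own illustration, assuming a US keyboard layout; the function name is invented):

    import re

    # Map each digit to the symbol on the same key (US keyboard layout).
    SHIFTED = {"1": "!", "2": "@", "3": "#", "4": "$", "5": "%",
               "6": "^", "7": "&", "8": "*", "9": "(", "0": ")"}

    def follows_trained_pattern(password):
        """True for word + digit + that digit's shifted symbol, e.g. password3#."""
        m = re.match(r"^[A-Za-z]+(\d)(.)$", password)
        return bool(m) and SHIFTED.get(m.group(1)) == m.group(2)

    assert follows_trained_pattern("password3#")
    assert not follows_trained_pattern("tr0ub4dor&3")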

And of course there is the question of how passwords are stored in the database. And if there is an internal password, where and how is it stored?
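When poking at that database, a salted hash is roughly the minimum you would hope to find. A sketch of the idea (illustration only, not an endorsement of any particular scheme):

    import hashlib
    import os

    def store_password(password):
        """Return (salt, digest) for storage; the plaintext never hits the database."""
        salt = os.urandom(16)
        digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
        return salt, digest

    def check_password(candidate, salt, digest):
        """Re-derive the digest with the stored salt and compare."""
        return hashlib.sha256(salt + candidate.encode("utf-8")).hexdigest() == digest

    salt, digest = store_password("password3#")
    assert check_password("password3#", salt, digest)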

Posted on July 28, 2007 in Quality by adam

Bj Rollison’s most recent post has a bit of a nugget to chew on: should test cases have a duration, or a reasonable approximation of how long it should take an experienced tester to complete a particular test?

I’m not sure. My estimates are almost always in terms of ‘duration of feature’, which is an experience-based number rather than a sum of the individual tests. I also tend to work from lists of potentially interesting testing ideas rather than a predefined set of tests, in a pseudo-session-like manner.

It’s worth thinking about though.

Posted on July 27, 2007 in Quality by adam

I’ve been saying for a while now that Firefox is one of the better tools a tester can have in their toolbox when dealing with web applications, largely due to LiveHTTPHeaders and Firebug. Looks like it is time to add another blade to the jackknife with the release of YSlow.

It looks like the rules that it is checking against form the basis of the new book High Performance Web Sites: Essential Knowledge for Front-End Engineers.

More information can be found on the O’Reilly Radar post that alerted me to it.

Posted on July 27, 2007 in Quality by adam

The August 2007 issue of Software Test & Performance magazine is available now. I had high hopes for the cover story, Sit! How to Train Your Team for Unit Testing, but the editing / packaging of the PDF was done poorly and a large section of the article is repeated. Hopefully they will address it and re-release it. The other articles have better editing and some handy (if somewhat obvious) tips as well.

Posted on July 26, 2007 in Quality by adam

This was supposed to be a series on how Google hires folks on the Test / QA side of things. After getting past the step listed below, we (my wife and I) decided it would be too disruptive to move to where a Google Engineering office is, so I ended the process there. It would have been interesting to go through the entire process for the ego boost of getting a job offer and a trip to Mountain View, but going to the park with the kid seemed like a better use of time than answering syntax questions about Python for 2 hours.


Before I start this post, I should note that Google approached me about starting the hiring process, so I’m not sure how you would get the process going without having someone already think you would make a good candidate. I also do not know whether this alters the process in any way.

The phone screen is an hour-long conversation with an internal technical recruiter. The key word in that title is technical. The person I was speaking with knew his testing philosophies and could code (we were discussing Python for a while) to a degree beyond what you could train an HR person to do. They hold the gatekeeper role because you have to convince them of your might before you enter the full process.

Google’s hiring goals right now are all about bodies. Most organizations have a role in mind when they are hiring, but Google takes a different approach in that they only determine which role you might take at the end of the process. And even then they give you a choice of 2 or 3 groups in multiple locations. This was pointed out a couple of times. Statistically, 30% of the positions will be in Search, 30% in Advertising, and the remainder in Content and Communication (Gmail, Picasa, Maps, etc.), and given that 4k of the 8k developers work out of Mountain View, the bulk of positions are there as well. That said, Google operates on a project basis, with projects lasting 9 – 12 months, so if you are not feeling the motivational love in your group there is opportunity to switch groups.

One thing that has always confused me about Google’s test group is its titles, or lack thereof. My understanding of the two main ones is now something like:

  • Software Engineer in Test – Develop testing tools and have a pretty broad application scope. Harry Robinson, the Model-Based Testing guru, is one of these.
  • Software Quality Engineer – These are people with great testing skills and a pretty good handle on at least one programming or scripting language. They are also the ones who write the test scripts for products using the tools the Software Engineers in Test produce. Jason Huggins, the creator of Selenium, is one of these, though I would have guessed he was the former.

There is a pretty fuzzy line between the two, though, apparently. It all depends on the composition of the group you are with and the testing challenges you are facing.

At this point the recruiter had me give my pitch as to what I would bring to the table, and we discussed that, clarifying some points, etc. Standard boring stuff. Blah, blah, blah.

Now the recruiter is driving again and we’re reminded that first and foremost Google wants techies who just happen to also test. Using a 0 – 10 scale, he had me rate myself on C, C++, Java, SQL, Perl, Python, Shell, Unix, OS X, Win32 (COM, etc.), networking, testing web applications, testing client-server applications, test automation, and version control (Perforce specifically). The scale they use is pretty well thought out and has some built-in BS detection:

  • 0: no experience
  • 1 – 3: Can read and understand
  • 4 – 6: Can read and understand as well as use it to create something from scratch
  • 7 – 8: Extremely proficient
  • 9: ‘Expert’
  • 10: Industry recognized expert (wrote a book, on the speaker circuit)

As you can see, the ratings are going to fit pretty nicely on a bell curve. If you claim to be an 8 or 9 in every subject, they know you are trying to con them. Same with a 10: they already know you are a 10 before asking the question. I’m guessing, too, that the desired distribution depends on the type of role you are being considered for. As someone in QA, being an expert at producing Java code from scratch is not necessarily the most important skill, but being able to read and understand what the developers have written most certainly is.

From there you move into the ‘calibration’ questions, which are designed to help select who will be in the group conducting the technical calls (there are two, with the second there to verify the findings of the first). These also give you an insight into how un-fun the technical interview promises to be.

  • Python – The difference between a list and a tuple? I guessed the right answer (mutability; see the sketch after this list) and added ‘one is useful, the other is not’
  • Python – A function with one or more yield statements is used for? Apparently the answer is a generator, which is a part of Python I had never used before
  • Python – What does xrange do? I’ve always used range, but according to this article it has to do with optimization, so it makes sense that you would use it instead of range in the Google context, given the size of their data sets
  • Unix – How do you determine the IP address of a machine?
  • Networking – What are the 3 packets used in establishing a TCP/IP connection?
  • Testing – What is code coverage?
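Here is what those Python answers look like in code. This is my own quick sketch in Python 2 syntax, which was current at the time (in Python 3, xrange is gone and range behaves like it):

    # Lists are mutable; tuples are not.
    l = [1, 2, 3]
    l[0] = 99        # fine
    t = (1, 2, 3)
    # t[0] = 99      # TypeError: tuples cannot be modified in place

    # A function containing yield is a generator: it produces values lazily.
    def countdown(n):
        while n > 0:
            yield n
            n -= 1

    for i in countdown(3):
        print i       # 3, 2, 1, without ever building a list

    # range() materializes the whole list in memory up front; xrange()
    # hands out numbers one at a time, which matters when n is Google-sized.
    total = 0
    for i in xrange(10000000):
        total += i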

As mentioned, the rest of the process is 2 technical interviews (2 hours each, with code produced using Google Docs), and if those are deemed acceptable the candidate is flown to the main Google campus for in-person interviews. And then the offer / placement process. All told, the average length is 5 – 7 weeks.

Posted on July 22, 2007 in Quality by adam

I’m not sure where I found this list, but at some point I saved a copy of it on my desktop. Seems like it might come in handy to an audience larger than just me. Below the cut is a table of 50 currencies and their common abbreviations.

(more…)

Posted on July 20, 2007 in Quality by adam

While waiting for a server to restart I went through the issues of Software Test & Performance and SD Times that were cluttering up my desktop. The non-italic stuff is direct from an article somewhere in the issue.

SD Times – June 15, 2007

  • 6th Sense is a hosted application that communicates with plug-ins that work with more than two dozen IDEs. This plug-in tracks all of the actions taken by a developer during his or her workday. Todd Olson, 6th Sense’s co-founder and CTO, explained that the product measures two types of activity in the IDE: active time and flow time. The first of those measurements monitors the actual work being done inside and outside the browser: Is the developer awake, typing, moving the mouse and interacting with the repositories? The second of those metrics kicks in only once the developer has hit his or her stride. “This comes out of a lot of academic research, which says that when a knowledge worker is focused for 20 minutes or more on a project, they’re in the flow [and] they’re fully immersed,” said Olson. “We’re measuring these flow-time units. This can also help you determine whether or not there are environmental issues” – Seriously, just how badly are these metrics going to be abused by management? I can’t imagine working in a place where the tools I use are instrumented to report back to my boss how much time I spend in each. Sure, the information would be useful if someone never hits flow, but there’s very little chance this will be used as designed.

SD Times – July 1, 2007

  • Automating the Virtual Testing Lab for Fun and Profit
  • In buying Watchfire and integrating its tools with its Rational development platform, IBM will take a leading role in proactive application security by spreading the word on why it’s essential to address security concerns early in the application life cycle, instead of simply relying on firewalls that aim to block intruders at the network door, said Lanowitz.
  • My build tool should not be determining which tools I can and cannot run.

SD Times – July 15, 2007

  • Although further security acquisitions are expected, Rende claimed that HP was not trying to become a security vendor, arguing that security assessment and vulnerabilities are synonymous with defects. “Security assessment falls under the quality side of our business.”
  • SPI’s products include DevInspect, QAInspect and WebInspect; these run throughout the life cycle of Web applications to identify security vulnerabilities and integrate with HP Quality Center software. So is Quality Center part of a metaframework?
  • Veracode has launched its Software Security Ratings Service, a service that scans binary code and benefits developers using SOA by allowing them to test code being drawn from multiple programs, the company claims. Software is tested on an A–F letter grade system, receiving three different grades based on a scan of binary code, dynamic analysis testing and manual code review that is carried out by a penetration tester

  • Apatar introduced in late June its namesake enterprise data mashup tool, with capabilities that include connectivity to Microsoft Access and SQL Server, MySQL, Oracle, PostgreSQL, Sybase and XML. It provides a single interface to manage integration projects, and can run on multiple platforms, including Linux, Mac OS X and Windows. So enterprise mashups are the Web 2.0 version of middleware?

  • SilkCentral Test Manager 2007 delivers support for virtual lab environments, and allows a user to run a test across multiple configurations in an automated fashion, while VMware Lab Manager provides the ability to create a test run that works across multiple platforms and configurations, without the need to set up multiple virtualized images
STP – July 2007
