Posted on December 31, 2008 in Uncategorized by adam

The number of people who subscribe to the RSS feed has doubled in the last month (to around 315) so I think it is time to do a quick survey to understand the readership.

Click Here to take survey

It’s only five questions, with the third and fourth being the most important.

Thanks.

Posted on December 30, 2008 in Uncategorized by adam

A recent Arming the Donkeys podcast episode with Bruno Frey covers the notion that people care more about recognition than money as a motivating factor. More specifically, they care about peer recognition rather than just generic recognition.

A different episode of the same podcast is an interview with Bobbie Spellman about her research into credibility.

  • Confidence begets greater credibility
  • A person’s past believability is a factor in their credibility
  • This works in your favor if people don’t know about your past performance

These seem to fit together pretty well. It seems to me that if you tone down your assertions (or caveat them), your long-term credibility will be upheld, which would then lead to more peer recognition.

Posted on December 30, 2008 in Podcasts by adam

Kathy Sierra spoke at this year’s Emerging Technologies conference about How to Kick Ass (which apparently starts with a great title). It was recorded. Here are the notes.

  • Where there is passion, there are people who kick ass
  • People are not passionate about things they suck at
  • People usually have to do something at her talks
  • Being better is better
  • Neurogenesis
  • Neuroplasticity
  • A common thread among people who are ‘world class’ at something is that they have put in the time and effort at that something, rather than having natural talent. (Malcolm Gladwell says 10,000 hours in Outliers.)
  • Rage to Master
  • The passionate people are the ones who spend the money (high-end lens, Pessoa saddles, etc.)
  • Do experts actually know more?
  • Chess masters remember a real board far better than amateurs do. Show them a nonsensical one and the advantage is eliminated.
  • It is not what the expert knows, it is what they do
  • Brain hacks
    • Exploit your telepathy superpowers
      • Mirror neurons (again)
      • allow us to run a simulation of another person’s brain inside our brain
      • watch people in action, not read a report
      • the effectiveness of visualization increases the more you actually do the real activity (dancer to dancer, ninja to ninja)
      • Watching people who suck is not so good
      • Better to visualize what you would see while doing the activity, not yourself doing it from a third-party perspective
    • Reduce interference
    • Control Stress
      • People evolved / survived since they are great predators
      • Being surrounded by predators is stressful
      • Stress, anxiety, etc. will almost always make you suck
      • StressEraser
    • Get to know your brain
      • If you don’t get enough sleep, your performance will be as if you were drunk
      • SLEEPTRACKER
      • Real exercise helps you give your best performance from your brain (as compared to the popular Brain Age video games, etc.)
  • Intermittent variable rewards are the most powerful motivational technique

Posted on December 30, 2008 in Video by adam

I’m trying to return to the habit of watching a video on something every couple of days (call it an early New Year’s Resolution if you must) and so I’m starting to look at what was queued up before I got too busy to do it.

Today’s video is a talk that Ross Anderson gave at Google called Searching for Evil. In it he iterates through a number of the major types of scams that are online, from phishing to 419 to Canadian pharmacies. I’m not sure what I was expecting, but it turns out that I now have the perfect video to give people who are clueless about security so they can learn how to spot these kinds of scams. (You know, the people who get sucked in by every hoax and ‘warning’ that arrives in their mailbox.) I’m not going to iterate over each scam as the video is needed to truly appreciate it, but here are the rest of my notes.

  • Ross’ website has a tonne of links about security economics
  • The underlying cause of a lot of security failures is the incentives around not addressing them
  • Adverse Selection
  • Wicked people go out of their way to get seals of approval from reputable organizations, thus making the seal of approval itself a red flag for whether something is a scam
  • To break up a system, target the bottlenecks
  • Irrevocable payments are a common denominator of evilness (such as Western Union)
  • Assumptions about identity validity / assurances are highly geographic specific (see the Chinese gymnastics team at the recent Olympics)
  • If you can program it, it is administration. Everything else is management
  • A bank is just a perl script these days
  • Never underestimate the stupidity of the public
  • Plagiarism detection is a useful tool in identifying evil sites
  • The people most trusted by the public are academics. Fool them and you inherit their trusted followers



Direct link here.

Posted on December 29, 2008 in Uncategorized by adam

I’ve started to convert the EPA’s Fuel Economy data into CSV so it can be imported nicely into our database for use in calculations. It has been nowhere near as easy as it should be. If you are providing data to customers or the public at large, please do not use the EPA as your model. The following list is my new checklist for testing data exports (and could serve as a step-by-step guide for parsing the 1978 EPA guide.)

  • Data Sets – One data set, one file. It’s simple really; do not put the data for California and the other 49 states in the same file.
  • Data Formats – One format, one file. If you feel like you have to display data in two different ways, take a step back and think about it. Either put the data into separate files for each format or create a format that works for both.
  • Portability – File dumps from your old mainframe or mini in their native format is not portable. CSV (or similar delineated file) or XML are your best choices here. .xls is also not as universally portable as business folk like; it is also not directly importable into a different system (without COM magic)
  • Format Consistency – From 1978 – 2009, there are about 4 or 5 different formats that information is provided in. Please use one format for all information. Information that is not easy to use, will not be used.
  • Embedded Data – One column, one piece of information. I understand that space used to be at a premium in systems from the late 70s through the early 90s, but there is really no excuse for overloading a single data point with multiple pieces of information. ‘2DR- 68/ 7’ tells us that the particular vehicle is a 2DR body type, has 68 cubic feet of interior space and 7 cubic feet of trunk (or cargo) space. Embedded data makes it hard to run analytics on.
  • Consistent Data – The ‘Fuel System’ section of the data tells you whether a car is FI (Fuel Injected) or gives a number, that number being the number of barrels in the carburetor. The opposite of FI is C, not a number. If you really want to tell us how many barrels there are, put that in a different column.
  • Unnecessary Characters – Things like $ are unnecessary to include in the raw data. If you really need to communicate that it is in dollars put that in the column name, or better still, use an accurate heading that makes it clear.
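To make the Embedded Data point concrete, here is a minimal Ruby sketch of the kind of splitting you are forced to write when a field like ‘2DR- 68/ 7’ overloads three pieces of information. The field layout comes from the post; the method name and the exact regex are my own assumptions, and real EPA files may vary in spacing.

```ruby
# Split the EPA's overloaded body-style field ("2DR- 68/ 7") into
# three proper columns: body type, interior volume, cargo volume.
# Whitespace handling is deliberately permissive since the source
# files are inconsistent about padding.
def split_body_field(raw)
  m = raw.match(/\A\s*(\w+)-\s*(\d+)\/\s*(\d+)\s*\z/)
  return nil unless m # field did not follow the expected layout
  {
    body_type:      m[1],        # e.g. "2DR"
    interior_cu_ft: m[2].to_i,   # e.g. 68
    cargo_cu_ft:    m[3].to_i    # e.g. 7
  }
end

split_body_field("2DR- 68/ 7") # body type "2DR", 68 cu ft interior, 7 cu ft cargo
```

If the source had kept one column per piece of information, none of this parsing (or the failure mode when the layout silently changes) would exist.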

Posted on December 24, 2008 in Uncategorized by adam

This paper was mentioned on a recent Stack Overflow podcast and, while it deals explicitly with writing a philosophy paper, it is a good read for testers as well. Since I can see the click-out rates from the analytics, I know that they are pretty bad, so I’m doing the Coles Notes version here.

It starts out immediately with a ‘context matters’ blurb:

Most of the strategies described below will also serve you well when writing for other courses, but don’t automatically assume that they all will. Nor should you assume that every writing guideline you’ve been given by other teachers is important when you’re writing a philosophy paper. Some of those guidelines are routinely violated in good philosophical prose.

I think a lot of prescriptive books out there should begin with a warning like this.

So what should a philosophy paper include? According to the author, a reasoned defense of some claim; he then gives some examples. Now think about this in the context of a bug report. Essentially the same guidelines apply.

Once you have your thesis (This is a bug because it violates the internal consistency oracle), you then develop it by:

  • Criticize that argument or thesis
  • Offer counter-examples to the thesis
  • Defend the argument or thesis against someone else’s criticism
  • Offer reasons to believe the thesis
  • Give examples which help explain the thesis, or which help to make the thesis more plausible
  • Argue that certain philosophers are committed to the thesis by their other views, though they do not come out and explicitly endorse the thesis
  • Discuss what consequences the thesis would have, if it were true
  • Revise the thesis in the light of some objection

Now read the bullets above while mentally replacing ‘thesis’ with ‘bug’. Still a pretty good fit.

Two other attributes of a paper are:

  • A good philosophy paper is modest and makes a small point; but it makes that point clearly and straightforwardly, and it offers good reasons in support of it – No Fluff, Just Stuff. IEEE 829 be darned. The length something ends up being is exactly the length it should be.
  • Originality – The nugget in this section is Your critical intelligence will inevitably show up in whatever you write.

Now that we know what should be in the paper, Major Guidelines are given:

  • Make an Outline – Make your checklists ahead of time as well as inventory the heuristics you think might apply to this testing task. Rarely do we do our best work under stress so if you can give yourself any chance of working more effectively it should be taken. This is the primary reason I do any sort of pre-test planning.
  • Make the structure of your paper clear – Focus any documentation at the correct audience; Your reader shouldn’t have to exert any effort to figure it out.
  • Be concise, but explain yourself fully – It would be rather ironic to explain this one aside from Nothing should go into your paper which does not directly address that problem and take special pains to be as clear and as explicit as you possibly can.

Those guidelines are nicely summarized as follows:
In fact, you can profitably take this one step further and pretend that your reader is lazy, stupid, and mean. He’s lazy in that he doesn’t want to figure out what your convoluted sentences are supposed to mean, and he doesn’t want to figure out what your argument is, if it’s not already obvious. He’s stupid, so you have to explain everything you say to him in simple, bite-sized pieces. And he’s mean, so he’s not going to read your paper charitably.

The rest of the paper is pretty domain specific but does have a couple nuggets in it:

  • Try to anticipate objections to your view and respond to them
  • Your paper doesn’t always have to provide a definite solution to a problem, or a straight yes or no answer to a question
  • It’s OK to ask questions and raise problems in your paper even if you cannot provide satisfying answers to them all
  • There’s no need to warm up to your topic. You should get right to the point, with the first sentence.
  • You may use the word “I” freely, especially to tell the reader what you’re up to
  • Even professional philosophers writing for other professional philosophers need to explain the special technical vocabulary they’re using. Different people sometimes use this special vocabulary in different ways, so it’s important to make sure that you and your readers are all giving these words the same meaning. Pretend that your readers have never heard them before.
  • Most often, you won’t have the opportunity to rewrite your papers after they’ve been graded. So you need to teach yourself to write a draft, scrutinize the draft, and revise and rewrite your paper before turning it in to be graded.

So good. Though the tester in me feels compelled to point out that the HTML, specifically the ol tags, appears not to be coded correctly in the paper.

Posted on December 23, 2008 in Uncategorized by adam

I used to periodically clean out my mailing list boxes and post the bits of messages I thought were interesting. Here is June 2008 to now of the Agile-Testing one.

  • Capturing Requirements
    • There is very little you can do to capture requirements up front – correctly. Instead we embrace change and work in small increments, getting constant feedback from the customer. … We can’t get it right up front because we didn’t understand. – Mark Levison
    • Fail Fast, High Feedback loops, Adapt to change – Matt Heusser
    • And then have the customer actually use the resulting working software right after each aspect of the feature is implemented to verify that the implementation is what they want and need. – Steve Gordon
    • … we don’t have to capture them all from the get-go, but we can make discoveries about what’s important (and what isn’t) as we go. – Michael Bolton
  • Name magic is very powerful, for good and for ill. – Michael Bolton
  • A Selenium/FIT mashup: fitinum
  • ShuHaRi
  • … you might be able to do some more proactive testing by scanning the CSS or inline “style” attributes for potential issues. The incompatibilities are relatively well-known – if you could parse the styles for “danger signals” that would give you hint on where to look. – Dave Rooney on cross-browser testing
  • Testing Flex Apps
  • Wisdom begins when we discover the difference between “That makes no sense” and “I don’t understand”. – Mary Doria Russell (via Ron Jeffries)
  • There’s the stuff we know we know (addressed by unit and functional tests), the stuff we know we don’t know (addressed to some extent by integration and smoke tests) and the stuff we don’t know we don’t know (addressed by exploratory testing) – Rumsfeld as interpreted by Titus Brown
  • Talking about Guidelines
  • Agile Acceptance Testing Video
  • Scott Ambler’s The Role of Testing and QA in Agile Software Development
  • Strangler Applications paper by Mike Thomas
  • Some things that make up the Testing Mindset (from Michael Bolton)
    • to think critically about the software
    • to remain doubtful and cautious in the face of certainty from others;
    • to question, rather than to confirm
    • to assist the team in not being fooled
    • to help in identifying factors other than functional incorrectness that can threaten the value of the product
    • to help in identifying other usage modes of the product that can add to its value
    • inventing and/or creating tools that can help to evaluate the product more quickly
    • to dig up and expose assumptions that have been buried and that might be invalid or inconsistent
    • to recognize that inductive proof that “it works” (lots of passing tests) isn’t proof at all
    • to keep searching for non-white swans, even when all our tests seem to show that all swans are white
    • to recognize that things can be different
  • Beginning with a modest goal and refactoring as I went along, I was able to construct a harness that was just powerful enough for the task at hand but flexible enough to grow to meet our future testing needs. – Kevin Lawrence

Posted on December 22, 2008 in Podcasts by adam

Another podcast which would not seem to be interesting to testers, but could be is Dell in Biotech.

  • Lab notebooks don’t scale – The big thing in the testing world seems to be carrying around moleskine notebooks. But any information in those is potentially lost when you leave one at the hotel, or is inaccessible if it is in your study in Toronto and you are in Seattle. And of course it is private. Private notes are certainly necessary, but some of the stuff really wants to be public. Wiki! Blog! Now!
  • Chronology is subjective – Most of us work on multiple things/projects/whatever at one time. Our thoughts and discoveries are chronological to us, but since they are spread out and interleaved between multiple items they are not chronological to that thing.
  • Hardware + Software + Services = Solution – I’m sure there is a testing linkage here somewhere, but I can’t find it right now. When they are talking about this they mention that customers don’t want finger pointing between vendors which seems like it might be close to what I am grasping for here.
  • CFR 21 Part 11 – Practically speaking, Part 11 requires drug makers, medical device manufacturers, biotech companies, biologics developers, and other FDA-regulated industries, with some specific exceptions, to implement controls, including audits, system validations, audit trails, electronic signatures, and documentation for software and systems involved in processing electronic data that are (a) required to be maintained by the FDA predicate rules or (b) used to demonstrate compliance to a predicate rule. (source: Wikipedia)
  • Scientist wants – This has been mentioned before, but to a research scientist, they put a sample in and they want information out. And they are not necessarily so concerned about what happens in the middle … they are more interested in the result of their … hypothesis
  • Virtualization – a nice description of what it is and why you need to care
  • Secret Handshake – If you don’t talk science, and research and data you won’t be able to work successfully with the scientific community

Posted on December 22, 2008 in Uncategorized by adam

We are, at our core, essentially a data company. My focus naturally turns to the quality of our data a fair bit. In the last week or so I have been looking at the data we have around car emissions and got a little freaked out. But with a little digging I realized the origin of the problem is the source data itself. Here are the, let’s call them lessons, I’ve experienced since I started looking at this issue.

Lesson One: Your data is only as good as what it is made of

  • The US, Australian and UK data are all in completely different formats
  • All three release information on different schedules
  • Only one (the Australian one) makes it available in an easy programmatic manner
  • The UK data does not explicitly list the model year (just ‘since model year 1995’ as part of the description)

Lesson Two: Look at the code for quality smells

  • To go MVC for a second, if your model has a switch regarding a database column, then that is a bad smell
  • I’ll be looking at the ‘Fuel Type’ and ‘Transmission’ columns first; they’re rank
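A minimal sketch of the smell described above, using a plain-Ruby stand-in for a model class; the class, method, and the specific fuel-type codes are hypothetical, chosen to echo the FI/C/barrel-count mess from the EPA data.

```ruby
# The smell: a model that has to switch on raw codes stored in a
# database column, because the column overloads several meanings.
class Vehicle
  attr_reader :fuel_type # raw code straight from the source data

  def initialize(fuel_type)
    @fuel_type = fuel_type
  end

  # Every raw variant ("FI", "C", a bare carburetor barrel count
  # like "2" or "4") leaks into the model. Each new source format
  # means another branch here.
  def fuel_injected?
    case fuel_type
    when "FI" then true
    when "C"  then false
    else false # assume a barrel count means carbureted
    end
  end
end
```

Normalizing the column at import time (one meaning, one column) would let the model drop the switch entirely, which is exactly why that column is first on the list.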

Lesson Three (A): Define how you want the data to look

  • Right now I’m trying to figure out which information is common to all three sources
  • I’ll use that as the grounds for massaging our data into some common order

Lesson Three (B): But only with serious consultation

  • I could ‘fix’ the issue with the ‘Fuel Type’ column right now, but I would likely just break a bunch of stuff
  • I know which stuff is most at risk, but I’d rather not do it in the first place
  • So I’ll be bouncing things off the environmental scientists that use this data

Lesson Four: Big Bang is not just bad for deployments

  • I have identified 5 or 6 different things that affect the quality of this data, and all will be fixed. Eventually.
  • One problem requires one identifiable, rollback-able fix

Lesson Five: Best offense is a good defense

  • Of course, I don’t want to have to do this again, so I’ll have to identify and implement the proper checks for when the data gets updated to make sure I don’t have to pay (as close) attention to this area in the future.
  • This may not be as easy as it seems as we expand our offerings and services. Is it in Java? Rails? Right in the database? All three?
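Wherever those checks end up living, the shape is the same: validate each incoming row before it reaches the database. Here is a sketch of one such check in Ruby; the field names, the allowed fuel types, and the 1978–2009 year range are assumptions for illustration, not the real schema.

```ruby
# Defensive import-time check: reject rows whose fields fall
# outside what we already know to be valid, so bad source data
# can never silently land in the database again.
ALLOWED_FUEL_TYPES = %w[gasoline diesel electric hybrid].freeze

def row_errors(row)
  errors = []
  unless ALLOWED_FUEL_TYPES.include?(row[:fuel_type])
    errors << "unknown fuel type: #{row[:fuel_type]}"
  end
  unless (1978..2009).cover?(row[:model_year].to_i)
    errors << "model year out of range: #{row[:model_year]}"
  end
  errors # empty array means the row is acceptable
end

row_errors({ fuel_type: "gasoline", model_year: "2008" }) # => []
```

The payoff is that the next format change from the EPA fails loudly at import, instead of quietly corrupting the calculations downstream.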

Lesson Six: Thank goodness I can script

  • I first learned Perl when I needed to hack up a data file. This is no different a task, except this time I’ll likely do it in Ruby.
  • It would be really nice if all these organizations would decide on a standard information set and format and make use of it.
  • Even within the same organization from year-to-year; I need to create 5 or 6 different scripts just to get a common representation of EPA data alone.
  • Not to mention the UK or Australia or ???

Lesson Seven: Null is still evil, but sometimes necessary

  • Null is Evil; there is usually a better value to put in
  • But when you are backporting a common view to existing data you might not know what the better value is
  • Better does not mean magic
  • The code might be making use of that null somewhere that you don’t (yet) know about
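A small Ruby sketch of the null trade-off above: when backporting, keep nil for genuinely unknown values rather than inventing a magic sentinel like -1 or ‘N/A’, since downstream code may already branch on nil. The method and field names are hypothetical.

```ruby
# Normalize a raw cargo-volume field from old source data.
# Unknown or garbage values come back as nil on purpose: nil says
# "we don't know", while a made-up sentinel would quietly lie.
def normalize_cargo_cu_ft(raw)
  return nil if raw.nil? || raw.strip.empty? # genuinely unknown
  Integer(raw, exception: false)             # nil again if garbage
end
```

When the environmental scientists later tell us the right default (if there is one), it is a single, reviewable change here instead of a hunt for scattered magic values.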

Posted on December 22, 2008 in Podcasts, Uncategorized by adam

Earlier this week I listened to an interview with Herbert Needleman. You wouldn’t think it would be that interesting a topic, but it is a pretty amazing story he weaves. Aside from bringing up the question of how people choose their careers, there were a few points relevant to testing.

  • Do good science (testing)
  • Purchasing opinions from ‘experts’ is Bad
  • Make sure things are defendable
  • And not vulnerable to attack
  • Go where the data takes you
  • If you believe in your work, you have to express it