I didn’t attend too many sessions at this year’s CAST; in fact, I only have notes on two. (I was in a meeting in the office for another, presented in two others, and was busy being ridiculously nervous during a third.) I know enough people in the community now that conferences like CAST aren’t really about the sessions anymore; they’re about the conversations about testing that happen in the hallways and over meals.
One session I did attend, presented by Diane Kelly and Rebecca Sanders, was about testing software written by and for scientists (something Greg has me keeping an eye on).
- Scientists don’t usually know software engineering terminology, and software experts don’t (usually) have Ph.D.s
- One medical device programmer would actually scrub in for surgeries to see how their device was used. (Yet another reason why I won’t test medical software.)
- Knowledge exchange is critical for successful testing
- Oracles used by scientists (a sketch of the benchmark style follows this list)
  - Professional judgement
  - Data-based
  - Benchmarks (comparing against the output of other algorithms)
- The code is seen as irrelevant to correctness; only the model matters
  - Which leads to scientists being unconcerned with the quality of the code
- Usability is addressed through documentation rather than through testing aimed at improvement
- Theory risks > Usability > Code risks
- Testing strategies in one [scientific] domain do not necessarily apply to others
- Scientists cannot separate the model from the code that produced it. Soooooo, if you test the model, you have tested the code.
- Scientists and developers tend not to trust each other anymore
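Since the benchmark oracle came up, here’s a minimal sketch of what one might look like in Python: checking a candidate implementation against a trusted reference algorithm within a tolerance. The function names and the tolerance are my own for illustration, not something from the session or the paper.

```python
import math

def benchmark_oracle(candidate, reference, inputs, rel_tol=1e-6):
    """Benchmark-style oracle: compare a candidate implementation against
    a trusted reference algorithm and collect the inputs where they
    disagree beyond the tolerance. (Hypothetical helper, for illustration.)"""
    failures = []
    for x in inputs:
        expected = reference(x)
        actual = candidate(x)
        if not math.isclose(actual, expected, rel_tol=rel_tol):
            failures.append((x, expected, actual))
    return failures

# Example: a crude approximation of exp() checked against the library version.
def fast_exp(x):
    return (1 + x / 1024) ** 1024  # limit-based approximation of e**x

disagreements = benchmark_oracle(
    fast_exp, math.exp, [0.1 * i for i in range(20)], rel_tol=1e-3
)
for x, expected, actual in disagreements:
    print(f"x={x:.1f}: reference={expected:.6f}, candidate={actual:.6f}")
```

Notice that this only tells you the two algorithms agree, not that either is right; that’s exactly why the theory risks outrank the code risks above.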
Kinda makes you wonder about all the science we rely on. You can read the full paper for all their research and findings.