I was introduced to a new computing term the other day: bug safari. I wasn't convinced by the idea, but I'm keen to hear others' thoughts. Why not write a comment once you've read this article? Tell me, and your fellow readers, what you think.
I've been doing some work with a company named Space Time Research (STR). They're an Australian company who produce some rather good tabulation and visualisation software.
In a 2009 STR blog entry, Jo Deeker and Adrian Mirabelli describe how the STR quality team used a "bug safari" to enhance the quality of an upcoming release. On first reading, it sounded to me as though they had simply arranged for some people to use the software at random and deliberately try to find bugs. But reading it again more carefully, I could see elements of structure and planning, and I began to see some merit in the approach.
Conventional, structured testing focuses on test scripts that are themselves traceable to elements of the requirements and/or the specification. That way, you can be sure you have planned and scripted at least one test for each functional or design element (I shall talk about the V-model in a later blog article). On the face of it, there is no value in any further testing, since you believe you've tested everything. But software usually contains many complex execution paths, and scripted testing rarely exercises them all (testing produces confidence, not guarantees). So I can see merit in allowing users to go "off piste" with their testing and spend a limited amount of time simply using the software and trying to break it.
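To make that concrete, here is a minimal sketch in Python. The function, its requirements, and the tests are all hypothetical, not from the STR blog entry; the point is only that scripted tests with full traceability to the requirements can all pass while a path no requirement ever mentioned still misbehaves. That's precisely the sort of defect an "off piste" explorer on a bug safari might stumble upon.

```python
# Hypothetical example: every scripted test traces to a requirement,
# and every scripted test passes -- yet a bug remains.

def order_total(unit_pence: int, quantity: int) -> int:
    """Return the order total in pence; 10+ items earn a 10% discount."""
    total = unit_pence * quantity
    if quantity >= 10:
        total -= total // 10  # bulk discount, as the specification requires
    return total              # note: nothing rejects a negative quantity

# Scripted tests, one per requirement -- full traceability, and both pass:
assert order_total(500, 1) == 500     # REQ-1: standard pricing
assert order_total(500, 10) == 4500   # REQ-2: bulk discount applied

# An exploratory tester simply "trying to break it" types a negative
# quantity and discovers the system will happily issue a credit:
print(order_total(500, -2))  # -1000: a £10 refund for an impossible order
```

No requirement mentioned negative quantities, so no script ever visited that path; confidence from the scripted tests was real, but it was not a guarantee.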
As I say, testing is about producing confidence, not guarantees, and I can see that bug safaris can add to that confidence in some situations.
What do you think? Share your thoughts; write a comment...