Product Management and QA: Constraining the Test Cases, Part 1/2

I invited DJ Dunkerley back in late February to provide two posts on Project Management (there's a profile at this link). Here's another series from him, on Product Management and QA, to give you that needed edge.

As anybody who has ever tried to build test cases for an application or system of moderate complexity knows, you quickly run into the well-known multiple-branching problem. Essentially, it is very easy to generate more test cases (by tweaking different parameters) than it is logistically possible to execute. Everybody knows the tester's lament: there is never enough time to complete all the testing. The multiple-branching problem ensures that no matter how much time you give testers, they will never have enough. I will illustrate with a simple example.

Let's say you have an application with 8 options or choices (or branches) on-screen. Clicking on one of the 8 options takes you to another screen, and each of those screens presents you with an average of five choices.
Let's pretend the app goes only one level deep, i.e. once you click through to another screen, you cannot click through to yet another screen with more options.
Let's pretend the app doesn't take in external data (i.e. data is created and manipulated solely within the context of the app).
If we want to generate a test plan that covers every combination of those parameters (100% coverage), we arrive at 5 x 5 x 5 x 5 x 5 x 5 x 5 x 5 = 390,625 test cases.

Now even if you script all those test cases so they run automatically, you cannot execute 390,625 test cases in a reasonable amount of time: at one test per second, a single complete run would take about four and a half days.
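The arithmetic above is easy to verify; here is a quick sketch (the one-test-per-second rate is the assumption stated in the post):

```python
# Exhaustive test-case count for the example: 8 screens, 5 choices each.
total_cases = 5 ** 8                      # 390,625 combinations
seconds_per_test = 1                      # the post's assumed execution rate
total_seconds = total_cases * seconds_per_test
days = total_seconds / (60 * 60 * 24)
print(total_cases)                        # 390625
print(round(days, 1))                     # 4.5
```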

Of course, you can pretend that each branch has absolutely no dependencies on the other branches. Make that assumption (a big one) and you cut the cases down to 40. And that is roughly what happens in practice: most test plans constrain the test cases by limiting the number of parameters exercised in the complete system test (aka the "sanity test"). All other features are then (hopefully) tested in groups where the dependencies are obvious. The branches with hidden dependencies don't get tested, and that's where most of your bugs creep in. This happens because everybody believes you can't get anywhere close to 100% code coverage in your tests because of the branching conundrum.
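Spelling out the independence assumption numerically (a sketch of the arithmetic only, not of any particular test plan):

```python
# If every branch is assumed independent, each screen's 5 choices can be
# tested in isolation instead of in combination with the other screens.
independent_cases = 8 * 5     # 40 cases: 5 choices on each of 8 screens
exhaustive_cases = 5 ** 8     # 390,625 cases when combinations matter
print(independent_cases, exhaustive_cases)  # 40 390625
```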

However, there is a testing theory which suggests you can get upward of 90% code coverage. It's called pairwise testing, or CATS (Constrained Array Testing System). There is a web site that explains the theory in mind-numbing detail here, but in my next post I will try to explain it in plain English.
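To preview the idea, here is a minimal greedy all-pairs generator in Python. This is an illustrative sketch, not the actual CATS tool: it builds a small set of test cases such that every pair of values, for every pair of parameters, appears together in at least one case.

```python
import random
from itertools import combinations

def pairwise_tests(params, samples=50, seed=0):
    """Greedy all-pairs generator (an illustrative sketch, not CATS itself).

    params: one list of candidate values per parameter.
    Returns test cases (tuples) such that every pair of values, for every
    pair of parameters, appears together in at least one test case.
    """
    rng = random.Random(seed)
    n = len(params)
    # Every (param i, param j, value of i, value of j) combination to cover.
    uncovered = {(i, j, vi, vj)
                 for i, j in combinations(range(n), 2)
                 for vi in params[i] for vj in params[j]}
    tests = []
    while uncovered:
        # Force one still-uncovered pair into the candidate so every
        # iteration makes progress; randomize the remaining parameters
        # and keep the candidate that covers the most new pairs.
        i, j, vi, vj = next(iter(uncovered))
        best, best_gain = None, -1
        for _ in range(samples):
            cand = [rng.choice(values) for values in params]
            cand[i], cand[j] = vi, vj
            gain = sum((a, b, cand[a], cand[b]) in uncovered
                       for a, b in combinations(range(n), 2))
            if gain > best_gain:
                best, best_gain = tuple(cand), gain
        tests.append(best)
        uncovered -= {(a, b, best[a], best[b])
                      for a, b in combinations(range(n), 2)}
    return tests

params = [list(range(5)) for _ in range(8)]  # the post's example: 8 screens, 5 choices
suite = pairwise_tests(params)
print(len(suite))  # dozens of cases, versus 5**8 = 390,625 exhaustive
```

For the example above (8 parameters with 5 values each) there are only 700 value pairs to cover, so a greedy generator like this typically needs a few dozen test cases rather than 390,625.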


This is useful material, DJ. I’m looking forward to Part II/II.

Thank you,
Stephen Ibaraki

