All-pairs (pairwise) testing is a combinatorial technique for achieving adequate test coverage when testing every possible combination would be too resource intensive and the risk of less-than-complete testing is acceptable. The fundamental concept is to test every pair of variable values, rather than every possible combination, which still gives good coverage of the interactions between variables.
Consider the following hypothetical example: testing a suite of office applications on multiple platforms to verify a small code change to the way documents are exported.
The applications run on desktop and mobile devices, with either a native app, the default browser, or Chrome, and documents are exported in one of two formats. The variables are:
| Application | Device | Client | Format |
|----------------|---------|-----------------|--------------|
| Word processor | Windows | Native app | Microsoft |
| Spreadsheet | MacOS | Default browser | OpenDocument |
| Presentation | iOS | Chrome | |
| | Android | | |
At first glance, there are 72 possible combinations (3 x 4 x 3 x 2). However, on Android the default browser is Chrome, so the six ‘Android + Default browser’ combinations duplicate the ‘Android + Chrome’ ones, leaving 66 distinct combinations. If the risk is high, testing all of them may be justified, but for a small code change with a complex test, 66 tests could take considerable time to complete, which warrants a more pragmatic test approach.
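The arithmetic is easy to sanity-check with a short script; a minimal sketch (the variable names are just for illustration):

```python
from itertools import product

applications = ["Word processor", "Spreadsheet", "Presentation"]
devices = ["Windows", "MacOS", "iOS", "Android"]
clients = ["Native app", "Default browser", "Chrome"]
formats = ["Microsoft", "OpenDocument"]

# Full Cartesian product: 3 x 4 x 3 x 2 = 72 combinations.
combos = list(product(applications, devices, clients, formats))
print(len(combos))  # 72

# On Android the default browser *is* Chrome, so those rows are duplicates.
distinct = [c for c in combos
            if not (c[1] == "Android" and c[2] == "Default browser")]
print(len(distinct))  # 66
```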
Creating the first pairs
Select the most numerous variable first, which is ‘Device’ with four values, followed by the next most numerous (in this example ‘Client’ and ‘Application’ are tied at three values each, so either will do). Pair the first value of the first variable with every value of the second variable, then repeat for each remaining value of the first variable. A code sketch of this step follows the table:
| Device | Application |
|---------|----------------|
| Windows | Word processor |
| Windows | Spreadsheet |
| Windows | Presentation |
| MacOS | Word processor |
| MacOS | Spreadsheet |
| MacOS | Presentation |
| iOS | Word processor |
| iOS | Spreadsheet |
| iOS | Presentation |
| Android | Word processor |
| Android | Spreadsheet |
| Android | Presentation |
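This first step is simply the Cartesian product of the two most numerous variables; a minimal sketch:

```python
from itertools import product

devices = ["Windows", "MacOS", "iOS", "Android"]
applications = ["Word processor", "Spreadsheet", "Presentation"]

# Cross the two most numerous variables: 4 x 3 = 12 base rows.
for device, app in product(devices, applications):
    print(device, app, sep=" | ")
```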
Next, add columns to pair up the third and fourth variables, iterating through their values (a code sketch of this step follows the table):
| Device | Application | Client | Format |
|---------|----------------|--------|--------|
| Windows | Word processor | Native | MS |
| Windows | Spreadsheet | Edge | Open |
| Windows | Presentation | Chrome | MS |
| MacOS | Word processor | Native | Open |
| MacOS | Spreadsheet | Safari | MS |
| MacOS | Presentation | Chrome | Open |
| iOS | Word processor | Native | MS |
| iOS | Spreadsheet | Safari | Open |
| iOS | Presentation | Chrome | MS |
| Android | Word processor | Native | Open |
| Android | Spreadsheet | Chrome | MS |
| Android | Presentation | Chrome | Open |
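This step can be mechanized by cycling through the third and fourth variables while walking the base pairs. The sketch below reproduces the table above, assuming Edge, Safari, and Chrome are the default browsers on their respective platforms:

```python
from itertools import cycle, product

devices = ["Windows", "MacOS", "iOS", "Android"]
applications = ["Word processor", "Spreadsheet", "Presentation"]

# The generic "Default browser" resolves to a concrete client per device.
default_browser = {"Windows": "Edge", "MacOS": "Safari",
                   "iOS": "Safari", "Android": "Chrome"}

clients = cycle(["Native", "Default", "Chrome"])  # third variable
formats = cycle(["MS", "Open"])                   # fourth variable

for device, app in product(devices, applications):
    client = next(clients)
    if client == "Default":
        client = default_browser[device]
    print(device, app, client, next(formats), sep=" | ")
```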
Rearranging the pairs
There is still some duplication: Word processor is always paired with Native, Presentation is always paired with Chrome, and there are two Android + Chrome pairs. Let’s rearrange the clients into a non-repeating pattern (not just [1,2,3]) and eliminate one of the Android + Chrome pairs:
| Device | Application | Client | Format |
|---------|----------------|--------|--------|
| Windows | Word processor | Native | MS |
| Windows | Spreadsheet | Edge | Open |
| Windows | Presentation | Chrome | MS |
| MacOS | Word processor | Chrome | Open |
| MacOS | Spreadsheet | Safari | MS |
| MacOS | Presentation | Native | Open |
| iOS | Word processor | Safari | MS |
| iOS | Spreadsheet | Native | Open |
| iOS | Presentation | Chrome | MS |
| Android | Word processor | Chrome | Open |
| Android | Presentation | Native | Open |
Review the result to ensure there are pairs for the values of each variable:
All devices are tested with the native app, the default browser, and Chrome (on Android the default browser is Chrome). Every application is tested on at least three of the four devices, is tested natively (albeit on different platforms) and in a browser, and is tested with both document formats.
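This review can be automated by collecting the value pairs each test case covers and comparing them with the pairs we expect. A sketch of one such check, the application/format pairing:

```python
# The 11 rearranged test cases: (device, application, client, format).
cases = [
    ("Windows", "Word processor", "Native", "MS"),
    ("Windows", "Spreadsheet",    "Edge",   "Open"),
    ("Windows", "Presentation",   "Chrome", "MS"),
    ("MacOS",   "Word processor", "Chrome", "Open"),
    ("MacOS",   "Spreadsheet",    "Safari", "MS"),
    ("MacOS",   "Presentation",   "Native", "Open"),
    ("iOS",     "Word processor", "Safari", "MS"),
    ("iOS",     "Spreadsheet",    "Native", "Open"),
    ("iOS",     "Presentation",   "Chrome", "MS"),
    ("Android", "Word processor", "Chrome", "Open"),
    ("Android", "Presentation",   "Native", "Open"),
]

apps = {app for _, app, _, _ in cases}
formats = {fmt for _, _, _, fmt in cases}

# Every application should meet every format at least once.
wanted = {(a, f) for a in apps for f in formats}
covered = {(app, fmt) for _, app, _, fmt in cases}
print("missing application/format pairs:", wanted - covered)  # set()
```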
If there were any specific risk areas, you may choose to swap some pairings around or add additional cases to cover them, for example to increase coverage of the more popular devices or application types.
In this example, Edge appears only once. If that is not acceptable, add another case pairing Edge with a different format and application.
We now have 11 test cases (or perhaps a few more if necessary), far fewer than the potential 66: a significant saving in time, in exchange for accepting the risk that one of the untested combinations may be defective.
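For larger sets of variables, building the table by hand becomes tedious, and pairwise generators can do the work automatically. A minimal sketch using the third-party allpairspy package (an assumption on my part: it must be installed separately, and its generated cases will differ from the hand-built table above):

```python
from allpairspy import AllPairs

parameters = [
    ["Word processor", "Spreadsheet", "Presentation"],
    ["Windows", "MacOS", "iOS", "Android"],
    ["Native app", "Default browser", "Chrome"],
    ["Microsoft", "OpenDocument"],
]

# Yields a pairwise-covering set of cases, one list per test case.
for i, case in enumerate(AllPairs(parameters), start=1):
    print(i, case)
```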