One of the services our engineering team maintains takes a stream of live data combined with human input, performs complex mathematical calculations on that data, and generates output that is consumed by subsequent services.
The human input is made through a relatively simple UI, but the testing challenges of the UI are anything but simple – the service output is non-deterministic, and trends in the output data are dependent on both the feed state and human input.
Therefore, writing any kind of prescriptive test cases (automated or manual) would be very difficult, and require constant maintenance as and when the internal algorithms change.
Because the internal users work closely with the engineering team, in-depth testing is often not required and UAT is sufficient. This carries a risk, however: we have no documented testing for the UI as a whole, and knowledge of the individual facets of the interface is spread amongst the team.
Lightweight documentation – feature maps
Inspired by this blog post: https://fishoutthebox.medium.com/stop-writing-step-by-step-test-cases-e68026aa844e from Melissa Fisher (@fishoutthebox), I revisited the application and considered how we could document the behaviour of the service when the various UI elements of the interface are interacted with.
Mapping behaviour to features
By way of example, consider a (fictitious) commodity market simulation (perhaps as part of an MMORPG) which follows the laws of supply and demand – where there is ample supply, the price will be lower for both buyer and sellers, but when demand is high, sellers can dictate a higher price.
Writing test cases to cover all the possible behaviours around ‘limit orders’, ‘stop orders’, ‘good till cancelled’, ‘fill or kill’, etc. and how the market behaves after each test would require very specific initial conditions and actions, and thus be constrained to very specific input and output values.
Using exploratory test techniques may be equally unwieldy, as by their very nature they rely on experience and business domain knowledge, and have low repeatability.
Forming a class of test documentation between the rigid steps of prescriptive test cases and the guiding heuristics of exploratory testing, we can list each feature, its typical inputs and interactions, and the expected behaviour in tabular form:
| Feature | Inputs & Interactions | Behaviour |
| --- | --- | --- |
| Buy limit (filled) | Set a small buy order for a commodity below the current price, then add sufficient supply at the target buy price to close all open buy orders | Buy limit closes |
| Buy limit (unfilled) | Set a very large buy order for a commodity at the current price | Buy order closes all existing sell orders, and remains open |
| Sell order (unfilled) | Set a sell order for a commodity above the current price | Sell order remains |
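As a rough illustration of how a feature-map row could be exercised, here is a minimal sketch of the first row (‘Buy limit (filled)’) against a toy market model. Everything below — the `ToyMarket` class, its matching rule, and the prices — is hypothetical, invented purely for this sketch; it does not represent any real service or exchange logic:

```python
from dataclasses import dataclass


@dataclass
class Order:
    side: str       # "buy" or "sell"
    price: float
    qty: int
    open: bool = True


class ToyMarket:
    """Hypothetical minimal market: a buy order matches any open sell
    order priced at or below its limit (price-time priority is ignored
    for brevity)."""

    def __init__(self):
        self.orders = []

    def place(self, order):
        self.orders.append(order)
        self._match()
        return order

    def _match(self):
        buys = [o for o in self.orders if o.side == "buy" and o.open]
        sells = [o for o in self.orders if o.side == "sell" and o.open]
        for buy in buys:
            for sell in sells:
                if buy.open and sell.open and sell.price <= buy.price:
                    fill = min(buy.qty, sell.qty)
                    buy.qty -= fill
                    sell.qty -= fill
                    buy.open = buy.qty > 0
                    sell.open = sell.qty > 0


# Feature-map row "Buy limit (filled)": a small buy order below the
# current price stays open until supply arrives at the target price.
market = ToyMarket()
market.place(Order("sell", 105.0, 10))       # current ask sits at 105
buy = market.place(Order("buy", 100.0, 5))   # limit buy below the market
assert buy.open                              # no supply at 100 yet
market.place(Order("sell", 100.0, 5))        # supply arrives at target price
assert not buy.open                          # behaviour: buy limit closes
```

The point is not the model itself but that each table row translates directly into an input/interaction sequence and an observable behaviour, without prescribing exact steps.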
This format might only be relevant for some very specific classes of testing, but this (and Melissa’s post) is a good example of how to challenge established thinking on how to test, and examine whether there may be a better approach.