At the dawn of my testing career, before the widespread adoption of the internet and interconnected systems, the software project I first worked on was distributed as a boxed product (with a printed manual!), for installation on internal systems – connected to each other and perhaps some satellite offices, but without the global connectivity we now take for granted.
As the process of manufacturing a few thousand CDs (and, before that, magnetic media) for distribution was incredibly expensive, the final version of the software released to manufacturing had to have been thoroughly tested, as any subsequent changes would have to be provided on separate media (patches on a floppy disc, posted out to individual customer sites).
Testing the software was entirely manual, performed on a version that was largely complete, with a period of time allotted to find and fix any defects before release (the code-and-fix development model).
The test team (both of us!) were very much separate from the development team (a non-integrated test team) and only voiced our opinions on the software after it had been built, whilst we were testing the new version.
Testing took a great deal of time while defects were raised and fixed, and the software rebuilt and retested until all tests passed, at which point it could be released.
Thankfully, the process of software development and testing has moved on. In our interconnected world, the way software is delivered allows some business sectors to update their services rapidly and continually, and individual online devices (a desktop PC, phone, console or other smart device) can now be updated periodically with relative ease.
This does not mean overall quality is now less important – quite the reverse. Competing businesses in the online world try to out-innovate each other to increase market share, and there is an emphasis on security that exists only because your platform or device can be attacked by a teenage hacker on a different continent. Before the internet age, new products with enhanced features might have been released once a year at most, and security generally meant making sure your workstation was locked. Shared servers were password-protected, but encryption was not part of commercial operating systems – network traffic was easily intercepted plaintext (if you could plug something into the right part of the office network).
The modern age (now-ish)
Computers (in all forms) and the software that runs on them are now an integral part of society, and testing is now an embedded part of the development of software.
Rather than chucking some (half-baked) code over the wall to a separate team of testers, those testers now work within the development team and get involved in the quality aspects of the software before it has even been written.
As a business analyst once remarked, “Testers ask difficult questions that no-one else thinks of”. ‘Left-shifting’ – getting testers involved before code is written, typically while the user story is being reviewed and acceptance criteria are defined – allows problems to be found before they become part of the code.
Once the acceptance criteria have been agreed, test cases can be defined, and these may be automated by either testers or other members of the development team.
Testers can also be involved during the development of software – working with developers to provide fast feedback as the code is written.
Once development is complete, testers (or, as is becoming more common, someone else in the team) can execute the tests, record the results and report any defects.
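To make the flow above concrete – an agreed acceptance criterion becoming a defined, automated test case – here is a minimal sketch in Python. The criterion, function name and figures are hypothetical examples invented for illustration, not taken from any real project:

```python
# Hypothetical acceptance criterion, agreed during story review:
# "Orders of 10 or more items receive a 10% discount;
#  smaller orders pay full price."

def order_total(unit_price: float, quantity: int) -> float:
    """Illustrative code under test (an assumed implementation)."""
    total = unit_price * quantity
    if quantity >= 10:
        total *= 0.9  # apply the 10% bulk discount
    return round(total, 2)

# Test cases derived directly from the acceptance criterion,
# exercising the boundary on either side of 10 items.
def test_small_order_pays_full_price():
    assert order_total(5.00, 9) == 45.00

def test_bulk_order_gets_discount():
    assert order_total(5.00, 10) == 45.00  # 50.00 less 10%

if __name__ == "__main__":
    test_small_order_pays_full_price()
    test_bulk_order_gets_discount()
    print("all acceptance tests passed")
```

Written this way, the tests run standalone or under a test runner such as pytest, so either a tester or a developer can automate and execute them once the criterion is agreed.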
In upcoming posts, I intend to explore in depth the different types of testing, when those test types would be used, the techniques that can be applied to design the test cases, and ultimately how those test cases can be automated with some of the more popular tools in the industry.
One thought on “Testing in a team (then and now)”
I can relate to this: I started out in the same era. I worked on a data capture system for one of the utility regulators, and we used to distribute it to the companies we regulated. Testing was more about ensuring that the outcomes from that system were as accurate as possible. 100% accuracy was the requirement. I also had to test the connected systems for scraping data off into our database. Once that was done for the year, I switched roles and spent six months of the year as a data wrangler and liaison between our analysts and the regulated companies, arguing back and forth about data definitions, interpretation and analysis.
Oddly enough, though, we did do the shift-left stuff; but that was seen as part of the work of defining the needs of our analysis staff rather than as requirements for the software. The organisation didn’t see that work as being “software testing” but as a work stream of its own. I suppose the data definitions we collected were part of the overall system, only one element of which was software. This is probably one reason why I tend to look at end-to-end processes and have adopted a whole-business approach to products.