Last September the Exact Online Development Team deployed a new release of the product in Holland, Belgium and Turkey. Before this could happen, an intensive period of investigation and testing was essential – the Release Acceptance Test.
During such a phase we execute hundreds of test scripts (scenarios that re-create how people use the product), locating bugs in the new software that cause problems. These then have to be fixed before the update can be released to our customers.
One of our biggest challenges is accurately estimating the time needed for this. We need to make sure that we give ourselves enough time to do everything ahead of the update deadline. Three problems are particularly influential:
- Not all the planned testers can start testing on day one; many are still involved with other work. This means that some bugs are found late in the process, and fixing them causes delays.
- It’s also very difficult to accurately predict how long it will take to fix the problems we find and get the related test script signed off.
- The number of test scripts grows for each release as functionality expands.
So, how do we ensure the Release Acceptance Test finishes on time and doesn’t affect the rest of the deployment process?
As with Exact’s other product lines, Exact Online now uses the SCRUM project management and agile software development framework. During several months of getting to grips with it, we wondered if it might also be useful for managing the Release Acceptance Test…
What did we do?
We introduced SCRUM’s principles in several ways. Firstly, we divided the one month test period into two equal ‘sprints’:
- Sprint 1 Goal: Testers test all planned test scripts (‘The Test Backlog’). This means that every script is tested at least once and all discovered bugs are registered.
- Sprint 2 Goal: All blocking bugs are fixed and re-tested. Only when all bugs related to a test script have been approved can it be signed off as done.
The testers then estimated how long each test script would take to complete (including writing new test cases when necessary). We also defined SCRUM test teams (STTs), grouping test cases by similar functional areas.
With test times known per team and agreed by those involved, it was much easier to allocate testing resources specifically and accurately. Each team also received a few days of overcapacity, a handy buffer for solving any problems that might arise.
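The planning step above can be sketched in a few lines: sum the testers’ estimates per functional area to get each SCRUM test team’s workload, then add the buffer. This is a minimal illustration, not our actual planning tooling; the script names, areas, hours and buffer size are all made-up assumptions.

```python
import math

# Illustrative constants (assumptions, not Exact Online's real figures)
HOURS_PER_DAY = 8
BUFFER_DAYS = 2  # the "handy buffer" of overcapacity per team

# test script -> (functional area, estimated hours); hypothetical examples
estimates = {
    "Invoice entry":   ("Financials", 6),
    "VAT return":      ("Financials", 10),
    "Project billing": ("Projects",   8),
}

def workload_per_team(estimates):
    """Total estimated hours per functional area (one area per test team)."""
    totals = {}
    for area, hours in estimates.values():
        totals[area] = totals.get(area, 0) + hours
    return totals

def days_to_plan(team_hours):
    """Round the workload up to whole days, then add the overcapacity buffer."""
    return math.ceil(team_hours / HOURS_PER_DAY) + BUFFER_DAYS

teams = workload_per_team(estimates)
# Financials: 16 hours -> 2 days + 2 buffer = 4 days planned
# Projects:    8 hours -> 1 day  + 2 buffer = 3 days planned
```

The point of the buffer is visible in the arithmetic: a team is never planned at exactly its estimated capacity, so a late-found bug or an underestimated script does not immediately break the sprint.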
Team Leaders (comparable with SCRUM Masters) then organized daily stand-up meetings where each member reported progress, planning and impediments (if any). They also updated the Burn Down Chart daily to highlight the sprint’s progress.
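A burn down chart is simple to compute: plot the remaining test scripts each day against an ideal straight line from the full backlog down to zero at sprint end. The sketch below shows the idea with made-up numbers (120 scripts, a 10-working-day sprint); it is an assumption-laden illustration, not the tooling we actually used.

```python
def ideal_burndown(total_scripts, sprint_days):
    """Remaining work per day if the team burns down at a constant rate."""
    rate = total_scripts / sprint_days
    return [round(total_scripts - rate * day, 1) for day in range(sprint_days + 1)]

def status(actual_remaining, ideal_remaining):
    """Compare today's actual figure with the ideal line for the stand-up."""
    if actual_remaining < ideal_remaining:
        return "ahead of schedule"
    if actual_remaining > ideal_remaining:
        return "behind schedule"
    return "on schedule"

ideal = ideal_burndown(total_scripts=120, sprint_days=10)
print(ideal[0], ideal[5], ideal[10])                    # 120.0 60.0 0.0
print(status(actual_remaining=55, ideal_remaining=ideal[5]))  # ahead of schedule
```

Comparing each team’s actual line against the ideal line is what makes it obvious, day by day, which teams need an extra hand.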
What was the result of this?
The multi-disciplinary teams of quality engineers, functional designers and software engineers were so highly focused that they achieved Sprint 1’s goal (testing all planned test scripts) within the two weeks. This previously took as many as four! The daily updated burn down chart also made it clear how each team was doing – on schedule, going faster or going slower. This made it easy to lend an extra hand where it was needed.
And its influence on quality? As this largely depends on meeting and exceeding our customers’ expectations, it’s a difficult question to answer. What is clear is that we’ve never achieved such focus in our testing before. We also knew the number of bugs that needed attention, and in which areas, much earlier. That alone contributes to higher quality by making better planning of the fixing process possible.
I’ll let you know how SCRUM influenced the fixing in part 2 in a few weeks! In the meantime, if you have any questions about how we make our new versions available to you, I’d be very happy to hear them.