In the age-old waterfall model, discrete blocks of time and resources were allocated for quality testing after the product was stabilized and the majority of development was complete. As we adopt agile methodologies, fitting quality testing and its expectations into the software development life cycle (SDLC) becomes more challenging. Additionally, today's continuous improvement methodology, in which development, DevOps and QA collaborate closely, calls for clearer "lines of demarcation" between the deliverables, expectations and outcomes expected of each role. Oftentimes, however, those lines are still blurred, which makes it critical to put a quality process in place that ensures efficiency, accountability and, ultimately, the delivery of a superior product to market.
Broadly speaking, we can categorize each role as follows:
- Development - provides individual and integrated components
- DevOps - provides all the infrastructure needed for integrating the various components, deploying builds and triggering test suites on top of them
- QA - provides the various test suites, their execution and the respective reports
Let's consider the following release process, based on an iterative model of a continuous integration and development cycle:
Here, we are establishing two environments:
- goldenDev – an environment for active development (both dev and QA), integrating components, fixing all bugs, running test suites
- goldenQA – a more stable environment where extensive testing is done; only blocker issues are fixed here
Only when a certain level of stability and quality is achieved in goldenQA is the go-ahead given to deploy to production.
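To make this flow concrete, here is a minimal sketch of the promotion decision between the two environments. The environment names come from the description above; the BuildResult fields and the promote helper are hypothetical placeholders for whatever your CI tooling actually reports.

```python
from dataclasses import dataclass

@dataclass
class BuildResult:
    """Hypothetical summary of a build's state, as reported by CI."""
    build_id: str
    suites_passed: bool   # all test suites green on goldenDev
    open_blockers: int    # blocker issues still open on goldenQA

def promote(build: BuildResult) -> str:
    """Decide how far a build travels through the environments."""
    if not build.suites_passed:
        return "keep iterating in goldenDev"   # active development and bug fixing
    if build.open_blockers > 0:
        return "fix blockers in goldenQA"      # only blocker issues are fixed here
    return "deploy to production"              # required stability and quality reached

print(promote(BuildResult("2.3.0-rc1", suites_passed=True, open_blockers=0)))
```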
Drilling down further into the quality process in this cycle, we can categorize the various test suites as follows (a sketch of chaining them together appears after the list):
- Unit tests: usually defined by dev and triggered as part of each deployment
- Build Acceptance tests (BAT): defined by QA; includes basic sanity tests and is triggered after deployment to qualify a specific build
- Functional & System tests (FAST): defined by QA and triggered after BAT. Runs all functional and end-to-end tests (new features and regression).
- Performance & Stress tests (PAST): defined by QA and triggered after FAST. Runs benchmarking tests to establish performance and stress metrics for the system.
- Longevity tests (runs only on goldenQA): defined by QA. Runs usage scenarios to gauge the stability of the overall product over extended periods of time.
- Code Coverage: the codebase can be instrumented to measure the coverage achieved by all test suites. These suites can then be continuously enhanced, based on the gaps identified, to achieve more coverage.
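These suites run in a fixed order, each one gating the next. The sketch below chains them with a simple orchestrator; the pytest marker names (unit, bat, fast, past) are assumptions for illustration and would map to however your suites are actually tagged in your test framework.

```python
import subprocess
import sys

# Suites in gating order: a failure stops the pipeline so the build
# is never qualified by a later, more expensive suite.
SUITES = [
    ("Unit", "pytest -m unit"),   # dev-owned, runs on every deployment
    ("BAT",  "pytest -m bat"),    # QA sanity tests that qualify the build
    ("FAST", "pytest -m fast"),   # functional and end-to-end tests
    ("PAST", "pytest -m past"),   # performance and stress benchmarks
]

def run_pipeline() -> None:
    for name, command in SUITES:
        print(f"Running {name} suite: {command}")
        result = subprocess.run(command.split())
        if result.returncode != 0:
            sys.exit(f"{name} suite failed; build not qualified for the next stage.")
    print("All suites passed; build is qualified.")

if __name__ == "__main__":
    run_pipeline()
```

Longevity runs and coverage instrumentation would sit outside this per-build loop, since they run over extended periods on goldenQA rather than on every deployment.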
As we move through the different testing phases of the release cycle, we need a clear definition that quantifies the overall quality of the product, so a better judgement call can be made on the release decision. Based on the above test suites, the following criteria can give us good data points (a sketch of evaluating them as an automated gate follows the list):
Release Exit Criteria
- All the planned test execution should be completed with the following criteria:
- 100% pass rate for unit and BAT tests
- 100% pass rate for regression tests (FAST)
- New feature run to plan (RTP): 100% executed (FAST)
- New feature pass to plan (PTP): 80–90% passed (FAST), with no critical or blocker bugs in an open state
- Performance & stress testing (PAST) should be completed, with acceptable metrics (agreed upon against the previous baseline) and no degradation
- Longevity tests should be completed, with system uptime of 7–10 days and no downtime or degradation
- Automation achieved
- Identified high-priority tests should be automated
- Code coverage: 60–80% achieved
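To make the go/no-go call repeatable, these thresholds can be evaluated as an automated gate. Below is a minimal sketch of that check; the ReleaseMetrics fields are assumed to be collected from the test reports described earlier, and the field names are illustrative rather than a standard reporting format.

```python
from dataclasses import dataclass

@dataclass
class ReleaseMetrics:
    unit_bat_pass_rate: float       # unit + BAT pass rate (expected 1.0)
    regression_pass_rate: float     # FAST regression pass rate (expected 1.0)
    new_feature_rtp: float          # run to plan: fraction of planned tests executed
    new_feature_ptp: float          # pass to plan: fraction of executed tests passing
    open_critical_or_blockers: int  # critical/blocker bugs still open
    past_within_baseline: bool      # PAST metrics acceptable vs. previous baseline
    longevity_uptime_days: float    # uninterrupted uptime during the longevity run
    code_coverage: float            # overall coverage from instrumented runs

def meets_exit_criteria(m: ReleaseMetrics) -> bool:
    """Apply the release exit criteria listed above as a single boolean gate."""
    return (
        m.unit_bat_pass_rate == 1.0
        and m.regression_pass_rate == 1.0
        and m.new_feature_rtp == 1.0
        and m.new_feature_ptp >= 0.80
        and m.open_critical_or_blockers == 0
        and m.past_within_baseline
        and m.longevity_uptime_days >= 7
        and m.code_coverage >= 0.60
    )
```

The thresholds expressed as ranges above (80–90% PTP, 60–80% coverage) are shown here at their lower bounds; each team would pin them to whichever point in the range it has agreed on.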
In conclusion, product and test development efforts happen in parallel: DevOps provides the necessary machinery while each role focuses on its respective area, churning out code and testing it thoroughly in the goldenDev environment. Only reliable, stable code is tested in goldenQA, with minimal development effort. Once the exit criteria are achieved, we should be ready to deploy the respective codebase to production.