Sample Test Strategy for a Microservice Project with APIs Only


<Project Name> Test Strategy


Overview

<Project Overview To be added>

This is a backend microservices project developed in Java. The system will integrate with external systems such as shared databases and applications. The project will also build APIs to be consumed by other services, i.e., it will also act as a provider.

Purpose

<Project Purpose>

In Scope: 

The following will be covered during the life cycle of the project:

Testing Types | Sub-Type | Owner | Stage | Environment | Tool/Framework
API Testing | PACT | QA | Sprint Testing | <TBU> | Rest Assured API Framework with PACT
API Testing | API Functional | QA | Sprint Testing | <TBU> | Rest Assured API Framework with PACT
E2E Testing | Integration Testing | QA | RTL | <TBU> | Rest Assured API Framework
NFR | Performance Testing | QA | Before Go Live | <TBU> | LoadRunner
NFR | Security Testing | TBC | Before Go Live | <TBU> | <TBU>
Production | Sanity Test | QA | After Go Live | Prod | Manual

 

Out of Scope:

The following are out of scope from the QA purview:

  • Unit testing (performed by developers)

Test Approach

The overall test approach consists of four phases. Each phase has its own approach, which is documented in the corresponding section of this document.

Phase 1 | Phase 2 | Phase 3 | Phase 4
Sprint Testing | End-to-End Testing | Non-Functional Testing: Security Testing, Performance Testing | Production
API Test Validations | Integration Testing (API) | | 

This is a sequential process; however, if needed, Phases 1 and 2 or Phases 2 and 3 can run in parallel because they use different environments.

Test Phases

The test automation and execution approach is based on the "Test Pyramid" and "Agile Test Quadrants" models, combined with a continuous delivery and continuous testing model.


Sprint Testing



Sprint Testing is embedded within the sprint, and the sprint cannot complete until both development and testing are complete. This means both teams work together to get the sprint into a ready-for-release position before moving to the next sprint.

Functional Testing



Unit Test (API)

Purpose: Developers write the unit tests to validate the business logic.
Approach: Code coverage, static code analysis, etc.
Performed by: Developers
Entry Criteria: DOR (Definition of Ready)
Exit Criteria: Quality gate (code coverage) passed on the CI pipeline.
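As an illustration of the kind of business-logic check a developer-written unit test covers, the sketch below exercises a hypothetical fee-calculation rule in plain Java. The rule, class, and thresholds are invented for this example; real unit tests in this project would use JUnit with the build's coverage gate.

```java
// Illustrative only: a hypothetical business rule and plain-Java checks of
// its edge cases. Real unit tests would be written with JUnit/Mockito.
public class FeeCalculatorSketch {

    // Hypothetical rule: orders of 100.00 or more ship free,
    // otherwise a flat 4.99 fee applies. Negative totals are invalid.
    static double shippingFee(double orderTotal) {
        if (orderTotal < 0) {
            throw new IllegalArgumentException("orderTotal must be >= 0");
        }
        return orderTotal >= 100.00 ? 0.00 : 4.99;
    }

    static void check(boolean condition, String label) {
        if (!condition) throw new AssertionError(label);
    }

    public static void main(String[] args) {
        check(shippingFee(100.00) == 0.00, "boundary: free shipping at 100");
        check(shippingFee(99.99) == 4.99, "just below boundary pays the fee");
        check(shippingFee(0.00) == 4.99, "zero-value order still pays");
        boolean rejected = false;
        try {
            shippingFee(-1.00);
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        check(rejected, "negative totals are rejected");
        System.out.println("all unit checks passed");
    }
}
```

Note how the boundary value (100.00) and the invalid input are tested explicitly; these are the permutations coverage gates tend to miss when only the happy path is exercised.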

Consumer-Driven Contract Testing (API)

Purpose: <TBU>
Approach: <TBU>
Performed by: <TBU>
Entry Criteria: <TBU>

Exit Criteria: <TBU>
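Although the details above are still to be updated, the core idea of consumer-driven contract testing can be sketched without any Pact dependency: the consumer publishes the interactions it relies on, and the provider's build verifies it honours them. The endpoint, field names, and classes below are hypothetical; in this project, Pact JVM would automate this exchange via pact files and a broker.

```java
import java.util.List;
import java.util.Map;

// Illustrative only: the consumer-driven-contract idea in plain Java.
// Pact JVM replaces all of this with generated pact files and verification.
public class ContractSketch {

    // A consumer-authored expectation: "when I call this endpoint,
    // I need this status and at least these fields in the response."
    record Interaction(String method, String path, int expectedStatus,
                       List<String> requiredFields) {}

    // Stand-in for the provider: (status, response fields) per endpoint.
    static Map.Entry<Integer, List<String>> providerRespond(String method, String path) {
        if (method.equals("GET") && path.equals("/accounts/42")) {
            return Map.entry(200, List.of("id", "status", "balance", "currency"));
        }
        return Map.entry(404, List.of());
    }

    // Provider-side verification of one consumer interaction.
    static boolean verify(Interaction i) {
        var response = providerRespond(i.method(), i.path());
        return response.getKey() == i.expectedStatus()
                && response.getValue().containsAll(i.requiredFields());
    }

    public static void main(String[] args) {
        // Contract published by a hypothetical downstream consumer.
        var contract = List.of(
                new Interaction("GET", "/accounts/42", 200, List.of("id", "status")),
                new Interaction("GET", "/missing", 404, List.of()));
        boolean honoured = contract.stream().allMatch(ContractSketch::verify);
        System.out.println("provider honours consumer contract: " + honoured);
    }
}
```

The key property this preserves: the provider may add fields or endpoints freely, but removing anything a consumer's contract requires fails the provider's build before release.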

API Functional Testing

Purpose: Validate the API request and response permutations and combinations. This covers all of the application's API endpoints that consumers will hit, and verifies that all API responses are handled gracefully.

Approach: All stories must be certified by QA in the Sandbox/Local environment to meet the DoD. All API validations will be automated and executed with every master/feature branch build.

Performed by: QA
Entry Criteria:

    • Exit criteria for Dev have been completed.
    • Stories are available for test and test scenarios defined.
    • Mocks are created if dependent APIs are not available. 
    • The automation framework is in place and ready to use.

Exit Criteria:

    • No P1 or P2 defect open.
    • All P3 and below open defects signed off by PO/BA.
    • 100% test scripts are executed with >95% pass rate.
    • Any automation script issues to be fixed and pushed to the master of the automation framework repository.
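One slice of "all responses handled gracefully" is making sure every status class an API can return maps to a defined outcome, so no permutation falls through unhandled. The sketch below shows that idea in plain Java; the outcome names and retry policy are illustrative assumptions, and the real suite would drive these cases through Rest Assured against actual endpoints.

```java
// Illustrative only: mapping every HTTP status class to a defined outcome,
// so no API response permutation is left unhandled. Outcome names and the
// retry policy (429/5xx retryable) are assumptions for this sketch.
public class ResponseHandlingSketch {

    enum Outcome { OK, CLIENT_ERROR, RETRYABLE, UNEXPECTED }

    static Outcome classify(int statusCode) {
        if (statusCode >= 200 && statusCode < 300) return Outcome.OK;
        if (statusCode == 429 || (statusCode >= 500 && statusCode < 600)) return Outcome.RETRYABLE;
        if (statusCode >= 400 && statusCode < 500) return Outcome.CLIENT_ERROR;
        return Outcome.UNEXPECTED;
    }

    public static void main(String[] args) {
        // {status code, expected Outcome ordinal}
        int[][] cases = { {200, 0}, {201, 0}, {400, 1}, {404, 1},
                          {429, 2}, {500, 2}, {503, 2}, {302, 3} };
        Outcome[] outcomes = Outcome.values();
        for (int[] c : cases) {
            if (classify(c[0]) != outcomes[c[1]]) {
                throw new AssertionError("status " + c[0] + " misclassified");
            }
        }
        System.out.println("all response permutations classified");
    }
}
```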

End to End Testing

An end-to-end test verifies that a system meets external requirements and achieves its goals, testing the entire system from end to end. To achieve this, the system is treated as a black box, and the tests exercise as much of the fully deployed system as possible.

API: Scenarios will be executed to validate end-to-end functionality with external systems. The tests will be developed within the sprint but executed once the stories are ready for RTL, i.e., the external systems are available and integrated.

Performed by: QA


Entry Criteria

    • An environment is available where the individual microservices can be integrated before they are deployed.
    • The automation suite should be ready to be run.

Exit Criteria

    • No P1 or P2 defect open.
    • All P3 and below open defects signed off by PO/BA.
    • 100% test scripts are executed with >95% pass rate.
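The black-box shape of such a test can be sketched with the JDK alone: the test makes a real HTTP call and asserts only on the observable response. Here the external system is replaced by an in-process mock (as the entry criteria allow until RTL integration); the endpoint and payload are hypothetical.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Illustrative only: an end-to-end-style black-box check using only the JDK.
// The downstream system is mocked in-process; endpoint and body are invented.
public class E2eSketch {

    public static void main(String[] args) throws Exception {
        // Mock of the external system the service integrates with.
        HttpServer mock = HttpServer.create(new InetSocketAddress(0), 0);
        mock.createContext("/accounts/42", exchange -> {
            byte[] body = "{\"id\":\"42\",\"status\":\"ACTIVE\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (var os = exchange.getResponseBody()) { os.write(body); }
        });
        mock.start();
        int port = mock.getAddress().getPort();

        try {
            // Black-box call, as the E2E suite would make it.
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(
                            URI.create("http://localhost:" + port + "/accounts/42"))
                            .GET().build(),
                    HttpResponse.BodyHandlers.ofString());

            if (resp.statusCode() != 200
                    || !resp.body().contains("\"status\":\"ACTIVE\"")) {
                throw new AssertionError("end-to-end check failed: " + resp);
            }
            System.out.println("e2e sketch passed");
        } finally {
            mock.stop(0);
        }
    }
}
```

In RTL, the same test runs unchanged against the real integrated systems; only the base URL moves from the mock to the deployed environment.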

API Non-Functional Testing

API Performance Testing

Purpose
Purpose
Performance testing will be carried out in the Performance (NFT) environment for cycles 1 and 2. The NFT environment will be similar to the Production environment.

Tool

LoadRunner

Approach

Define:

  • Identify test data requirements
  • Create API scripts in Load Testing Tool
  • Identify key metrics to monitor
  • Establish a reporting template

Execute: 

  • Execute the defined scripts in Performance environment
  • Baseline NFRs.
  • Pass/fail status post load test execution

Analysis:

  • Collate test results and prepare reports
  • Conduct cross-team reviews of test results
  • Work with the respective teams to help identify the root cause of issues
  • Retest the application after tuning/optimisation
  • Defect triage
  • Once all NFRs are met, prepare a closure report for performance testing sign-off

Process:

  • Single-User Single-Transaction profiling (SUST) (load type: single user)
  • Multi-User Single-Transaction profiling (MUST) (load type: multiple users)
  • Average load test (load: average users; ramp-up time: TBU)
  • Peak load test (load: peak users; ramp-up time: TBU)
  • Endurance test (load: peak and average; ramp-up time: TBU)
  • Stress test (load: incremental; ramp-up time: TBU)
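Once the NFRs are defined, the virtual-user counts for the average and peak load tests can be derived with Little's Law (concurrency = throughput x time per request). The throughput, response-time, and think-time figures below are hypothetical placeholders, not this project's NFRs.

```java
// Illustrative only: sizing concurrent virtual users via Little's Law.
// All numeric inputs are hypothetical stand-ins for the real NFRs.
public class LoadSizingSketch {

    static long virtualUsers(double requestsPerSecond, double avgResponseSeconds,
                             double thinkTimeSeconds) {
        // Each simulated user spends (response + think) seconds per request,
        // so users needed = target throughput x seconds per request.
        return Math.round(requestsPerSecond * (avgResponseSeconds + thinkTimeSeconds));
    }

    public static void main(String[] args) {
        // Hypothetical NFRs: 50 req/s average, 120 req/s peak,
        // 0.4 s response time, 1.6 s think time between requests.
        long avgUsers = virtualUsers(50, 0.4, 1.6);   // 50  x 2.0 = 100
        long peakUsers = virtualUsers(120, 0.4, 1.6); // 120 x 2.0 = 240
        System.out.println("average-load vusers: " + avgUsers);
        System.out.println("peak-load vusers: " + peakUsers);
    }
}
```

The same arithmetic feeds the LoadRunner scenario configuration once the real NFR figures replace the placeholders.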

 

Performance In Scope Activities

  • Performance testing will be considered for APIs.
  • Automation of performance test scenarios and APIs using the load testing tool.
  • Two cycles will be in scope; executions in these cycles include SUST, MUST, average load, peak load, stress, and endurance tests, plus API benchmarking.
  • Server-side response times will be reported using the standard average and 90th-percentile metrics.

Performance Out of Scope Activities

  • Performance testing execution on any environment other than NFT.
  • Performance testing and analysis of third-party URLs is out of scope, though those URLs will be part of the test (not filtered out).
  • Multi-Geo load locations.
  • Anything that is not covered in the in-scope activities.

 

Security Testing

Purpose
Security testing is a process that verifies that the information system protects the data and maintains its intended functionality. It involves an active analysis of the application for any weaknesses, technical flaws, or vulnerabilities. The primary purpose is to identify the vulnerabilities and subsequently repair them.

Approach: <TBD>

Process: <TBD>

Entry Criteria: TBD

Exit Criteria: TBD

Tools Stack

Module | Technology Stack | Testing Frameworks
APIs | Java | Cucumber with Java (Rest Assured)
API Performance Testing | Performance Centre | Performance Centre

Test Process

Overarching Test Process: TBU 

Test Data Management

Test Data

Test data for the automation test suite will be maintained in common repositories in the form of JSON files. This lets testers modify data in one place and have the change reflected across the automated test suite.
Additional test data will be identified based on the specific test scenarios outlined during refinement.
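As a sketch of the one-place-to-change idea, the example below reads a shared JSON data file and looks up values from it. A real Rest Assured suite would deserialise these files with a JSON library such as Jackson or Gson; the naive regex lookup here only keeps the sketch dependency-free, and the file name and keys are hypothetical.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: centralised test data read from a shared JSON file.
// Real suites would use Jackson/Gson instead of this flat-JSON regex lookup.
public class TestDataSketch {

    static String lookup(String json, String key) {
        Matcher m = Pattern
                .compile("\"" + Pattern.quote(key) + "\"\\s*:\\s*\"([^\"]*)\"")
                .matcher(json);
        if (!m.find()) throw new IllegalStateException("missing key: " + key);
        return m.group(1);
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a file such as test-data/accounts.json in the shared repo.
        Path data = Files.createTempFile("accounts", ".json");
        Files.writeString(data, "{ \"accountId\": \"42\", \"currency\": \"GBP\" }");

        String json = Files.readString(data);
        System.out.println("accountId=" + lookup(json, "accountId")
                + " currency=" + lookup(json, "currency"));
        Files.deleteIfExists(data);
    }
}
```

Every automated test resolves its data through the same file, so updating one value propagates to all scenarios that use it.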

 

Test Data Clean-up/Setup: 

Access to the database will be granted in order to:

  • Clean and set up the test data
  • Bring the system back to a ready state

Defect Management

During sprint testing, any defect found that cannot be resolved within the sprint will be logged in JIRA for tracking. All other defects detected during testing will also be logged and reported in JIRA using the defect workflow.

Defect Workflow



Priority and Severity Ratings

Table of Issue Severity: TBU

Table of Priority: TBU

Reporting

Sprint Reporting

This will be done as part of the story DoD during the DoD review.

Cucumber Reporting
BDD execution reports (HTML) are available after every execution and depict the following:

    • Pie charts showing feature pass/fail percentage and count
    • Pie charts showing scenario pass/fail percentage and count
    • Features - Pass/Fail/Total Count
    • Scenarios - Pass/Fail/Total Count
    • Steps - Pass/Fail/Skipped/Pending/Undefined/Total Count
    • Duration for every Feature/Step
    • Feature/Scenario/Step Description
    • Error reason (in case of failure)
    • UI Snapshot (in case of UI Scenario)

Risks, Issues, Assumptions & Dependencies

Assumptions: 

  1. Mocks will be developed if the integrations are not available 
  2. E2E test cases will be executed in SIT & NFT environments
  3. Performance Testing will be done as part of the RTL in NFT.
  4. Access to appropriate system will be available. 

Risks: TBU

Dependencies: TBU

Appendix A

Glossary of Terms

Acronym | Meaning
TBU | TBU

 

