Software Testing Concepts and Methodologies

AUTHOR : RAJENDRA NARAYAN MAHAPATRA


Testing


Verifying the functionality and behavior of an application against the requirement specification is known as Testing.

OR

Execution of a program with an intention of finding defects is known as Testing.

7 Principles of Testing

1. Testing shows the presence of defects, not their absence.
2. Exhaustive Testing is impossible.
3. Defect Clustering i.e. a small number of modules contain most of the defects detected.
4. Pesticide Paradox i.e. repeating the same test cases over and over eventually stops finding new defects. To overcome this, test cases need to be regularly reviewed and revised. Adding new or different test cases helps find more defects.
5. Testing is context dependent. The way you test an e-commerce website will be different from the way you test a commercial off-the-shelf application.
6. Early Testing i.e. testing activities should start as early as possible in the SDLC.
7. Absence of errors is a fallacy i.e. finding and fixing defects doesn't help if the system build is unusable and doesn't fulfill the user's needs or requirements.
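
Principle 2 can be made concrete with a little arithmetic. The sketch below uses a hypothetical login form (the field names and value counts are invented for illustration) to show how quickly the number of input combinations explodes:

```python
# Hypothetical form: each field name and its count of representative
# input values is an assumption made purely for illustration.
field_values = {
    "username": 50,   # 50 representative username inputs
    "password": 50,   # 50 representative password inputs
    "language": 20,   # 20 supported locales
    "browser": 10,    # 10 browser/version combinations
}

total_combinations = 1
for field, count in field_values.items():
    total_combinations *= count

print(f"Combinations to test exhaustively: {total_combinations:,}")
```

Even this tiny form would need 500,000 test executions, which is why testers sample inputs (equivalence partitioning, boundary values) rather than testing exhaustively.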

SDLC (Software Development Life Cycle)


  • Waterfall Model
  • Spiral Model
  • V & V Model
  • Prototype Model
  • Agile



Waterfall Model


Requirement Analysis => Feasibility Study => Design =>  Coding => Testing => Implementation => Maintenance


Requirement Analysis

This phase is handled by a Business Analyst or Onsite Coordinator, who gathers all the requirements for the project.


  • Deliverable: Impact Analysis document or Suitability Analysis document



Feasibility Study


  • This is done by a group of people such as the Architect, Finance Manager, HR and Project Manager.
  • The Architect recommends the technologies best suited for the project.
  • The Finance Manager estimates the cost of the project.
  • HR reports on the resources available for the project.
  • The Project Manager needs basic knowledge of all these areas to coordinate the study.



Design

It is of two types:

HLD (High Level Design or System Design)

  • It is done by the System Architect.
  • It describes the software in brief (the overall architecture).

LLD (Low Level Design or Component Design)

  • It is done by a Senior Developer.
  • It gives the details of each component in terms of flow graphs.



Coding & Testing

Coding is done by the developers and Testing is done by the test engineers.

Implementation

Implementation is done by the build team. They also train some people on the client side on how to use the product.

Maintenance

In the maintenance phase, the project manager creates a team called the CCB (Change Control Board). They receive software change requests from the client.

Disadvantages

  • Here the requirements are fixed. We can't change the requirements once the project starts.
  • Here test engineers have to sit idle until coding is finished.

Applicable

For short term projects, we can adopt this Waterfall model.


SPIRAL MODEL


Spiral Model works in an iterative nature.
This model gives more emphasis on risk analysis.
Mostly large and complicated projects adopt spiral model.
Every iteration starts with the planning and ends with the product evaluation by the client.

Planning: Requirements gathering, Cost estimation and Resource allocation.
Risk Analysis: Strengths and weaknesses of the project.
Design: Development, Internal Testing and Code deployment.
Evaluation: Client Evaluation of the product to get the feedback.

Advantages:
It allows requirement changes.
Suitable for large and complicated projects.
It allows better risk analysis.
Cost effective due to good risk management.

Disadvantages:
Not suitable for small projects.
Success of the project depends on risk analysis phase.
More experienced resources have to be hired for risk analysis.

V & V Model (Verification & Validation)



It overcomes the disadvantages of the Waterfall model, where testers get involved in the project only at the last phase of the development process.
In the V & V model, testers are involved from the early phases of the SDLC. Testing starts in the early stages of product development, which avoids the downward flow of defects and in turn reduces a lot of rework.
Both teams (development & testing) work in parallel. The testing team works on activities like preparing the test strategy, test plan and test cases while the development team works on the SRS, design and coding.

Deliverables are parallel in this model.
  1. Once the client sends the BRS/CRS (Business/Customer Requirement Specification), both teams start their activities. The developers translate the BRS into the SRS. The testing team reviews the BRS to find missing or wrong requirements and writes the acceptance test plan and acceptance test cases.
  2. In the next phase, the development team sends the SRS to the testing team for review and starts working on the HLD of the product. The testing team reviews the SRS and writes the system test plan and system test cases.
  3. In the next phase, the development team starts working on the LLD of the product. The testing team reviews the HLD and writes the integration test plan and integration test cases.
  4. In the next phase, the development team starts coding the product. The testing team reviews the LLD and writes the functional test plan and functional test cases.
  5. In the next phase, the development team releases the build to the testing team once unit testing is done. The testing team carries out functional testing, integration testing, system testing and acceptance testing on the released build step by step.

Advantages:

As deliverables are parallel, it takes less time to complete the process.

Disadvantages:
Initial investment is more because testing team is involved from the early stage.

Prototype Model
Requirements -> Analysis -> Prototype design & development -> Prototype testing -> Customer review -> Design -> Coding -> Testing -> Implementation -> Maintenance
After collecting requirements from the client, the development team designs and develops a dummy model. The testing team tests the dummy model. It is then sent to the customer for approval. Once the customer gives the go-ahead, work starts on the original project.
If the client is new to software and the developer is new to the domain, then the Prototype model can be adopted.

Agile Scrum Methodology


Sprint: In scrum, the project is divided into sprints. Each sprint has a fixed timeline (2 weeks to 1 month), agreed by the scrum team during the sprint planning meeting. Here, user stories are split into different modules. The end result of every sprint should be a potentially shippable product.


The three most important aspects of the agile scrum methodology are


1> Roles     2> Artifacts          3> Meetings


Roles : 


Product Owner
===========
The product owner usually represents the client. He acts as the SPOC (single point of contact) from the client side. He is the one who prioritizes the list of product backlog items which the scrum team should finish and release.

Scrum Master
==========
The scrum master acts as a facilitator to the scrum development team. He clarifies queries, shields the team from distractions, teaches the team how to use scrum and concentrates on ROI.

Scrum Development Team
===================
Developers and Testers who develop and test the product respectively. Scrum development team decides the effort estimation to complete a product backlog item.

Scrum Team
=========
It includes BA and Scrum Master as well. The team size should not exceed 12.


Artifacts :


User Stories
=========
User stories are not like traditional requirement documents. In user stories, stakeholders mention what features they need or what they want to achieve.

Product Backlog
============
Product Backlog is a repository where the product backlog items are stored and maintained by the product owner. The list of product backlog items are prioritized by the product owner as high, medium or low.

Sprint Backlog
===========
The group of user stories which the scrum development team agrees to work on during the current sprint.

Product Burndown Chart
==================
A graph which shows how many product backlog items are completed / not completed.

Sprint Burndown Chart
=================
A graph which shows how much work remains versus time within the current sprint.

Release Burndown Chart
==================
A graph which shows the list of releases that are still pending which scrum team had planned.

Defect Burndown Chart
=================
A graph which shows the list of defects raised and fixed.
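
The data behind a burndown chart is simple to compute. The sketch below uses invented backlog items and story points (the names and numbers are assumptions, not from the text) to derive the total and remaining points a chart would plot:

```python
# Hypothetical sprint backlog: item name -> story points.
sprint_backlog = {"login": 5, "compose-mail": 8, "send-mail": 3, "logout": 2}

# Items the team has finished so far in the sprint.
completed = {"login", "logout"}

total_points = sum(sprint_backlog.values())
remaining_points = sum(points for item, points in sprint_backlog.items()
                       if item not in completed)

print(f"Total: {total_points} points, Remaining: {remaining_points} points")
```

Plotting `remaining_points` at the end of each day against an ideal straight line from `total_points` down to zero gives the sprint burndown chart.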


Meetings :


Sprint Planning Meeting
=================
The first step of scrum is sprint planning meeting which the entire team attends.
Here, the product owner selects the product backlog items from the product backlog.

Most important user stories are at the top of the list and least important are at the bottom.
Scrum team decides and provides the effort estimation to complete the product backlog items.

Daily Scrum Meeting (Daily Stand-up)
===========================
Here, each team member reports to the peer team members on what he/she did yesterday, what he/she is going to do today and what obstacles are impeding their progress. This meeting should not exceed 15 minutes.

Sprint Review
==========
In the sprint review meeting, the scrum development team presents a demonstration of a potentially shippable product. The Product Owner declares which items are completed and which are not. The Product Owner may add new items to the product backlog based on the stakeholders' feedback.

Sprint Retrospective Meeting
=====================
The scrum team meets again after the sprint review meeting and documents the lessons learnt in the earlier sprint, such as "What went well" and "What could be improved". It helps the scrum team avoid those mistakes in subsequent sprints.

Applicable
========
When the client is not clear on the requirements.
The client expects quick releases.
The client doesn't give all the requirements at a time.

Types of Testing

Black box Testing
Internal software design is not considered in this type of testing. Here tests are based on the requirements and functionality.

White box Testing
This testing is based on knowledge of the internal logic of an application's code. It is also known as clear box or glass box testing. Internal software and code workings must be known for this type of testing. Here tests are based on coverage of code statements, conditions, paths and branches. It is usually performed by developers.
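
As a small illustration of branch coverage, the sketch below (the function is invented for this example) writes one white-box test per branch plus a boundary value:

```python
def classify_age(age):
    # Two branches: each white-box test below targets one of them.
    if age >= 18:
        return "adult"
    return "minor"

# One test per branch, plus the boundary value where the condition flips.
assert classify_age(30) == "adult"   # true branch
assert classify_age(10) == "minor"   # false branch
assert classify_age(18) == "adult"   # boundary value
print("all branches covered")
```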

Functional Testing
In Functional Testing, we check whether the functionality of the application works as expected per the functional requirement specification document.

Component Testing
In Component Testing, each and every component is tested rigorously.
Integration Testing
Testing the data flow or interface between two features or modules is known as Integration Testing.
It is of three types.
  1. Top down Integration Testing
  2. Bottom up Integration Testing
  3. Big Bang Integration Testing
Top down Integration Testing
An approach to the Integration Testing where the component at the top of the hierarchy is tested first with lower level components being simulated by stubs. Now the tested components are used to test the lower level components and this process is repeated until the component at the lowest level has been tested.
Stub
While integrating the modules in top down approach, if any mandatory module is missing then that is replaced with a temporary program known as Stub. It is called by the module under Test.
Bottom up Integration Testing
An approach to the Integration Testing where the component at the lowest level is tested first then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Driver
While integrating the modules in bottom up approach, if any mandatory module is missing then that is replaced with a temporary program known as Driver. It calls the module to be tested.
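
The stub and driver ideas above can be sketched in Python. The module names (payment, order, tax) are hypothetical, chosen only to illustrate the two roles:

```python
# Top-down: the high-level order module is ready, but the payment
# module is not, so a stub stands in for it.
def payment_stub(amount):
    # Stub: returns a canned response instead of real payment logic.
    return {"status": "approved", "amount": amount}

def place_order(amount, payment_fn):
    # Module under test; it CALLS the (stubbed) lower-level module.
    result = payment_fn(amount)
    return "confirmed" if result["status"] == "approved" else "rejected"

assert place_order(100, payment_stub) == "confirmed"

# Bottom-up: the low-level tax module is ready, but its caller is not,
# so a driver invokes it directly with test inputs.
def calculate_tax(amount):
    return round(amount * 0.1, 2)

def tax_driver():
    # Driver: temporary caller that exercises the module under test.
    assert calculate_tax(100) == 10.0
    assert calculate_tax(0) == 0.0
    return "driver run complete"

print(tax_driver())
```

The direction of the call is the key difference: a stub is called by the module under test, while a driver calls the module under test.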
Big Bang Approach
An approach to the Integration Testing where all the components are combined and tested at one go is known as Big Bang approach.
System Testing
System testing is conducted on a complete integrated system to check the system's compliance with specified requirements. It is also called as End to end (E2E) testing. e.g.
  • User A gets logged in to the application using his credentials.
  • User A clicks on "Compose Mail" button.
  • User A gives valid email address of User B in the "To" section and gives valid subject line and valid body of the message
  • User A clicks on "Send" button.
  • User A should be able to see the confirmation message.
  • User A gets logged out of the application.
  • User B gets logged in to the application.
  • User B should be able to see the message sent from User A at the top of the list.
  • User B gets logged out of the application.
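
The end-to-end steps above can be walked through as one executable scenario. The in-memory MailService class below is invented for illustration; a real system test would drive the actual application through its UI or API:

```python
class MailService:
    """Minimal in-memory stand-in for the mail application under test."""

    def __init__(self):
        self.inboxes = {}       # user -> list of (sender, subject, body)
        self.logged_in = set()

    def login(self, user):
        self.logged_in.add(user)

    def logout(self, user):
        self.logged_in.discard(user)

    def send(self, sender, to, subject, body):
        assert sender in self.logged_in, "sender must be logged in"
        # Newest mail goes to the top of the recipient's list.
        self.inboxes.setdefault(to, []).insert(0, (sender, subject, body))
        return "Message sent"   # confirmation message

# End-to-end scenario mirroring the steps listed above:
svc = MailService()
svc.login("userA")
confirmation = svc.send("userA", "userB", "Hello", "Hi there")
assert confirmation == "Message sent"
svc.logout("userA")

svc.login("userB")
sender, subject, _ = svc.inboxes["userB"][0]   # message at the top of the list
assert sender == "userA" and subject == "Hello"
svc.logout("userB")
print("E2E scenario passed")
```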
User Acceptance Testing
Acceptance Testing is a level of software testing where a system is tested for acceptability. The purpose of this test is to evaluate the system's compliance with the business requirements and assess whether it is acceptable for delivery.
It is of two types
  • Alpha Testing
  • Beta Testing


Alpha Testing
1. Alpha Testing is conducted by a team of highly skilled testers at the development environment / site.
2. In Alpha Testing, developers are present during testing so that they can record all the problems encountered.
3. Alpha Testing is conducted when the development of the software is nearing completion.

Beta Testing
1. Beta Testing is conducted either by the customer or by the end users at customer environment / site.
2. In Beta Testing, customer records all the issues encountered during testing and reports to the developer.
3. After passing Alpha Testing and before releasing the software, Beta Testing has to be conducted.

Defect Life Cycle



When a defect is raised in a defect tracking tool, its status is "New". It is then changed to "Open". Once the defect is assigned to the Development Lead, the status becomes "Assigned". The Development Lead may reject the defect for three reasons:

1> It is a duplicate defect - somebody has raised it before and it is being raised again.
2> It is due to a misunderstanding of the requirements.
3> It is due to referring to old requirements.

Or he may change the status to "Deferred" if the defect is minor and can be fixed in the next release. If it is a genuine defect, the Development Lead assigns it to the concerned developer. Once the developer fixes the defect, the status becomes "Fixed". The developer changes the status to "Retest" once the defect is ready to be re-tested. If the retest is successful, the tester can close the defect; otherwise he can reopen it, and it is again assigned to the Development Lead.
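
The life cycle reads naturally as a state machine. The sketch below encodes the allowed transitions described above (status names follow the text; the transition table is this author's reconstruction):

```python
# Allowed status transitions, reconstructed from the life cycle description.
TRANSITIONS = {
    "New": {"Open"},
    "Open": {"Assigned"},
    "Assigned": {"Rejected", "Deferred", "Fixed"},
    "Fixed": {"Retest"},
    "Retest": {"Closed", "Reopened"},
    "Reopened": {"Assigned"},
}

def change_status(current, new):
    # Reject any status change the life cycle does not allow.
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

# Happy path: a genuine defect that is fixed and passes retest.
status = "New"
for step in ["Open", "Assigned", "Fixed", "Retest", "Closed"]:
    status = change_status(status, step)
print(status)  # Closed
```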

Re-Testing
Retesting is conducted to verify whether a particular defect has been fixed.
OR
Retesting makes sure that the test cases which failed in the last execution pass after the defects behind those failures are fixed.

Regression Testing
Re-execution of the same test cases in different releases to make sure that any modification, addition or deletion of code does not affect the unchanged areas or features is known as Regression Testing.
OR
Regression testing is conducted to make sure that the unchanged or existing features are not impacted due to the bug fixes.
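
A regression suite is simply the same set of checks re-run on every release. The sketch below uses a hypothetical discount function and test cases (both invented for illustration):

```python
def apply_discount(price, percent):
    # Feature under test; any future bug fix elsewhere must not break it.
    return round(price * (1 - percent / 100), 2)

def run_regression_suite():
    # The same cases are re-executed after every code change or release.
    cases = [((100, 10), 90.0), ((50, 0), 50.0), ((80, 25), 60.0)]
    results = []
    for (price, pct), expected in cases:
        results.append(apply_discount(price, pct) == expected)
    return all(results)

assert run_regression_suite()
print("regression suite passed")
```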

Smoke Testing
Smoke testing is nothing but validating whether the major and critical functionality of the application works as expected. It is also called BVT (Build Verification Test). The purpose is to reject a badly broken build so that the QA team doesn't waste time installing and testing it.
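
A smoke suite can be pictured as a short checklist run against every new build. The checks and the build dictionaries below are hypothetical, standing in for whatever critical paths a real application has:

```python
def smoke_test(app):
    # A handful of critical checks; any failure rejects the build.
    checks = {
        "app starts": app.get("started", False),
        "login page loads": app.get("login_page", False),
        "database reachable": app.get("db_ok", False),
    }
    failed = [name for name, ok in checks.items() if not ok]
    return ("ACCEPT", []) if not failed else ("REJECT", failed)

good_build = {"started": True, "login_page": True, "db_ok": True}
broken_build = {"started": True, "login_page": False, "db_ok": True}

print(smoke_test(good_build))    # accepted for further testing
print(smoke_test(broken_build))  # rejected before QA wastes time on it
```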

Sanity Testing
Sanity Testing is a kind of software testing performed after receiving a build with minor changes in code or functionality to ascertain that the bugs have been fixed and no further issues are introduced due to these changes. The goal is to determine that the proposed functionality works roughly as expected. If sanity fails, the build is rejected to save time and costs involved in more rigorous testing.
The objective is not to verify thoroughly the new functionality, but to determine that the developer has applied some rationality (sanity) while producing the S/W.

Non Functional Testing
Non-functional Testing is a type of testing that checks the non-functional aspects (performance, usability, reliability, scalability, portability etc.) of a software application. It is designed to test the readiness of a system per the non-functional parameters, which are never addressed in Functional Testing.

Performance Testing
Checking the stability and response time of an application by applying load is known as performance testing.

Load Testing
Checking the stability and response time of an application by applying a load which is less than or equal to the expected number of users is known as load testing.

Stress Testing
Checking the stability and response time of an application by applying a load which is greater than the expected number of users is known as stress testing.

Volume Testing
Checking the stability and response time of an application with bulk amount of data is known as volume testing.

Soak Testing
Checking the stability and response time of an application by applying load for a continuous period of time is known as soak Testing.
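
The common thread in the definitions above is measuring response time while varying the load. The sketch below times a toy operation (a stand-in for a real request handler; the load figures are illustrative only):

```python
import time

def handle_request(n):
    # Toy operation standing in for a real request handler.
    return sum(i * i for i in range(n))

def measure_response_time(load):
    # Run `load` requests and return the average time per request.
    start = time.perf_counter()
    for _ in range(load):
        handle_request(1000)
    elapsed = time.perf_counter() - start
    return elapsed / load

normal_load = measure_response_time(100)    # load test: expected volume
stress_load = measure_response_time(2000)   # stress test: above expected
print(f"avg/request at normal load: {normal_load:.6f}s, "
      f"under stress: {stress_load:.6f}s")
```

Real performance testing uses dedicated tools that simulate concurrent users; this sketch only shows the shape of the measurement.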

Accessibility Testing
Testing an application from physically challenged person's point of view i.e. whether a physically challenged person will be able to access the software application or not is known as accessibility testing.

Security Testing
Testing how well a system protects against unauthorized internal or external access or damage is known as security testing.

Localization (L10N) Testing
Localization testing verifies the changes made to the software for a particular region to suit the natural, cultural or linguistic requirements of that region.

Internationalization (I18N) Testing
Internationalization Testing is making multiple versions of a software to suit natural, cultural and linguistic requirements of different regions in the world.

Usability Testing
Testing an application for user friendliness is known as usability Testing.

Defect
Defect is nothing but deviation of an actual result from an expected result.
OR
The variation between the expected result and actual result is known as Defect.
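
The definition can be shown in a few lines: a bug is planted deliberately in a hypothetical function, and the deviation between expected and actual result is the defect:

```python
def rectangle_area(length, width):
    # Defect planted for illustration: '+' used where the requirement
    # specifies area = length * width.
    return length + width

expected = 12                    # per the requirement: 3 * 4
actual = rectangle_area(3, 4)    # what the code actually returns

# The variation between expected and actual result is the defect.
assert actual != expected
print(f"expected {expected}, got {actual} -> defect found")
```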

Bug
Bug is an informal name of a defect.

Error
When we can't compile or run a program due to a coding mistake, that is called an error.

Failure
Once the product is deployed and the customer finds any issue, the product is called a failure. After release, if end users find any issue, it is considered a failure.

Severity
Severity can be defined as the impact of a defect on the customer's business. In simple words, how much the system is affected by a particular defect.
It can be categorized as
1> Critical
2> Major
3> Minor
4> Trivial

Critical
A critical severity issue is an issue where a large piece of functionality or major system is completely broken and there is no workaround to move further.
Major
A major Severity issue is an issue where a large piece of functionality or a major system is completely broken but there is a workaround to move further.
Minor
A minor severity issue is an issue that imposes some loss of functionality but for which there is an acceptable & easily reproducible workaround.
e.g. spelling mistake, font family, font size, font alignment, background color
Trivial
A trivial severity issue is an issue which is related to enhancement of the system.
Priority
Priority is nothing but the importance given to fixing a defect. It gives the order in which defects should be resolved. Developers decide which defect to take up next based on priority. It can be categorized as high, medium and low.
High
A high priority issue is an issue which has a high impact on customer's business or an issue which affects the system severely and the system can't be used until the issue is fixed.
Medium
Issues that can be fixed and released in the next build come under medium priority. Such issues can be resolved alongside other development activities.
Low
An issue which has no impact on customer's business comes under low priority.
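
Since developers pick up defects in priority order (not severity order), a defect queue can be sketched as a simple sort. The defect records below are hypothetical, but the labels follow the categories described above:

```python
PRIORITY_ORDER = {"High": 0, "Medium": 1, "Low": 2}

# Hypothetical defect records combining severity and priority.
defects = [
    {"id": "D-3", "summary": "Logo misaligned",   "severity": "Minor",    "priority": "Low"},
    {"id": "D-1", "summary": "Checkout crashes",  "severity": "Critical", "priority": "High"},
    {"id": "D-2", "summary": "Report export slow","severity": "Major",    "priority": "Medium"},
]

# The work queue is ordered by priority, not by severity.
work_queue = sorted(defects, key=lambda d: PRIORITY_ORDER[d["priority"]])
print([d["id"] for d in work_queue])  # ['D-1', 'D-2', 'D-3']
```

Note that severity and priority can disagree: a cosmetic defect on a marketing home page may be Minor severity but High priority.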
Test Strategy
Test Strategy identifies the best possible use of the available resources and time to achieve the required testing coverage or the identified testing goals. It decides on which parts and aspects of the system the emphasis should fall. Test strategy determination is based on a number of factors; a few of them are listed below:
1> Product Technology
2> Component Selection
3> Product Criticality 
4> Product Complexity
Verification
-> The process of evaluating a system or component to check whether the product of the given development phase satisfies the conditions imposed at the start of that phase is called as verification.
-> Verification checks whether we are building the product right.
-> It involves activities like walkthroughs, inspections and reviews.
Validation
-> Determination of the correctness of the product of software development with respect to user needs or requirements is known as validation.
-> Validation checks whether we are building the right product.
-> It involves activities like whitebox testing and blackbox testing.

Difference between QA and QC ?

QA

-> QA is process oriented.
-> QA involves in
  • identifying the need of the process.
  • designing the process.
  • monitoring the process.
  • improving the process.

-> QA deals with defect prevention.

QC
-> QC is product oriented.
-> QC involves in verifying the functionality of an application against the requirement specification.
-> QC deals with defect detection.

CMM (Capability Maturity Model)
The Capability Maturity Model is a model for judging the maturity of the software processes in an organization and for identifying the key practices that are required to increase the maturity of those processes. It has five levels:

1> Initial
2> Repeatable
3> Defined
4> Managed
5> Optimization

Configuration Management
Configuration Management covers the processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who made the changes.
