Quality Assurance Interview Questions | Eklavya Online

Adhoc testing is an informal way of testing the software. It does not follow formal processes such as requirement documents, test plans, and test cases.

Characteristics of adhoc testing are:

  • Adhoc testing is performed after the completion of formal testing on an application.
  • The main aim of adhoc testing is to break the application without following any process.
  • The testers executing adhoc testing should have deep knowledge of the product.

Preventive Approach: It is also known as the Verification process. The preventive approach aims to prevent defects. In this approach, tests are designed in the early stages of the Software Development Life Cycle, before the software has been developed. Testers try to prevent defects in the early stages; this comes under Quality Assurance.

Reactive Approach: It is also known as the Validation process. This approach aims to identify defects. Tests are designed and executed after the software has been developed, and the goal is to find defects. It comes under Quality Control.

QA stands for Quality Assurance. QA is a set of activities designed to ensure that the developed software meets all the specifications or requirements mentioned in the SRS document.

QA follows the PDCA (Plan, Do, Check, Act) cycle:

Plan
The plan is a phase in Quality Assurance in which the organization determines the processes which are required to build a high-quality software product.

Do
Do is the phase in which the planned processes are implemented: the software is developed and tested.

Check
This phase monitors the processes and verifies whether they meet the user requirements.

Act
The Act is a phase for implementing the actions required to improve the processes.

The traceability matrix is a document that maps and traces user requirements to test cases. The main aim of the Requirement Traceability Matrix is to verify that all requirements are covered by test cases so that no functionality is missed during software testing.
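A Requirement Traceability Matrix can be sketched as a simple mapping from requirements to the test cases that cover them. The requirement and test case IDs below are hypothetical examples.

```python
# Minimal sketch of a Requirement Traceability Matrix (RTM).
# All IDs here are hypothetical.

requirements = ["REQ-1", "REQ-2", "REQ-3"]

# Map each requirement to the test cases that cover it.
rtm = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-201"],
    "REQ-3": [],  # no coverage yet -- the RTM makes this gap visible
}

uncovered = [req for req in requirements if not rtm.get(req)]
print("Requirements without test coverage:", uncovered)
```

Scanning the matrix for empty entries is exactly how the RTM reveals functionality that would otherwise be missed during testing.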

Both monkey testing and adhoc testing follow an informal approach, but monkey testing does not require deep knowledge of the software, whereas testers performing adhoc testing should have deep knowledge of the software.

An audit is defined as an on-site verification activity, such as inspection or examination, of a process or quality system. A quality audit is a systematic analysis of a quality system carried out by an internal or external quality auditor or an audit team. Quality audits are performed at predefined time intervals and ensure that the institution has clearly defined internal system monitoring procedures linked to effective action. Audits are an essential management tool for verifying objective evidence of processes.

Build: A build is the version of the software that the development team hands over to the testing team.

Release: A release is the software that is handed over to the users by the testers and developers after testing.

There are four different levels in software testing:

  • Unit/Component testing
  • Integration testing
  • System testing
  • Acceptance testing

Unit testing

  • It is the lowest level in most of the models.
  • Units are the individual programs or modules in the software.

Unit testing is performed by the programmer, who tests the modules; if any bug is found, it is fixed immediately.
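A unit test, for instance, exercises one module in isolation. The sketch below uses Python's standard `unittest` module against a hypothetical `apply_discount` function.

```python
import unittest

# Hypothetical unit under test: a small function written by the programmer.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200, 25), 150)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99, 0), 99)

    def test_invalid_percent_rejected(self):
        # A bug found here is fixed immediately by the programmer.
        with self.assertRaises(ValueError):
            apply_discount(100, 150)

if __name__ == "__main__":
    unittest.main()
```

Each test checks a single behavior of the unit, so a failure points directly at the malfunctioning module.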
Integration testing

  • Integration means combining all the modules, and these modules are tested as a group.
  • Integration testing tests the data that flows from one module to another.
  • It checks the communication between two or more modules, not the functionality of the individual modules.

System testing

  • System testing is used to test the complete, integrated system.
  • It tests the software to ensure that it conforms to the requirements specified in the SRS document.
  • It is the final level of testing performed by the testing team and covers both functional and non-functional testing.

Acceptance testing

  • Acceptance testing is performed by the users or customers to check whether the software meets their requirements.

The Test Plan is a document that contains the plan for all the testing activities needed to deliver a quality product. It is derived from artifacts such as the product description, SRS, or Use Case documents, and covers all future testing activities of the project. It is usually prepared by the Test Lead or Test Manager, and its focus is to describe what to test, how to test, when to test, and who will perform which tests.

Bug leakage occurs when a bug is not found by the testing team but is found by the end users. Bug release is when the software is released to the market even though the testers know that bugs are present in the release. Such bugs have low priority and severity. This situation arises when the customer prefers getting the software on time over the delay and cost involved in fixing the remaining bugs.

QA stands for Quality Assurance. The QA team ensures quality by monitoring the whole development process, tracking the outcomes, and adjusting processes to meet expectations.

The roles of Quality Assurance are:

  • The QA team is responsible for monitoring the process to be carried out for development; its responsibilities include planning, testing, and executing the process.
  • The QA Lead creates the timetable and agrees on a Quality Assurance plan for the product.
  • The QA team communicates the QA process to the team members.
  • The QA team ensures traceability of test cases to requirements.

The bug life cycle is also known as the defect life cycle. Bug life cycle is a specific set of states that a bug goes through. The number of states that a defect goes through varies from project to project.

New
When a new defect is logged and posted for the first time, then the status is assigned as New.

Assigned
Once the bug is posted by the tester, the test lead approves the bug and assigns it to the development team.

Open
The developer starts analyzing and works on the defect fix.

Fixed
When the developer makes the necessary code changes and verifies them, he/she can mark the bug status as Fixed.

Retest
The tester retests the code at this stage to check whether the developer has fixed the defect, and changes the status to Retest.

Reopen
If the bug persists even after the developer has fixed it, the tester changes the status to Reopen, and the bug once again goes through the life cycle.

Verified
The tester retests the bug after the developer has fixed it; if no bug is found, the status changes to Verified.

Closed
If the bug no longer exists, the status changes to Closed.

Duplicate
If the defect is reported twice or corresponds to the same issue as a previous bug, its status changes to Duplicate.

Rejected
If the developer feels that the defect is not a genuine defect, then it changes the status to Rejected.

Deferred
If the bug is not of higher priority and can be solved in the next release, then the status changes to Deferred.
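The states above can be sketched as a small state machine. The transition table below is one plausible arrangement of the states described; as noted, the exact set of transitions varies from project to project.

```python
# Sketch of the bug life cycle as a state machine. The allowed
# transitions below are illustrative, not a fixed standard.

TRANSITIONS = {
    "New":       {"Assigned", "Duplicate", "Rejected", "Deferred"},
    "Assigned":  {"Open"},
    "Open":      {"Fixed", "Duplicate", "Rejected", "Deferred"},
    "Fixed":     {"Retest"},
    "Retest":    {"Verified", "Reopen"},
    "Reopen":    {"Assigned"},
    "Verified":  {"Closed"},
    "Deferred":  {"Assigned"},
    "Closed":    set(),
    "Duplicate": set(),
    "Rejected":  set(),
}

def move(status, new_status):
    """Change a bug's status, rejecting transitions the workflow forbids."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot move from {status} to {new_status}")
    return new_status

# Happy path: the bug is fixed on the first attempt.
status = "New"
for step in ["Assigned", "Open", "Fixed", "Retest", "Verified", "Closed"]:
    status = move(status, step)
print(status)  # Closed
```

Encoding the workflow this way lets a bug tracker reject invalid jumps, e.g. moving a bug straight from New to Closed.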

This is one of the most crucial questions. As a project manager or project lead, sometimes we might face a situation to call off the testing to release the product early. In those cases, we have to decide whether the testers have tested the product enough or not.

There are many factors involved in real-time projects when deciding to stop testing:

  • Testing or release deadlines have been reached.
  • The decided pass percentage of test cases has been achieved.
  • The risk in the project is under the acceptable limit.
  • All high-priority bugs and blockers have been fixed.
  • The acceptance criteria have been met.
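Criteria like these can be checked mechanically. The sketch below assumes a hypothetical 95% pass-rate target agreed in the test plan; real projects set their own thresholds.

```python
# Sketch of an exit-criteria check. The 95% target and the
# counts passed in are hypothetical examples.

def ready_to_stop(passed, total, open_blockers, pass_target=95.0):
    """Return True when the pass rate meets the target and no blockers remain."""
    pass_rate = 100.0 * passed / total
    return pass_rate >= pass_target and open_blockers == 0

print(ready_to_stop(passed=190, total=200, open_blockers=0))  # True  (95.0%)
print(ready_to_stop(passed=190, total=200, open_blockers=2))  # False (blockers open)
```

In practice the deadline and risk criteria remain judgment calls; only the quantitative ones lend themselves to automation like this.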

There are five different solutions to software development problems:

  • The requirements for software development should be clear, complete, and agreed upon by all, with requirements criteria set up.
  • The schedule should be realistic, with time for planning, designing, testing, fixing bugs, and re-testing.
  • Testing should be sufficient and should start immediately after one or more modules are developed.
  • Group communication tools should be used.
  • Rapid prototyping should be used during the design phase so that the customer can easily see what to expect.

The dimensions of risk are:

  • Schedule: Unrealistic schedules, e.g., expecting huge software to be developed in a single day.
  • Client: Ambiguous requirement definitions, unclear requirements, and changes in requirements.
  • Human Resources: Non-availability of sufficient resources with the skill level expected in the project.
  • System Resources: Non-availability of critical resources, whether hardware, software tools, or software licenses, will have an adverse effect.
  • Quality: Compound factors such as lack of resources along with a tight delivery schedule and frequent requirement changes will affect the quality of the product tested.

There are mainly two techniques to design the test cases:

Black box testing

  • It is a specification-based technique where the testers view the software as a black box with inputs and outputs.
  • In black box testing, the testers do not know how the software is structured inside the box; they know only what the software does, not how it does it.
  • This type of technique is valid for all the levels of testing where the specification exists.

White box testing

  • White box testing is a testing technique that evaluates the internal logic and structure of the code.
  • To implement white box testing, the testers should have knowledge of coding so that they can deal with the internal code. They look into the internal code and find the unit that is malfunctioning.
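In white box testing, test inputs are derived from the code's internal structure. The sketch below uses a hypothetical function and chooses one input per branch so that every path through the code is exercised.

```python
# White-box sketch: tests are designed by reading the code's branches.
# The function under test is a hypothetical example.

def classify_age(age):
    if age < 0:
        return "invalid"
    elif age < 18:
        return "minor"
    else:
        return "adult"

# One test input per branch gives full branch coverage.
assert classify_age(-1) == "invalid"   # exercises the `age < 0` branch
assert classify_age(10) == "minor"     # exercises the `age < 18` branch
assert classify_age(30) == "adult"     # exercises the `else` branch
print("all branches covered")
```

A black box tester, by contrast, would pick the same kinds of inputs from the specification alone, without seeing the branch boundaries in the code.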

The following are the types of documents in Software Quality Assurance:

Requirement Document
All the functionalities to be added to the application are documented in terms of requirements, and the resulting document is known as the Requirement document. It is made through the collaboration of various people on the project team, such as developers, testers, and Business Analysts.

Test Metrics
Test Metrics is a quantitative measure that determines the quality and effectiveness of the testing process.

Test plan
It defines the strategy that will be applied to test an application, the resources that will be used, the test environment in which testing will be performed, and the schedule of test activities.

Test cases
A test case is a set of steps and conditions used at the time of testing. This activity is performed to verify whether all the functionalities of the software are working properly. There can be various types of test cases, such as logical, functional, error, negative, physical, and UI test cases.

Traceability matrix
The traceability matrix is a table that traces and maps user requirements to test cases. The main aim of the Requirement Traceability Matrix is to verify that all requirements are covered by test cases so that no functionality is missed during software testing.

Test scenario
A test scenario is a collection of test cases that helps the testing team determine the positive and negative aspects of a project.

Testware is a term used to describe all the materials used to perform a test. Testware includes test plans, test cases, test data, and any other items needed to design and perform a test.

In Test Driven Development, test cases are prepared before writing the actual code. It means you have to write the test case before the real development of the application.

Test Driven Development follows:

  • Write the test cases
  • Execute the test cases
  • If the test case fails, then changes are made to make it correct
  • Repeat the process
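One iteration of this cycle can be sketched as follows: the test is written first (and would fail, since nothing exists yet), then just enough code is written to make it pass. The function name `slugify` and its behavior are hypothetical examples.

```python
import re

# Step 1 (red): the test is written before any implementation exists.
# Running it at this point would fail with a NameError.
def test_slugify():
    assert slugify("Hello, QA World!") == "hello-qa-world"

# Step 2 (green): the minimal implementation that makes the test pass.
def slugify(text):
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

# Step 3: execute the test case; if it fails, change the code and repeat.
test_slugify()
print("test passed")
```

The test thus acts as an executable specification that the code is written to satisfy, which is the core idea of Test Driven Development.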
  • Monkey testing is a type of black box testing used to test the application by providing random inputs and checking the system behavior, for example, whether the system crashes or not.
  • There is no need to create test cases to perform monkey testing.
  • It can also be automated, i.e., we can write programs or scripts that generate random inputs to check the system behavior.
  • This technique is useful when we are performing stress or load testing.

There are two types of monkeys:

  • Smart monkeys
  • Dumb monkeys

Smart Monkeys

  • Smart monkeys have a brief idea about the application.
  • They know which page the application will redirect to.
  • They know whether the inputs they are providing are valid or invalid.
  • If they find an error, they are smart enough to file a bug.
  • They also know the menus and buttons of the application.

Dumb Monkeys

  • Dumb monkeys have no idea about the application.
  • They do not know which pages the application will redirect to.
  • They provide random inputs and do not know the starting and ending points of the application.
  • They do not know much about the application, but they still find bugs such as environmental or hardware failures.
  • They also do not know much about the functionality and UI of the application.
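A dumb monkey can be scripted in a few lines: generate random inputs, feed them to the unit under test, and record only whether it crashes. The `parse_quantity` function below is a hypothetical unit under test.

```python
import random
import string

def parse_quantity(text):
    """Hypothetical unit under test: parse '<number> <unit>' strings, e.g. '3 kg'."""
    number, unit = text.split(" ", 1)
    return int(number), unit

random.seed(42)  # fixed seed so a failing run can be reproduced

crashes = 0
for _ in range(1000):
    # Dumb monkey: random printable junk, no knowledge of valid inputs.
    junk = "".join(random.choices(string.printable, k=random.randint(0, 12)))
    try:
        parse_quantity(junk)
    except Exception:
        crashes += 1  # a real monkey test would log the failing input

print(f"{crashes} of 1000 random inputs raised an exception")
```

A smart monkey would instead generate inputs shaped like `"<digits> <letters>"` most of the time, probing the boundaries of valid input rather than pure noise.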