Friday, March 26, 2010

Quality Assurance - QA Testing Interview Questions

What is testing?

Testing involves operating a system or application under controlled conditions and evaluating the results, to make sure the product was developed as per the requirements.
To confirm that the product meets the requirements, a test engineer must exercise each component with both positive (+ve) and negative (-ve) tests.
Test both normal and abnormal conditions.
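As an illustration, a minimal sketch of one positive and one negative test (the `divide` function below is hypothetical, not from any particular product):

```python
import unittest

def divide(a, b):
    """Hypothetical function under test: returns a / b."""
    return a / b

class DivideTests(unittest.TestCase):
    def test_positive_valid_input(self):
        # +ve test: normal condition, valid input produces the expected output
        self.assertEqual(divide(10, 2), 5)

    def test_negative_invalid_input(self):
        # -ve test: abnormal condition, invalid input is handled (raises an error)
        with self.assertRaises(ZeroDivisionError):
            divide(10, 0)

if __name__ == "__main__":
    unittest.main()
```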
Quality Assurance (QA)

Customer Satisfaction was the buzzword of the 80's. Customer Delight (Something that gives great pleasure or enjoyment) is today's buzzword and Customer Ecstasy is the buzzword of the new millennium. Products that are not customer (user) friendly have no place in the market although they are engineered using the best technology. The interface of the product is as crucial as the internal technology of the product.
Test the product to make sure it meets the end users' requirements, improve its performance, and ensure that problems are found and dealt with.
Software Development Life Cycle -- SDLC

Software life cycle begins when a software product is first conceived, and ends when it is no longer in use. It includes phases like: Initial concept, Requirements analysis, Functional design, Technical design, Coding, Unit test, Assembly Test, System test, Performance Test, Production Staging, Production Implementation, and Maintenance.

Initiation
A business need comes into the picture.
A concept document is approved by management.
System Concept Development Stage
Find resources (developers, testers, leads, managers, etc.)
Budget
What are the benefits and expected outcomes? What happens if it fails?
Planning
What technologies will be used?
Data collection
Requirement Analysis Phase
Business Analysts (BAs) collect requirements from the client (or end user).

Write BRD (Business Requirement Document)

Prepare the SRS – Software Requirement Specification

OR FRS (Functional Requirement Specification)

OR CRS (Component Requirement Specification)

The SRS is given to testers so they can get familiar with the product.

Design Phase
Data modeling by database architects (using ERwin tools, UML diagrams)
Functional Design, Technical Design
Walk-through meetings.

Development Stage
Coding

Unit Testing

Assembly Testing

System Testing
Integration Test – Testing of the flow of the application

Functionality Test

Black Box Testing: Concerned only with inputs and outputs; tests the functionality of text boxes, buttons, etc.

White Box Testing: Done by developers; tests the internal logic of the code.

Regression Testing: Test -> if a bug (defect) is found -> log the defect in a defect-tracking tool (e.g., Test Director) -> the defect is assigned to a developer -> the developer fixes it -> test again -> if the defect is fixed, close it; if the fix introduces another bug, reassign it to the developer.

Gress = step; regress = re-step.
In a regression test, test cases from Phase 1 are re-tested in the Phase 2 enhancement. Re-testing the application after bug fixes or enhancements ensures that the changes have not introduced unintended side effects or additional errors.

Performance Test

Load Test – e.g., with LoadRunner – measures response time.

User Acceptance Test: Once the product is ready, it is installed at the client location. The actual end users test the product to make sure it meets their requirements.

Maintenance

If the end user finds a bug, it goes back to the test engineer, who verifies it and assigns it to a developer to fix. The fix is tested and then applied to the end user's product. This is called OGS – ongoing support.
This cycle goes on until the product is no longer in use.
Software Process Models

Linear Sequential Model/Waterfall Model
Prototyping Model
RAD Model(Rapid Application Development)
Evolutionary Software Process Model
Incremental Model
Spiral Model
Win Win Spiral Model
Concurrent Development Model
Component-Based Development Model
V-Model
Software Development Life Cycle

In our company, we are following the “Linear Sequential Model” or “Waterfall Model”. This model suggests a systematic, sequential approach to software development that proceeds through:

Analysis (SRS)
Design
Coding
Testing
Support

Analysis:

The requirements-gathering process focuses specifically on the software. To understand the nature of the programs to be built, the software engineer must understand the information domain for the software, as well as the required function, interface representation, behavior and performance. These requirements are documented and reviewed with the customer.

Design:

The design phase translates the requirements into a representation of the software that can be assessed for quality before coding begins. Design is a multistep process that focuses on four distinct attributes of a program: data structure, software architecture, interface representation and procedural (algorithmic) detail.

Coding:

The design must be translated into a machine readable form and this transformation is called coding. If design is performed in a detailed manner, coding can be done very easily.

Testing:

Once code has been generated, program testing begins. The testing process focuses on the logical internals of the software, ensuring that all statements have been exercised to uncover errors, and on verifying that defined inputs produce actual results that agree with the required results stated in the SRS.

Support:

Software will undergo changes after it is delivered to the customer. Changes occur because errors are found and because the software must be adapted to accommodate changes in its external environment. Software support/maintenance reapplies each of the preceding phases to an existing program rather than a new one.

Software Process

SEI: Software Engineering Institute at Carnegie-Mellon University, initiated by the U.S. Defense Department to help improve software development process.

CMM: Capability Maturity Model, developed by the SEI. It’s a model of 5 levels of organizational maturity that determine the effectiveness in delivering quality software.

Level 1: Initial

The software process is characterized as ad-hoc and occasionally even chaotic. Success depends on individual effort.

Level 2: Repeatable

Basic Project Management Process is established to track cost, schedule and functionality. This process is to repeat earlier successes on projects with similar applications.

Key Process Areas (KPA):

Software Configuration Management

Software Quality Assurance

Software Subcontract Management

Software Project Tracking and Oversight

Software Project Planning

Requirements Management

Level 3: Defined

The software process for both management and engineering activities is documented, standardized, and integrated into an organization wide software process. All projects use a documented and approved version of the organization’s process for developing and supporting software. This level includes all characteristics defined for level 2.

Key Process Areas (KPA):

Peer Reviews

Intergroup Coordination

Software Product Engineering

Integrated Software Management

Training Program

Organization Process Definition

Organization Process Focus

Level 4: Managed

Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled using detailed measures. This level includes all characteristics defined for level 3.

Key Process Areas (KPA):

Software Quality Management

Quantitative Process Management

Level 5: Optimizing

Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies. This level includes all characteristics defined for level 4.

Key Process Areas (KPA):

Process Change Management

Technology Change Management

Defect Prevention

ISO: International Organization for Standardization.

IEEE: Institute of Electrical and Electronics Engineers.

ANSI: American National Standards Institute.

What is the importance of IEEE 829?

This standard is very useful for Software Testing. It explains various Test Documentations and gives formats for Test Plans.

What is testing?

Testing is the execution of a program under controlled conditions, with the intent of finding errors. These controlled conditions can be normal or abnormal. The benefit of testing is that errors are found and corrected, so the program runs successfully and meets the functional and performance requirements stated in the SRS document.

Kinds of Testing

Static Testing

Testing the application without executing the program, for example through code inspections and walkthroughs.

Dynamic Testing

Generating test data and executing the program.

Black Box Testing

Tests are based on the functionality and requirements of the application. Only the functionality of the program is considered: whether given inputs produce the proper outputs. It does not concern itself with how the program works internally to achieve that functionality.

(Or)

Test not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

White Box Testing

Tests are based on knowledge of the internal logic of the application code. Tests are based on coverage of code statements, branches, paths and conditions.

Unit Testing

This is a white-box testing methodology that concentrates on the internal logic of the program. It is done by developers to ensure that the internal logic of the program works as per the requirements.

Integration Testing

Testing of combined ‘parts’ of an application to determine whether they function together correctly. The ‘parts’ can be code modules or individual applications. The goal is to ensure that the data interfaces between the modules/components are error free.

In integration testing, some modules may depend on other modules that are not available. In such cases, you may need to develop test drivers and test stubs.

Driver: A test driver simulates the part of the system that calls the component under test.

Stub: A test stub is a dummy component which simulates the behavior of the real component.
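For illustration, a minimal sketch of a driver and a stub (the `OrderProcessor` and `PaymentGatewayStub` names are made up; no real framework is assumed):

```python
# Component under test: depends on a payment gateway that is not yet available.
class OrderProcessor:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        return "CONFIRMED" if self.gateway.charge(amount) else "DECLINED"

# Stub: a dummy component that simulates the real payment gateway.
class PaymentGatewayStub:
    def charge(self, amount):
        return amount > 0          # canned behaviour instead of a real call

# Driver: simulates the part of the system that calls the component under test.
def test_driver():
    processor = OrderProcessor(PaymentGatewayStub())
    assert processor.place_order(100) == "CONFIRMED"
    assert processor.place_order(0) == "DECLINED"
    print("integration checks passed")

if __name__ == "__main__":
    test_driver()
```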
System Testing

Testing the entire system (all modules are completed and integrated) for functionality and requirements of the client as stated in SRS.

Load Testing

Load testing evaluates system performance under a pre-defined load level. Load testing measures how long it takes a system to perform various program tasks and functions under normal or pre-defined conditions.

Performance Testing

Performance test is done to determine system performance at various load levels. Performance testing is designed to test the run-time performance of software within the context of an integrated system.

Performance testing involves the evaluation of three primary elements:

System environment and available resources.
Workload
System Response time.
Measurements of Performance

Performance is measured by the Response time and Throughput.

Response Time: The amount of time the user must wait for a web system to react to a request.

Throughput: The amount of data transmitted during client-server interactions, measured in kilobytes per second.
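A rough sketch of how both measurements could be captured with only the Python standard library (the URL is a placeholder and the request count is arbitrary):

```python
import time
import urllib.request

URL = "http://example.com/"   # placeholder endpoint

def measure(url, requests=5):
    total_bytes = 0
    start = time.time()
    for _ in range(requests):
        t0 = time.time()
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        # response time: how long the user waits for each request
        print(f"response time: {time.time() - t0:.3f} s")
        total_bytes += len(body)
    elapsed = time.time() - start
    # throughput: data transferred, in kilobytes per second
    print(f"throughput: {total_bytes / 1024 / elapsed:.1f} KB/s")

if __name__ == "__main__":
    measure(URL)
```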

Stress Testing

Stress testing checks the behavior of a system when it is pushed beyond its specified operational limits: heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Recovery Testing

Testing how well the system recovers from crashes, hardware failures, network errors and other unexpected failures.

Security Testing

Testing how well the system protects against unauthorized internal or external access and willful damage.

Functionality Testing

This is Black box type testing geared to functional requirements of an application. This is done by testers.

(Or)

Concentrating specifically on functionality of the application. It is part of System testing.

Regression Testing

Re-testing of the application after bug fixing or enhancements, to ensure that changes have not propagated unintended side effects and additional errors.

Sanity Testing/Smoke Testing

This is the initial testing performed on a new build to check whether it is stable enough for further, more detailed testing.

User Interface Testing

This testing checks the user interface for cosmetic errors and verifies the look and feel of the application against the UI design document.

User Acceptance Testing

This is system-level testing, generally done in the client's environment with the client's realistic data to find errors. A series of acceptance tests is conducted to enable the customer to validate all requirements. Acceptance testing can be done over a period of weeks or months.

(Or)

Final testing based on specifications of the end-user or customer, or based on use by end-user/customers over some limited period of time.

Alpha Test

Alpha testing is conducted at the developer's site by a customer to check that the application meets the customer's requirements.

(Or)

Alpha test is conducted at the developer’s site by a customer to validate all the requirements of the client.

Beta Test

Beta test is conducted at one or more customer sites by the end-user of the software to validate all the requirements of the client.

Ad-hoc Testing

This testing is done randomly, without pre-written test cases, with the intent of finding errors.

Compatibility Testing

Testing how well the software performs in a particular hardware/software/network/operating system etc., environment.

(Or)

Compatibility testing determines if an application, under supported configurations, performs as expected with various combinations of hardware and software flavors and releases.

For example, compatibility testing would determine which manufacturers and server brands, under the same configuration, are compatible with the web system.

Configuration testing

Configuration testing is designed to uncover errors related to various software and hardware combinations.

Configuration testing of web system involves the testing of various supported server software and hardware setups, browser settings, network connections, TCP/IP stack setups and so on.

For example: Configuration testing might validate that a certain web system installed on a dual-processor computer operates properly.

Usability Testing

Testing the application for user-friendliness, i.e., the effort required to learn and operate the application. Programmers and testers are usually not appropriate as usability testers; this is best done by the targeted end users.

Scalability Testing

Scalability is the ability of a system to handle an increased load without severely degrading its performance or reliability; scalability testing verifies this. Web site scalability is defined by the difference in performance between a site handling a small number of clients and a site handling a larger number of clients, as well as the ability to maintain the same level of performance by simply adding resources to the installation.

There are many factors that affect the scalability of a web application, including server hardware configuration, networking equipment and bandwidth, server operating system, volume of back-end data, and so on.

Software Quality Assurance

Software QA involves the entire software development process, monitoring and improving the process, making sure that any agreed upon standards and procedures are followed. It is oriented to prevention. QA consists of the auditing and reporting functions of management.

Activities of QA:

Assist in monitoring the implementation of metrics process of the company.
Assist in performing function point analysis and estimate cost, schedule, and effort accordingly.
Assist in conducting reviews, audits and base lining of artifacts.
Implementing Visual SourceSafe (VSS) for configuration management.
Assist in taking training and orientation sessions on CMM for colleagues/team members in the Business Management System of the Organization.
Quality Control

Quality Control involves the series of inspections, reviews and tests used throughout the software development process to ensure each work product meets the requirements placed upon it. It is oriented to detection.

QC Activities:

Develop, implement and execute test methodologies and plans to ensure software product quality.
Developing test strategies, test plans and test cases to ensure quality based on user requirements document, high level and detailed designs.
Responsible for integration, system, acceptance and regression testing.
Responsible for test analysis reporting and follow up of findings through closure.
Creating test plans, test reports, and test manuals.
Design, set up and maintain test execution environment.
Software Quality

Quality software is reasonably bug free, delivered on time and within the budget, meets the requirements and is maintainable.

Measuring Quality

Correctness: A program must operate correctly. Correctness is the degree to which the software performs its required function. It is measured in defects per KLOC (Kilo Lines of Code).

Maintainability: The effort required to locate and fix an error in the program, or to adapt it when its environment changes.

Integrity: This measures the system's ability to withstand attacks on its security. Attacks can be made on programs, data and documents.

Usability: Usability is an attempt to quantify user-friendliness.

Software Configuration Management

Software configuration management (SCM) is an umbrella activity that is applied throughout the software process. Configuration management is the art of identifying, organizing and controlling modifications to the software that is being built by the programming team. Because change can occur at any time, SCM activities are developed to identify change, control change, ensure that change is properly implemented, and report changes to others who may have an interest. The goal is to maximize productivity by minimizing mistakes.

Good Code

Good code is code that works, is bug free, and is readable and maintainable.

Good Design

‘Design’ could refer to many things, but often refers to ‘functional design’ or ‘internal design’. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable. For programs that have a user interface, it is often a good idea to assume that the end-user will have little computer knowledge and may not read a user manual or online help. Some common rules-of-thumb:

The program should act in a way that least surprises the user

It should always be evident to the user what can be done next and how to exit.

The program shouldn’t let the users do something stupid without warning them.

Verification

It refers to a set of activities that ensure that the software correctly implements a specific function.

This can be done with checklists, walkthroughs and inspection meetings.

Verification involves reviews and meetings to evaluate documents, plans, code, requirements and specifications.

Walk-Through

A Walk-Through is an informal meeting for evaluation or informational purposes.

No preparation is required.

Inspection

An Inspection is a more formalized meeting. It typically reviews a document such as a requirements specification or a test plan; the purpose is to find problems and see what is missing, not to fix anything. The result of the inspection meeting should be a written report.

Validation

It refers to the set of activities that ensure that software that has been built is traceable to customer requirements. Validation involves actual testing and takes place after verifications are completed.

Why does software have Bugs?

Software has bugs, due to

Miscommunication

Software Complexity

Programming errors

Changing requirements

Time Pressures

Poorly documented code

What is Test Data?

Test data is the minimal data required to execute a program under testing.

What is Test Plan?

A test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. Its contents include scope, software/hardware requirements, kinds of testing, effort, deliverables, automation, risk factors, exit criteria and assumptions.

(Or)

What is the test plan and explain the contents of that?

“The test plan serves as the basis for all testing activities throughout the testing life cycle. Being an umbrella activity, it should reflect the customer's needs in terms of milestones to be met, the test approach (test strategy), resources required, etc. The plan and strategy should give the customer clear visibility into the testing process at any point in time.”

Functional and performance test plans, if developed separately, give more clarity to functional and performance testing. A performance test plan is optional if the application does not have any performance requirements.

What is Test Case?

A Test Case is a document that describes an input, action or event and an expected response to determine if a feature of an application is working correctly.

A Test case should contain particulars such as Test Case ID, Description, Test Data and Expected Results.
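As an illustration, such a record could be kept as a simple structure (the field values below are made-up examples):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_case_id: str
    description: str
    test_data: str
    expected_result: str

# Hypothetical example record
login_tc = TestCase(
    test_case_id="TC_LOGIN_001",
    description="Login with a valid user name and password",
    test_data="user=demo, password=demo123",
    expected_result="Home page is displayed",
)
print(login_tc)
```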

Test Case Design Techniques

What is Equivalence Partition?

In this approach, classes of inputs are categorized for product or function validation. This usually does not involve combinations of inputs but rather a single representative value per class.

A single value in an equivalence partition is assumed to be representative of all other values in the partition.

The aim of equivalence partition is to select values that have equivalent processing; one can assume that if a test passes with the representative value then it should pass with all other values in same partition.

Ex:
Numeric values with negative, positive and zero values

Strings that are empty or non-empty

Lists that are empty or non-empty

Data files that exist or not, and are readable/writable or not
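A small sketch of equivalence-partition tests, assuming a hypothetical `classify_age` function whose input space splits into four partitions:

```python
import unittest

def classify_age(age):
    """Hypothetical function under test: partitions are <0, 0-17, 18-64, >=65."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

class EquivalencePartitionTests(unittest.TestCase):
    # One representative value per partition stands in for all values in it.
    def test_negative_partition(self):
        with self.assertRaises(ValueError):
            classify_age(-5)

    def test_minor_partition(self):
        self.assertEqual(classify_age(10), "minor")

    def test_adult_partition(self):
        self.assertEqual(classify_age(40), "adult")

    def test_senior_partition(self):
        self.assertEqual(classify_age(80), "senior")

if __name__ == "__main__":
    unittest.main()
```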

What are Boundary Value Analysis and Equivalence Partition?

Equivalence partitioning is based on the premise that the inputs and outputs of a component can be partitioned into classes that, according to the component's specification, will be treated similarly by the component. The assumption is that similar inputs evoke similar responses. A single value in an equivalence partition is taken to be representative of all other values in the partition. This addresses the problem that it is not possible to test every input value: the aim is to select values that have equivalent processing, so that if a test passes with the representative value, we can assume it would pass with all other values in the same partition. Equivalence partitions may include combinations such as:

Valid vs. Invalid input and output values

Numeric values with negative, positive and zero values.

Strings that are empty or non-empty

Lists that are empty or non-empty

Data files that exist or not, and are readable/writable or not

Date years that are pre-2000 or post-2000, leap years or non-leap years (a special case is 29 February 2000, which requires special processing of its own)

Dates that fall in 28-, 29-, 30- or 31-day months

Days that fall on workdays or outside office hours

Type of data file, e.g. text, formatted data, graphics, video or sound

File source/destination, e.g. hard drive, floppy drive, CD-ROM, network

An equivalence partition is a set of test cases where the successful execution of any test case in the class implies the success of all the test cases in that class. For one set of correct (valid) test cases, two sets of incorrect (invalid) test cases are typically taken, in order to verify that the system behaves correctly for the valid cases and rejects the invalid ones.

Boundary value analysis extends equivalence partitioning to include values around the edges of the partitions. As with equivalence partitioning, we assume that sets of values are treated similarly by components. However, developers are prone to making errors in the treatment of values on the boundaries of these partitions.

For example, elements of a list may be processed similarly, and they may be grouped into a single equivalence partition. However, in processing the elements, the developer may not have correct processing for either the first or last element of the list.

Boundary-values are usually the limits of the equivalence classes.

Example:

Monday and Sunday for weekdays

January and December for months

32767 and –32768 for 16-bit integers

Top-left and bottom-right cursor position on a screen

First line and last line of a printed report

1 January 2000 for two digit year processing

Strings of one character and maximum length strings
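For instance, a sketch of boundary-value tests around the 16-bit integer limits listed above (the `store_int16` function is hypothetical):

```python
import unittest

def store_int16(value):
    """Hypothetical function under test: accepts only 16-bit signed integers."""
    if not -32768 <= value <= 32767:
        raise OverflowError("value out of 16-bit range")
    return value

class BoundaryValueTests(unittest.TestCase):
    # Test values exactly on and just outside each boundary of the partition.
    def test_lower_boundary(self):
        self.assertEqual(store_int16(-32768), -32768)
        with self.assertRaises(OverflowError):
            store_int16(-32769)

    def test_upper_boundary(self):
        self.assertEqual(store_int16(32767), 32767)
        with self.assertRaises(OverflowError):
            store_int16(32768)

if __name__ == "__main__":
    unittest.main()
```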

What is Review?

Reviews are to ensure that the work products under review meet the requirements.

What is Test Report?

A Test Report is a document that describes all the test cases that were executed on the build by the testers. Its contents are Test Case ID, Description, Test Data, Expected Result, Actual Result, Status (pass/fail), Severity of the bug (H/M/L) and remarks.

What is Test Case Design?

A good test case is one that is likely to identify an as-yet-undiscovered error.

We must design test cases that have the highest likelihood of finding the most errors with a minimum amount of time and effort.

Identify test cases for each module

Write test cases in each executable step

Design more functional test cases

Clearly identify the expected results for each test case

Design the test cases for workflow so that they follow a sequence through the web application during testing. For example, for a mail application such as Yahoo, testing has to start with the registration process for new users, then signing in, composing mail, sending mail, etc.

Security is high priority in web testing. Hence document enough test cases related to application security.

Develop a traceability matrix to understand the test case coverage of the requirements (a simple form is sketched below).
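As an illustration only (requirement and test case IDs are made up), a traceability matrix can be kept as a simple mapping from requirements to the test cases that cover them:

```python
# Requirement ID -> test cases that cover it (IDs are illustrative).
traceability_matrix = {
    "REQ_001_registration": ["TC_001", "TC_002"],
    "REQ_002_sign_in":      ["TC_003"],
    "REQ_003_compose_mail": ["TC_004", "TC_005"],
    "REQ_004_send_mail":    [],          # gap: no coverage yet
}

# Requirements with an empty list reveal gaps in test case coverage.
uncovered = [req for req, cases in traceability_matrix.items() if not cases]
print("Requirements without test coverage:", uncovered)
```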

What is Use Case?

A use case is a scenario that describes how software is to be used in a given situation.

What is System Test Plan?

This document forms the basis for system testing. The plan focuses on the testing that will be conducted in the test bed environment. This document, along with the SRS and the detailed design document, forms the basis for the test case document.

Test Strategy (Or) Testing process in your company?

In our company, QC is involved from the project initiation meeting along with the development team, and the tester for each project is identified. All these members participate in the knowledge transfers and discussions.

The SRS and design documents are the inputs to the test plan. After the test plan is prepared, it is sent to the project leader and team leads for review. The review comments are incorporated into the test plan.

Re-review is conducted if required. QA team conducts an audit and then the plan is base lined.

These SRS, design and test plan documents then help in the preparation of test cases. Our company-approved templates are used for test case documentation. A peer review of the test cases is done within the department. The test cases are then sent to the development team for review, to check that they are sufficient and that the functionality of the application is fully covered. Based on the review report sent by the technical team, the test cases are updated or new cases are added and sent for re-review if required. Before test execution starts, the test bed environment is set up.

When a particular module is ready for testing, the development team will place the build in VSS and intimate it to the testing team. The tester will deploy the release on the test bed environment and start the execution of test cases. After the execution is completed, the tester will post the bugs into the defect manager tool.

A test report is prepared with the results of the tests that were conducted on the build. These are discussed with the development team, published in the report, and updated if needed.

Regression testing is conducted on the application after the fixes are made, to verify that the fixes have not caused any adverse effects.

If completion criteria are satisfied, then all the test deliverables are submitted to the test lead. This is the way we do testing in our company.

Bug Life Cycle (or) Defect Tracking Flow Chart

Once a bug is found, the testing team assigns it to the development team using an automated tool such as Defect Manager. When it is posted for the first time, the status of the bug is set to “OPEN” automatically.
After fixing the bug, the development team re-assigns it to the testing team (tester) by changing the status to “FIXED”.
During regression testing the testing team checks the same bug; if it is fixed, they close it by changing the status to “CLOSED”, otherwise they re-assign it to the development team by changing the status to “RE-OPEN”, and the cycle continues.
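A minimal sketch of that status flow (statuses follow the cycle above; any tool-specific fields are omitted):

```python
# Allowed status transitions in the bug life cycle described above.
TRANSITIONS = {
    "OPEN":    {"FIXED"},
    "FIXED":   {"CLOSED", "RE-OPEN"},
    "RE-OPEN": {"FIXED"},
    "CLOSED":  set(),
}

class Bug:
    def __init__(self, bug_id):
        self.bug_id = bug_id
        self.status = "OPEN"          # set automatically when first posted

    def move_to(self, new_status):
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"{self.status} -> {new_status} is not allowed")
        self.status = new_status

bug = Bug("DEF-101")
bug.move_to("FIXED")      # developer fixes and re-assigns to testing
bug.move_to("RE-OPEN")    # regression test shows it is not really fixed
bug.move_to("FIXED")
bug.move_to("CLOSED")     # verified during regression testing
print(bug.bug_id, bug.status)
```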
What is Severity & Priority of a Bug? What is the difference between them?

Severity indicates the impact of each defect on the testing effort or on the users and administrators of the application under test. Priority indicates how urgently the defect should be fixed. Severity is used by developers and management as the basis for assigning the priority of work on defects.

Critical (Show stopper): An item that prevents further testing of the product or function under test.

For example, an installation process that does not load a component, a general protection fault (GPF) or another situation that freezes the system (requiring a reboot), or a missing menu option or security permission required to access a function under test.

High: An item that does not function as expected or designed.

For example, inaccurate calculations, the wrong fields being updated, the wrong rule, phrase or data being retrieved, an update operation that fails to complete, or slow system turnaround performance.

Medium: Annoyances that do not conform to standards and conventions.

For example, incorrect hot-key operation, an error condition which is not trapped, matching visual and text links which lead to different end points.

Low: Cosmetic issues which are not crucial to the operation of the system.

For example, misspelled or ungrammatical text; inappropriate, inconsistent or incorrect formats such as text font, size, alignment, color, etc.

Classifications of the Bugs

Testing

Missing test cases: Test cases are not addressing all design aspects.
Ex: Test case to test the call back functionality is missing.

Inadequate/Incomplete Test cases: Test action pre-requisite is incomplete or inadequate.
Ex: Some necessary validations might not be addressed in the test cases. Inadequate test coverage.

Ambiguous Test cases: Test cases not clear to the re-viewer.
Ex: If the test case is ‘click the key’, not specifying which key to be pressed.

Deviation from standards: Test setup described is unrealistic or not adequate to conduct the test cases.
Editorial (spelling/grammar mistakes): Any alignment or spelling mistakes in the labels, etc.
Ex: ‘Assumtions’ instead of ‘Assumptions’

Incorrect Test cases: Test functionality and Test case not matching.
Incorrect Expected Result: If the expected result has been captured incorrectly in the test cases.
Ex: The expected result ‘should display a message box with an OK button’ is captured incorrectly as ‘should display a decision/query box with Yes and No buttons’.

Fields not properly addressed
Ex: Instead of capturing the fields as ‘ambiguous’, capturing it as ‘Not clear’.

Duplicate/Repetition of test cases: If the same test cases are repeated for different screens having the same parent screen.
Ex: If the screen differs based on the selection we make in the parent screen, capturing of the details in the parent screen for all the child screens.

Entry Criteria:

Test bed environment

Application under test

Base line test cases

Code review report

Unit test report

Unit test sign off sheet

Exit Criteria:

All the test cases should be executed on the test bed environment.

All the defects reported in the test report should be closed.

There should not be any open high-severity or medium-severity bugs.

Low severity bugs should be tracked to closure.

Input Documents for Testing

SRS (Software Requirement Specifications)

Detailed Design

Test Plan

Test Case

Configuration Item

An item that is eligible for configuration control, i.e., checked into the configuration management tool.

Ex: VSS (Visual SourceSafe)

Configurable Items in Testing

Test Plan

Test Cases

Review Report (Test Cases)

Test Report

What is Web Testing?

Web testing is the testing of internet or intranet web applications where the client interface is a web browser. The browser can be Internet Explorer, Netscape Navigator, Opera, etc.

Approach:

Any testing process will start with the planning of test (Test Plan), building a strategy (how to test), preparation of test cases (what to test), execution of test cases (testing) and end in reporting the results (defects).

In practice the process is not different; what changes are the priority areas that need to be set for web testing. Key focus areas such as compatibility, navigation, user interaction, usability, performance, scalability, reliability and availability should be considered during the testing phase.

What is the difference between Client Server Testing & Web Testing?

Client Server Testing vs. Web Testing
Transactions: Client/server transactions are limited; web transactions are unpredictable and unlimited.
User behavior: In client/server systems user behavior is predictable and controllable; in a web system it is unpredictable and uncontrollable.
System variables: Client/server systems deal with the LAN and centralized hardware/software; web systems deal with firewalls, routers, hosting companies and caching systems.
Failures: Client/server failures are noticed internally; web failures become known externally.
Software Metrics

“Metrics provide information regarding the performance of a project/product.”
The data collected during the different phases is used for managing the project and product development.
It can be used by project/product development personnel for estimation.
Collected metrics are stored in the process/metrics database.
The Project Manager uses past project data to estimate size, effort, schedule, resources and cost for the current project as well as its QA activities.
The Project Leader uses data from similar projects to arrive at a productivity factor in order to carry out the estimation.
Measurements are captured at the end of each phase.

Inputs for Metrics

Timesheet
Project Management Plan
Project status Report
Project Closure Report
Review Report
Test Case
Test Report
Training Plan
Training Feedback Form
User Requirement Document
Internal Quality Audit
Audit Calendar
Configuration Status Accounting
Metrics Plan

Metrics plan describes Organization’s goals for project/product quality, productivity, product development cycle schedule, effort, Cost, Size, Measurement, Project Standard Software process, Data capturing method, Data Control Points etc.

Testing Metrics
Defect Density

Formula: (Total No. Of Defects)/size*100

Units: In Nos. (%)

Periodicity to calculate: Project Closure

Residual Defect Density

Formula: No. of defects found after system testing /Size (KLOC)

Units: In Nos. (%)

Periodicity to calculate: Project Closure

Effort Variance

Formula: 100*(Actual Person-hours expended - Estimated hours)/ Estimated person-hours

Units: In Nos. (%)

Periodicity to calculate: Phase-wise

Size Variance

Formula: 100*(Actual Size- Estimated Size)/ Estimated Size

Units: FP (Functional Points) or KLOC (Kilo Lines of Code)

Periodicity to calculate: End of every phase & project closure
Schedule Variance

Formula: 100*(Actual Elapsed Duration- Estimated Duration)/ Estimated Duration

Units: Working days

Periodicity to calculate: Phase-Wise

Productivity

Formula: Size/Effort

Units: FP/Person days (days=working days)

Periodicity to calculate: Project Closure
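A quick sketch that evaluates these formulas on made-up figures (the numbers are purely illustrative):

```python
def defect_density(total_defects, size_kloc):
    # (Total no. of defects / size) * 100, size in KLOC
    return total_defects / size_kloc * 100

def effort_variance(actual_hours, estimated_hours):
    # 100 * (actual person-hours - estimated person-hours) / estimated person-hours
    return 100 * (actual_hours - estimated_hours) / estimated_hours

def schedule_variance(actual_days, estimated_days):
    # 100 * (actual elapsed duration - estimated duration) / estimated duration
    return 100 * (actual_days - estimated_days) / estimated_days

def productivity(size_fp, effort_person_days):
    # Size / effort, in FP per person-day
    return size_fp / effort_person_days

# Illustrative figures only
print(f"Defect density:    {defect_density(6, 30):.1f} %")        # 6 defects in 30 KLOC
print(f"Effort variance:   {effort_variance(1150, 1000):.1f} %")  # 1150 h spent vs 1000 h estimated
print(f"Schedule variance: {schedule_variance(66, 60):.1f} %")    # 66 working days vs 60 estimated
print(f"Productivity:      {productivity(120, 80):.2f} FP/person-day")
```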

Testing Tips

Testing is the process of identifying defects, where a “defect” is any variance between actual and expected results.

Editable fields checking and validation:

Valid/invalid characters/strings, data in all editable fields
Valid minimum/maximum/mid range values in fields
Null strings(no data) in required fields
Record length(character limit) in text/memo fields
Cut/Copy/Paste into/from fields when possible
Non-editable fields checking:

Check all text/spelling in warnings and error messages/dialogs
Invoke/check all menu items and their options
Application Usability

Appearance and layout (placement and alignment of objects on screen)
User interface test (open all menus, check all items)
Basic functionality checking (file + open + save, etc.)
Right mouse click sensitivity
Resize/min/max/restore application windows (check minimum application size)
Scroll ability when applicable (scroll bars, keyboard, auto scrolling)
Keyboard and mouse navigation, highlighting, dragging, drag/drop
Print in landscape and portrait modes
Check F1, What’s This, Help menu
Short-cut and accelerator keys
Tab key order and Navigation in all dialog boxes and menus
Basic Compatibility:

16 bit OS (win 3.x, win95, OS/2, want 3.x)

32 bit OS (win95/98/2000/NT) UNIX

What is the base line for Performance Testing?

Functional and performance document or the SRS document.

What is the Load Balance of the application?

Balancing the load across different servers.

Ex: Suppose a server has the capacity to handle 100 users at a time. If the number of users exceeds 100, the additional users are automatically transferred to another server to balance the load (see the sketch below).
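A toy sketch of that idea (server names and capacities are made up):

```python
# Each server handles at most `capacity` concurrent users; extra users spill
# over to the next server that still has free capacity.
servers = [
    {"name": "server-1", "capacity": 100, "users": 0},
    {"name": "server-2", "capacity": 100, "users": 0},
]

def assign_user(servers):
    for server in servers:
        if server["users"] < server["capacity"]:
            server["users"] += 1
            return server["name"]
    raise RuntimeError("all servers are at full capacity")

# The 101st user is automatically routed to server-2.
assignments = [assign_user(servers) for _ in range(101)]
print(assignments[99], "->", assignments[100])   # server-1 -> server-2
```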

What is your Bug clearance ratio?

(Ratio between the valid and invalid bugs)

96:100
How many Test cases u can prepare in a day?

60-70 Test cases

How many Test Cases you can Execute in a day?

70-90 Test cases

Which tool are you using for Configuration Management?

Visual Source Safe (VSS) from Microsoft

What is Six Sigma?

Six Sigma is a customer-focused philosophy for improving quality that was first developed by Motorola in 1980’s.

What is Process & Procedure?

A process is a flow of tasks; it defines what is executed, and it is reviewed with the intent to improve it.

A procedure describes how the process is implemented, i.e., the steps designed to perform an intended function; it is reviewed with the intent to increase its quality.

How do you rate yourself in Testing?

7-8
We test the application against the customer requirements. How do you proceed if there are no requirements?

Based on target users.

What is the best tester to developer ratio?

A common answer is a tester-to-developer ratio of roughly 40%.

Some companies have a 1:1 ratio (at one point, Microsoft’s OS development team was an example of that -- in fact they also had an integration team, which meant that there were more testers than developers). There are companies where they have only one or two integration testers and all the developers are required to run a suite of integration tests before they check in their code, to make sure that they have not introduced defects. In these environments, the ratio can be 1:25. Some locations even have no SQA team or department at all, so the ratio is 0:1.

Which approach of integration testing are you following?

Top – down

Have you prepared test plan?

No, but I was involved in test plan preparation.

What makes a good test engineer?

A good test engineer has a ‘test to break’ attitude, an ability to take the point of view of the customer, a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developer’s point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.

What if there isn’t enough time for thorough testing?

Use risk analysis to determine where testing should be focused.

Which functionality is most important to the project’s intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest financial impact on users?
What kinds of problems would cause the worst publicity?
What kinds of tests could easily cover multiple functionalities?

What can be done if requirements are changing continuously?

A common problem and a major headache

Work with the project’s stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible
It is helpful if the application’s initial design allows for some adaptability so that later changes do not require redoing the application from scratch.
If the code is well commented and well documented this makes changes easier for the developers.
Negotiate to allow only easily implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
Focus less on detailed test plans and test cases and more on ad-hoc testing.
What is the difference between ISO & CMM?

ISO 9000 Standards vs. SEI-CMM
ISO 9000 is a generic standard; CMM is a maturity model.
ISO is applicable to all kinds of organizations; CMM applies only to software organizations.
ISO contains 20 clauses; CMM contains 18 Key Process Areas (KPAs).
ISO documentation is called the Quality Manual; CMM documentation is called work products and artifacts.
An ISO certification audit is like an examination; the final CMM assessment is collaborative.
ISO certification is a pass/fail outcome; the result of a CMM assessment is a quantitative score of the maturity of the software development process.
What kinds of Testing you perform?
Explain your Quality Effort in your company?
Describe Quality as you understand it?
What is the difference between Integration Testing & System Testing?
What documents would you need for QA?
Explain your involvement in the Test Plan for your project?
What is the difference between Management system and Quality?
What is Conformance & Non Conformance?
What is Corrective Action & Preventive Action?
What is Quality Monitoring & Quality Measurement?
What is Audit?
How do you rate yourself in achieving deadline?