Thursday, March 17, 2011

QA - Quality Assurance and Software Testing

What is quality assurance? Definitions and types of software testing
What is Quality Assurance ?


Quality Assurance makes sure the project will be completed based on the previously agreed specifications, standards and functionality required without defects and possible problems. It monitors and tries to improve the development process from the beginning of the project to ensure this. It is oriented to "prevention".

When should QA testing start in a project ? Why?

QA is involved in the project from the beginning. This helps the teams communicate and understand problems and concerns early, and it also gives time to set up the testing environment and configuration. Actual testing, on the other hand, starts after the test plans are written, reviewed, and approved based on the design documentation.

What is Software Testing ?

Software testing is oriented to "detection". It is examining a system or an application under controlled conditions. It involves intentionally trying to make things go wrong in order to determine whether things happen when they should not, or fail to happen when they should.

What is Software Quality ?

Quality software is reasonably bug free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.

What is Software Verification and Validation ?

Verification is a preventive mechanism for detecting possible failures before testing begins. It involves reviews, meetings, and inspections to evaluate documents, plans, code, and specifications. Validation occurs after verification: it is the actual testing of the product against the functionality and the specifications to find defects.

What is a Test Plan ?

Test Plan is a document that describes the objectives, scope, approach, and focus of a software testing effort.

What is a Test Case ?

A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
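The particulars listed above can be sketched as a simple record. Here is a minimal illustration in Python; the field names simply mirror the list above, and the login example is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test case document, using the particulars listed above."""
    identifier: str
    name: str
    objective: str
    setup: str                      # test conditions / setup
    input_data: str                 # input data requirements
    steps: list = field(default_factory=list)
    expected_result: str = ""

# Hypothetical example: a login test case
tc = TestCase(
    identifier="TC-001",
    name="Valid login",
    objective="Verify a registered user can log in",
    setup="User 'alice' exists with password 'secret'",
    input_data="username=alice, password=secret",
    steps=["Open login page", "Enter credentials", "Click Login"],
    expected_result="User is redirected to the dashboard",
)
```

In practice these same fields usually become the columns of a spreadsheet or the inputs of a test-management tool.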

What is Good Software Coding ?

Good code works according to the requirements and is bug free, readable, expandable, and easily maintainable.

What is a Good Design ?

In a good design, the overall structure is clear, understandable, easily modifiable, and maintainable. It works correctly when implemented, and its functionality can be traced back to customer and end-user requirements.

Who is a Good Test Engineer ?

A good test engineer has the ability to think the unthinkable, a test-to-break attitude, a strong desire for quality, and attention to detail.

What is Walkthrough ?

A walkthrough is a quick, informal meeting held for evaluation purposes.

What is Software Life Cycle ?

The Software Life Cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

What is Software Inspection ?

The purpose of an inspection is to find defects and problems, mostly in documents such as test plans, specifications, test cases, and code. Inspections find and report problems but do not fix them. Inspection is one of the most cost-effective methods of ensuring software quality. Many people can join an inspection, but normally one moderator, one reader, and one note taker are mandatory.

What are the benefits of Automated Testing ?

Automated testing is very valuable for long-term and ongoing projects. You can automate some or all of the tests that need to be run repeatedly or that are difficult to test manually. It saves time and effort, and it makes testing possible outside working hours, such as overnight. Automated tests can be reused by different people many times in the future. In this way you also standardize the testing process and can depend on the results.
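As an illustration of the kind of test that is worth automating, here is a minimal sketch using Python's built-in unittest module; the apply_discount function is an invented stand-in for real application code:

```python
import unittest

def apply_discount(price, percent):
    """Invented function under test (a stand-in for application code)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    """Automated checks that can be re-run unattended after every change."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Run the suite programmatically, as a scheduler or CI job would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountRegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite runs without human input, it can be scheduled nightly or triggered on every code change, which is exactly the repeated, off-hours execution described above.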

What do you imagine are the main problems of working in a geographically distributed team ?

The main problem is communication. Getting to know the team members and sharing as much information as needed is very valuable for solving problems and concerns. Increasing written communication as much as possible and setting up regular meetings also helps reduce miscommunication problems.

What are the common problems in Software Development Process ?

Poor requirements, an unrealistic schedule, inadequate testing, miscommunication, and additional requirement changes after development begins.





What are Software Testing Types ?


* Black box testing : You don't need to know the internal design in detail or have good knowledge of the code for this test. It is based mainly on functionality, specifications, and requirements.
* White box testing : This test is based on detailed knowledge of the internal design and code. Tests are performed for specific code statements and coding styles.
* Unit testing : The most micro scale of testing to test specific functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code, may require developing test driver modules or test harnesses.
* Incremental integration testing : Continuous testing of an application as new functionality is added. Requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. Done by programmers or by testers.
* Integration testing : Testing of combined parts of an application to determine if they function together correctly. It can be any type of application which has several independent sub applications, modules.
* Functional testing : Black box type testing to test the functional requirements of an application. Typically done by software testers but software programmers should also check if their code works before releasing it.
* System testing : Black box type testing that is based on overall requirements specifications. Covers all combined parts of a system.
* End to End testing : It's similar to system testing. Involves testing of a complete application environment similar to real world use. May require interacting with a database, using network communications, or interacting with other hardware, applications, or systems.
* Sanity testing or smoke testing : An initial testing effort to determine if a new software version is performing well enough to begin major testing. For example, if the new software is crashing frequently or corrupting databases, then it is not a good idea to start major testing before these problems are solved.
* Regression testing : Re-testing after software is updated to fix problems. The challenge can be determining what needs to be tested, and all the interactions of the functions, especially near the end of the software cycle. Automated testing can be useful for this type of testing.
* Acceptance testing : The final testing, done based on the agreements with the customer.
* Load / stress / performance testing : Testing an application under heavy loads, such as simulating very heavy traffic in a voice or data network, or on a web site, to determine at what point the system starts causing problems or fails.
* Usability testing : Testing to determine how user friendly the application is. It depends on the end user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
* Install / Uninstall testing : Testing of full, partial, or upgrade install / uninstall processes.
* Recovery / failover testing : Testing to determine how well a system recovers from crashes, failures, or other major problems.
* Security testing : Testing to determine how well the system protects itself against unauthorized internal or external access and intentional damage. May require sophisticated testing techniques.
* Compatibility testing : Testing how well software performs in different environments (particular hardware, software, operating system, network environment, etc.), such as testing a web site in different browsers and browser versions.
* Exploratory testing : Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
* Ad-hoc testing : Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
* Context driven testing : Testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life critical medical equipment software would be completely different than that for a low cost computer game.
* Comparison testing : Comparing software weaknesses and strengths to competing products.
* Alpha testing : Testing of an application when development is nearing completion. Minor design changes may still be made as a result of such testing. Typically done by end users or others, not by programmers or testers.
* Beta testing : Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end users or others, not by programmers or testers.
* Mutation testing : A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (defects) and retesting with the original test data/cases to determine if the defects are detected. Proper implementation requires large computational resources.
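The last entry, mutation testing, is easy to demonstrate in miniature. The sketch below (plain Python, invented example) plants a single defect and checks whether the existing test data detects it:

```python
# Mutation testing in miniature: deliberately introduce a code change
# (defect) and check whether the existing test data catches it.

def is_adult(age):
    return age >= 18          # original code

def is_adult_mutant(age):
    return age > 18           # mutant: >= changed to >

# Existing test data: (input, expected output) pairs.
test_data = [(17, False), (18, True), (30, True)]

def suite_kills(func):
    """Return True if any test case fails against func (mutant detected)."""
    return any(func(age) != expected for age, expected in test_data)

print(suite_kills(is_adult))         # the original passes all cases
print(suite_kills(is_adult_mutant))  # the boundary case age=18 kills the mutant
```

A test suite that "kills" most mutants gives some evidence that its test data is actually exercising the code; real mutation tools generate thousands of such mutants automatically, which is why the technique needs large computational resources.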

Thursday, February 24, 2011

Actual interview questions

Here are some actual interview questions for a QA tester, and how the person answered them. I thought this would give us some idea of what kinds of questions to expect and how to answer them.

1. Can you tell me about yourself?

Answer: In my QA career, I have been working on various system platforms and operating systems like Windows 95, Windows 2000, Windows XP and UNIX. I have tested applications developed in Java, C++, Visual Basic and so on. I have tested Web-based applications as well as client server applications.
As a QA person, I have written Test Plans and Test Cases, and attended walkthrough meetings with the Business Analysts, Project Managers, Business Managers, and QA Leads. I have attended requirement review meetings and provided feedback to the Business Analysts. I have worked with different databases like Oracle and DB2, and wrote SQL queries to retrieve data from the database. As far as different types of testing are concerned, I have performed Smoke Testing, Functional Testing, Backend Testing, Black Box Testing, Integration Testing, Regression Testing, and UAT (User Acceptance Testing). I have also participated in Load Testing and Stress Testing.
I have logged defects as they were found using ClearQuest and TestDirector. Once the defects were fixed, I retested them and, if they passed, closed them. If the defects were not fixed, I reopened them. I have also attended defect assessment meetings as necessary.
In the meantime, a continuous interaction with developers was necessary.
This is pretty much what I have been doing as a QA person.

2. What did you do in your last project?

Answer: In my last project, the application was a web-based application developed on the Java platform. As a QA person, I wrote Test Plans from the requirement documents and Use Cases. I performed Smoke Testing, Functional Testing, Backend Testing, Black Box Testing, Integration Testing, Regression Testing, and UAT (User Acceptance Testing). I also participated in Load Testing and Stress Testing. I attended several walkthrough meetings for requirement reviews and provided feedback to the Business Analysts. Mostly, I was involved in backend testing, which required writing SQL queries directly against the database.
Besides these, I logged defects using ClearQuest. Once the defects were fixed, I retested them and, if they passed, closed them. If the defects were not fixed, I reopened them.

3. Have you written Test Plan? What is a Test Plan? What does it include?
Answer: Yes.

What is a Test Plan?

Answer: A Test Plan is a document that describes the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task (roles and responsibilities), and any risks and their solutions.


What does it include?

Answer: A Test Plan includes a Heading, Revision History, Table of Contents, Introduction, Scope, Approach, Overview, the different types of testing that will be carried out, what software and hardware will be required, issues, risks, assumptions, and a sign-off section.

4. Have you written Test Cases? Answer: Yes.
What is a Test Case? What does it include?

Answer: A Test Case is a document that describes, step by step, how to test the application. A Test Case includes Test Case ID, Steps Description, Expected Output, Actual Output, Pass/Fail, and Remarks. (Remember, this is NOT a part of the Test Plan. It is a separate document, usually written in Excel. Some companies use Rational TestManager or TestDirector; companies that do not have these tools use an Excel sheet.)




Did you use any tools to write Test Cases?

Answer: Yes. I have used TestDirector (now called QualityCenter) and Rational TestManager to write Test Cases. However, in most of the companies, I used Excel sheet.


How many Test Cases did you write in your last project?
Answer: I wrote about 1100 Test Cases in my last project. (A reasonable number of Test Cases varies from 500 to several thousand; about 1100 test cases can be completed over a 6-month project.)

What document did you refer to write the Test Cases?

Answer: The requirement document. (NOTE: It can also be Use Cases or a Design Document. It varies from company to company: some use Use Cases, some use Requirement Documents, and some use Design Documents. In practice, however, most companies have at least a requirement document.)

5. Did you have a situation where you did not have any documents (no requirement document, no Use Cases, or no Design Document) and you had to write the Test Cases? How did you write the Test Cases in this situation?

Answer: Yes. I have been in that kind of scenario several times. There were companies that had no documents at all. In those cases, I had to discuss the application scenarios and functionalities with the Business Analysts or developers. On the basis of those discussions, I prepared a document in consultation with the Business Analysts and developers, and then started writing Test Plans and Test Cases.

6. Have you worked with Use Cases before?

Answer: Yes. I have written Test Cases using Use Cases.



Can you tell me what a Use Case is?

Answer: A use case is a document that describes the user action and system response for a particular functionality.

7. What is SDLC (Software Development Life Cycle)?

Answer: SDLC (Software Development Life Cycle) is the process of developing software through business needs, analysis, design, implementation, and maintenance. Software has to go through the following phases before it is released:

(i)Generating a Concept – A concept comes from the users of the software. For example, a Pizza Hut may need software to sell pizza. An Indian store may need software to sell its newly arrived movies or grocery. The owner of the company feels that he needs software that would help him in tracking his expenses and income as well as enhance the selling process. This is how the concept is generated. The owner will specifically tell the software company what kind of software he would need. In other words, he will specify his requirements.
(ii) Requirements analysis – After the owner (user) knows his requirements, they are given to a software team (company), which analyzes the requirements and prepares a requirement document that explains every functionality needed by the owner. The requirement document is the main document for developers, testers, and database administrators; in other words, it is the main document referred to by everyone. After the requirement document, other detailed documents may be needed. For example, the architectural design, which is a blueprint for the design with the necessary specifications for the hardware, software, people, and data resources.
(iii) Development: After the detailed requirement documents (some companies have design documents instead of requirement documents), the developers start writing their code (program) for their modules. On the other hand, the testers in the QA (Quality Assurance) Department start writing Test Plans (one module=1 test plan), test cases and get ready for testing.
(iv) Testing: Once the code (programs) is ready, it is compiled together to make a build. This build is then tested by the software testers (QA testers).
(v) Production: After testing, the application (software) goes into production (meaning, it will be handed over to the owner).
(vi) End: And one day, the owner will have to say goodbye to the software, either because the business grows and the software no longer meets the demand, or because for some reason he no longer needs it. That’s the end of it.

8. What is Business Requirement Document (BRD)?

Answer: It is a document that describes the details of the application functionalities that are required by the user. This document is written by the Business Analysts.

9. What is Business Design Document?

Answer: It is the document that describes the application functionalities required by the user in detail. It provides further detail beyond the Business Requirement Document. This is a very crucial step in the Software Development Life Cycle (SDLC). Sometimes the Business Requirement Document and the Business Design Document are lumped together to make only one Business Requirement Document.

10. What is a Module?

Answer: A ‘Module’ is a software component that has a specific task. It can be, for example, a link that leads into its own detailed component. (This is NOT a very common interview question. It is included just for your knowledge, in case you don’t know what a module is.)

http://www.portnov.com/data/Prakash_Nepal_interview_questions.html

Tuesday, February 22, 2011

Web Service Interview Questions and Answers

What is a Web service?
Many people and companies have debated the exact definition of Web services. At a minimum, however, a Web service is any piece of software that makes itself available over the Internet and uses a standardized XML messaging system.
XML is used to encode all communications to a Web service. For example, a client invokes a Web service by sending an XML message, then waits for a corresponding XML response. Because all communication is in XML, Web services are not tied to any one operating system or programming language--Java can talk with Perl; Windows applications can talk with Unix applications.
Beyond this basic definition, a Web service may also have two additional (and desirable) properties:
First, a Web service can have a public interface, defined in a common XML grammar. The interface describes all the methods available to clients and specifies the signature for each method. Currently, interface definition is accomplished via the Web Service Description Language (WSDL). (See FAQ number 7.)
Second, if you create a Web service, there should be some relatively simple mechanism for you to publish this fact. Likewise, there should be some simple mechanism for interested parties to locate the service and locate its public interface. The most prominent directory of Web services is currently available via UDDI, or Universal Description, Discovery, and Integration. (See FAQ number 8.)
Web services currently run a wide gamut from news syndication and stock-market data to weather reports and package-tracking systems. For a quick look at the range of Web services currently available, check out the XMethods directory of Web services.

What is new about Web services?
People have been using Remote Procedure Calls (RPC) for some time now, and they long ago discovered how to send such calls over HTTP.
So, what is really new about Web services? The answer is XML.
XML lies at the core of Web services, and provides a common language for describing Remote Procedure Calls, Web services, and Web service directories.
Prior to XML, one could share data among different applications, but XML makes this so much easier to do. In the same vein, one can share services and code without Web services, but XML makes it easier to do these as well.
By standardizing on XML, different applications can more easily talk to one another, and this makes software a whole lot more interesting.

I keep reading about Web services, but I have never actually seen one. Can you show me a real Web service in action?
If you want a more intuitive feel for Web services, try out the IBM Web Services Browser, available on the IBM Alphaworks site. The browser provides a series of Web services demonstrations. Behind the scenes, it ties together SOAP, WSDL, and UDDI to provide a simple plug-and-play interface for finding and invoking Web services. For example, you can find a stock-quote service, a traffic-report service, and a weather service. Each service is independent, and you can stack services like building blocks. You can, therefore, create a single page that displays multiple services--where the end result looks like a stripped-down version of my.yahoo or my.excite.

What is the Web service protocol stack?

The Web service protocol stack is an evolving set of protocols used to define, discover, and implement Web services. The core protocol stack consists of four layers:
Service Transport: This layer is responsible for transporting messages between applications. Currently, this includes HTTP, SMTP, FTP, and newer protocols, such as Blocks Extensible Exchange Protocol (BEEP).
XML Messaging: This layer is responsible for encoding messages in a common XML format so that messages can be understood at either end. Currently, this includes XML-RPC and SOAP.
Service Description: This layer is responsible for describing the public interface to a specific Web service. Currently, service description is handled via the WSDL.
Service Discovery: This layer is responsible for centralizing services into a common registry, and providing easy publish/find functionality. Currently, service discovery is handled via the UDDI.
Beyond the essentials of XML-RPC, SOAP, WSDL, and UDDI, the Web service protocol stack includes a whole zoo of newer, evolving protocols. These include WSFL (Web Services Flow Language), SOAP-DSIG (SOAP Security Extensions: Digital Signature), and USML (UDDI Search Markup Language). For an overview of these protocols, check out Pavel Kulchenko's article, Web Services Acronyms, Demystified, on XML.com.
Fortunately, you do not need to understand the full protocol stack to get started with Web services. Assuming you already know the basics of HTTP, it is best to start at the XML Messaging layer and work your way up.

What is XML-RPC?
XML-RPC is a protocol that uses XML messages to perform Remote Procedure Calls. Requests are encoded in XML and sent via HTTP POST; XML responses are embedded in the body of the HTTP response.
More succinctly, XML-RPC = HTTP + XML + Remote Procedure Calls.
Because XML-RPC is platform independent, diverse applications can communicate with one another. For example, a Java client can speak XML-RPC to a Perl server.
To get a quick sense of XML-RPC, here is a sample XML-RPC request to a weather service (with the HTTP Headers omitted):

   <?xml version="1.0" encoding="ISO-8859-1"?>
   <methodCall>
      <methodName>weather.getWeather</methodName>
      <params>
         <param>
            <value><string>10016</string></value>
         </param>
      </params>
   </methodCall>

The request consists of a simple methodCall element, which specifies the method name (getWeather) and any method parameters (zip code).

Here is a sample XML-RPC response from the weather service:

   <?xml version="1.0" encoding="ISO-8859-1"?>
   <methodResponse>
      <params>
         <param>
            <value><int>65</int></value>
         </param>
      </params>
   </methodResponse>

The response consists of a single methodResponse element, which specifies the return value (the current temperature). In this case, the return value is specified as an integer.
In many ways, XML-RPC is much simpler than SOAP, and therefore represents the easiest way to get started with Web services.
The official XML-RPC specification is available at XML-RPC.com. Dozens of XML-RPC implementations are available in Perl, Python, Java, and Ruby. See the XML-RPC home page for a complete list of implementations.
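For a feel of how this looks in code, Python's standard xmlrpc.client library can build and parse exactly this kind of message. The endpoint URL and the weather.getWeather method below are placeholders matching the example in the text; no real server is contacted:

```python
import xmlrpc.client

# A proxy object for a hypothetical XML-RPC endpoint (no connection is
# made until a method is actually invoked).
proxy = xmlrpc.client.ServerProxy("http://example.com/rpc")
# temperature = proxy.weather.getWeather("10016")  # would send the request shown above

# The library serializes calls into the same <methodCall> XML shown above:
request_xml = xmlrpc.client.dumps(("10016",), methodname="weather.getWeather")
print(request_xml)

# And it can parse a <methodResponse> back into Python values:
response_xml = xmlrpc.client.dumps((65,), methodresponse=True)
(value,), _ = xmlrpc.client.loads(response_xml)
print(value)
```

The serialization round-trip shows why XML-RPC is platform independent: any language with an XML library can produce and consume these same documents.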

What is SOAP?
SOAP is an XML-based protocol for exchanging information between computers. Although SOAP can be used in a variety of messaging systems and can be delivered via a variety of transport protocols, the main focus of SOAP is Remote Procedure Calls (RPC) transported via HTTP. Like XML-RPC, SOAP is platform independent, and therefore enables diverse applications to communicate with one another.

To get a quick sense of SOAP, here is a sample SOAP request to a weather service (with the HTTP Headers omitted):

   <?xml version='1.0' encoding='UTF-8'?>
   <SOAP-ENV:Envelope
      xmlns:SOAP-ENV="http://www.w3.org/2001/09/soap-envelope"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <SOAP-ENV:Body>
         <ns1:getWeather
            xmlns:ns1="urn:examples:weatherservice"
            SOAP-ENV:encodingStyle="http://www.w3.org/2001/09/soap-encoding">
            <zipcode xsi:type="xsd:string">10016</zipcode>
         </ns1:getWeather>
      </SOAP-ENV:Body>
   </SOAP-ENV:Envelope>

As you can see, the request is slightly more complicated than XML-RPC and makes use of both XML namespaces and XML Schemas. Much like XML-RPC, however, the body of the request specifies both a method name (getWeather) and a list of parameters (zipcode).

Here is a sample SOAP response from the weather service:

   <?xml version='1.0' encoding='UTF-8'?>
   <SOAP-ENV:Envelope
      xmlns:SOAP-ENV="http://www.w3.org/2001/09/soap-envelope"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <SOAP-ENV:Body>
         <ns1:getWeatherResponse
            xmlns:ns1="urn:examples:weatherservice"
            SOAP-ENV:encodingStyle="http://www.w3.org/2001/09/soap-encoding">
            <return xsi:type="xsd:int">65</return>
         </ns1:getWeatherResponse>
      </SOAP-ENV:Body>
   </SOAP-ENV:Envelope>

The response indicates a single integer return value (the current temperature).
The World Wide Web Consortium (W3C) is in the process of creating a SOAP standard. The latest working draft is designated as SOAP 1.2, and the specification is now broken into two parts. Part 1 describes the SOAP messaging framework and envelope specification. Part 2 describes the SOAP encoding rules, the SOAP-RPC convention, and HTTP binding details.
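As a sketch of how such an envelope might be generated programmatically, here is the request above built with Python's standard xml.etree.ElementTree; the namespaces and values follow the example in the text, and a real client would additionally POST this document over HTTP:

```python
import xml.etree.ElementTree as ET

# Namespace URIs from the SOAP request example above.
ENV = "http://www.w3.org/2001/09/soap-envelope"
XSI = "http://www.w3.org/2001/XMLSchema-instance"
NS1 = "urn:examples:weatherservice"

# Build Envelope -> Body -> getWeather -> zipcode.
envelope = ET.Element(f"{{{ENV}}}Envelope")
body = ET.SubElement(envelope, f"{{{ENV}}}Body")
call = ET.SubElement(
    body,
    f"{{{NS1}}}getWeather",
    {f"{{{ENV}}}encodingStyle": "http://www.w3.org/2001/09/soap-encoding"},
)
zipcode = ET.SubElement(call, "zipcode", {f"{{{XSI}}}type": "xsd:string"})
zipcode.text = "10016"

xml_text = ET.tostring(envelope, encoding="unicode")
print(xml_text)
```

ElementTree chooses its own namespace prefixes (ns0, ns1, ...) rather than SOAP-ENV, but the document is structurally equivalent; in practice most developers use a SOAP toolkit rather than building envelopes by hand.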

What is WSDL?

The Web Services Description Language (WSDL) currently represents the service description layer within the Web service protocol stack.
In a nutshell, WSDL is an XML grammar for specifying a public interface for a Web service. This public interface can include the following:

Information on all publicly available functions.
Data type information for all XML messages.
Binding information about the specific transport protocol to be used.
Address information for locating the specified service.

WSDL is not necessarily tied to a specific XML messaging system, but it does include built-in extensions for describing SOAP services.

Below is a sample WSDL file. This file describes the public interface for the weather service used in the SOAP example above. Obviously, there are many details to understanding the example. For now, just consider two points.
First, the message elements specify the individual XML messages that are transferred between computers. In this case, we have a getWeatherRequest and a getWeatherResponse. Second, the service element specifies that the service is available via SOAP and is available at a specific URL.

   <?xml version="1.0" encoding="UTF-8"?>
   <definitions name="WeatherService"
      targetNamespace="http://www.ecerami.com/wsdl/WeatherService.wsdl"
      xmlns="http://schemas.xmlsoap.org/wsdl/"
      xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
      xmlns:tns="http://www.ecerami.com/wsdl/WeatherService.wsdl"
      xmlns:xsd="http://www.w3.org/2001/XMLSchema">

      <message name="getWeatherRequest">
         <part name="zipcode" type="xsd:string"/>
      </message>
      <message name="getWeatherResponse">
         <part name="temperature" type="xsd:int"/>
      </message>

      <portType name="Weather_PortType">
         <operation name="getWeather">
            <input message="tns:getWeatherRequest"/>
            <output message="tns:getWeatherResponse"/>
         </operation>
      </portType>

      <binding name="Weather_Binding" type="tns:Weather_PortType">
         <soap:binding style="rpc"
            transport="http://schemas.xmlsoap.org/soap/http"/>
         <operation name="getWeather">
            <soap:operation soapAction=""/>
            <input>
               <soap:body
                  encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
                  namespace="urn:examples:weatherservice"
                  use="encoded"/>
            </input>
            <output>
               <soap:body
                  encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
                  namespace="urn:examples:weatherservice"
                  use="encoded"/>
            </output>
         </operation>
      </binding>

      <service name="Weather_Service">
         <documentation>WSDL File for Weather Service</documentation>
         <port binding="tns:Weather_Binding" name="Weather_Port">
            <soap:address
               location="http://localhost:8080/soap/servlet/rpcrouter"/>
         </port>
      </service>
   </definitions>

Using WSDL, a client can locate a Web service, and invoke any of the publicly available functions. With WSDL-aware tools, this process can be entirely automated, enabling applications to easily integrate new services with little or no manual code. For example, check out the GLUE platform from the Mind Electric.
WSDL has been submitted to the W3C, but it currently has no official status within the W3C. See this W3C page for the latest draft.

What is UDDI?
UDDI (Universal Description, Discovery, and Integration) currently represents the discovery layer within the Web services protocol stack.
UDDI was originally created by Microsoft, IBM, and Ariba, and represents a technical specification for publishing and finding businesses and Web services.
At its core, UDDI consists of two parts.
First, UDDI is a technical specification for building a distributed directory of businesses and Web services. Data is stored within a specific XML format, and the UDDI specification includes API details for searching existing data and publishing new data.
Second, the UDDI Business Registry is a fully operational implementation of the UDDI specification. Launched in May 2001 by Microsoft and IBM, the UDDI registry now enables anyone to search existing UDDI data. It also enables any company to register themselves and their services.
The data captured within UDDI is divided into three main categories:
White Pages: This includes general information about a specific company. For example, business name, business description, and address.
Yellow Pages: This includes general classification data for either the company or the service offered. For example, this data may include industry, product, or geographic codes based on standard taxonomies.
Green Pages: This includes technical information about a Web service. Generally, this includes a pointer to an external specification, and an address for invoking the Web service.
You can view the Microsoft UDDI site, or the IBM UDDI site. The complete UDDI specification is available at uddi.org.
Beta versions of UDDI Version 2 are available at:
Hewlett Packard
IBM
Microsoft
SAP

How do I get started with Web Services?
The easiest way to get started with Web services is to learn XML-RPC. Check out the XML-RPC specification or read my book, Web Services Essentials. O'Reilly has also recently released a book on Programming Web Services with XML-RPC by Simon St.Laurent, Joe Johnston, and Edd Dumbill.
Once you have learned the basics of XML-RPC, move onto SOAP, WSDL, and UDDI. These topics are also covered in Web Services Essentials. For a comprehensive treatment of SOAP, check out O'Reilly's Programming Web Services with SOAP, by Doug Tidwell, James Snell, and Pavel Kulchenko.

Does the W3C support any Web service standards?
The World Wide Web Consortium (W3C) is actively pursuing standardization of Web service protocols. In September 2000, the W3C established an XML Protocol Activity. The goal of the group is to establish a formal standard for SOAP. A draft version of SOAP 1.2 is currently under review, and progressing through the official W3C recommendation process.
On January 25, 2002, the W3C also announced the formation of a Web Service Activity. This new activity will include the current SOAP work as well as two new groups. The first new group is the Web Services Description Working Group, which will take up work on WSDL. The second new group is the Web Services Architecture Working Group, which will attempt to create a cohesive framework for Web service protocols.


Performance/ LoadRunner Interview Questions

  1. What is load testing? - Load testing checks whether the application works correctly under the load that results from a large number of simultaneous users and transactions, and determines whether it can handle peak usage periods.
  2. What is Performance testing? - Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi-user environment to determine the effect of multiple transactions on the timing of a single transaction.
  3. Did you use LoadRunner? What version? - Yes. Version 7.2.
  4. Explain the Load testing process? -
    Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives.
    Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
    Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve, and LoadRunner automatically builds the scenario for us.
    Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the test, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
    Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
    Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.
  5. When do you do load and performance Testing? - We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.
  6. What are the components of LoadRunner? - The components of LoadRunner are the Virtual User Generator, the Controller and the Agent process, LoadRunner Analysis and Monitoring, and LoadRunner Books Online.
  7. What Component of LoadRunner would you use to record a Script? - The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.
  8. What Component of LoadRunner would you use to play Back the script in multi user mode? - The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.
  9. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
  10. What is a scenario? - A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
  11. Explain the recording mode for web Vuser script? - We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web-based applications, VuGen monitors the client end of the application and traces all the requests sent to, and received from, the server. We use VuGen to: monitor the communication between the application and the server; generate the required function calls; and insert the generated function calls into a Vuser script.
  12. Why do you create parameters? - Parameters are like script variables. They are used to vary the input to the server and to emulate real users: different sets of data are sent to the server each time the script is run. This better simulates the usage model for more accurate testing from the Controller, since one script can emulate many different users on the system.
  13. What is correlation? Explain the difference between automatic correlation and manual correlation? - Correlation is used to obtain data that are unique for each run of the script, such as values generated by the server. Correlating these values avoids errors arising out of duplicate or stale values and also optimizes the code (avoiding nested queries). Automatic correlation is where we set rules for correlation; it can be application-server specific, and values are replaced by data created according to these rules. In manual correlation, we scan the script for the value we want to correlate and use Create Correlation to correlate it.
  14. How do you find out where correlation is required? Give few examples from your projects? - Two ways: first, we can scan for correlations and see the list of values which can be correlated; from this we can pick a value to be correlated. Secondly, we can record two scripts and compare them, looking up the difference file for the values which need to be correlated. In my project, there was a unique id generated for each customer - the Insurance Number. It was generated automatically, sequential, and unique, and I had to correlate this value in order to avoid errors while running my script. I did this using scan for correlation.
  15. Where do you set automatic correlation options? - Automatic correlation for web is set in the Recording Options, Correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or perform offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done by using show output window, scanning for correlations, picking the Correlate Query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just do Create Correlation for the value and specify how the value is to be created.
  16. What is a function to capture dynamic values in the web Vuser script? - The web_reg_save_param function saves dynamic data information to a parameter.
  17. When do you disable log in Virtual User Generator? When do you choose standard and extended logs? - Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log option: selecting Standard log creates a standard log of the functions and messages sent during script execution, to use for debugging; disable this option for large load-testing scenarios. Extended Log option: selecting Extended log creates an extended log, including warnings and other messages; disable this option too for large load-testing scenarios. We can specify which additional information should be added to the extended log using the Extended Log options.
  18. How do you debug a LoadRunner script? - VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.
  19. How do you write user defined functions in LR? Give me few functions you wrote in your previous project? - Before we create a user-defined function, we need to create an external library (DLL) containing the function and add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). GetVersion, GetCurrentTime, and GetPlatform are some of the user-defined functions used in my earlier project.
  20. What are the changes you can make in run-time settings? - The run-time settings we can change include: a) Pacing - contains the iteration count. b) Log - here we have Disable Logging, Standard Log, and Extended Log. c) Think Time - options such as Ignore think time and Replay think time. d) General - under the General tab we can set the Vusers to run as a process or as multithreading, and whether to define each step as a transaction.
  21. Where do you set Iteration for Vuser testing? - We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run time settings, Pacing tab, set number of iterations.
  22. How do you perform functional testing under load? - Functionality under load can be tested by running several Vusers concurrently. By increasing the amount of Vusers, we can determine how much load the server can sustain.
  23. What is Ramp up? How do you set this? - This option is used to gradually increase the amount of Vusers/load on the server. An initial value is set, and a value to wait between intervals can be specified. To set Ramp Up, go to ‘Scenario Scheduling Options’.
  24. What is the advantage of running the Vuser as thread? - VuGen provides the facility to use multithreading, which enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, taking up a large amount of memory and limiting the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100); each thread shares the memory of the parent driver program, enabling more Vusers to be run per generator.
  25. If you want to stop the execution of your script on error, how do you do that? - The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section, and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the "Continue on error" option in Run-Time Settings.
  26. What is the relation between Response Time and Throughput? - The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.
  27. Explain the Configuration of your systems? - The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.
  28. How do you identify the performance bottlenecks? - Performance Bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.
  29. If web server, database and Network are all fine where could be the problem? - The problem could be in the system itself or in the application server or in the code written for the application.
  30. How did you find web server related issues? - Using Web resource monitors we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.
  31. How did you find database related issues? - By running the "Database" monitor, with the help of the "Data Resource Graph", we can find database-related issues. E.g. you can specify the resource you want to measure before running the Controller, and then you can see database-related issues.
  32. Explain all the web recording options?
  33. What is the difference between Overlay graph and Correlate graph? - Overlay Graph: overlays the content of two graphs that share a common x-axis. The left y-axis on the merged graph shows the current graph's values, and the right y-axis shows the values of the graph that was merged in. Correlate Graph: plots the y-axes of two graphs against each other. The active graph's y-axis becomes the x-axis of the merged graph, and the y-axis of the graph that was merged in becomes the merged graph's y-axis.
  34. How did you plan the Load? What are the Criteria? - Load test is planned to decide the number of users, what kind of machines we are going to use and from where they are run. It is based on 2 important documents, Task Distribution Diagram and Transaction profile. Task Distribution Diagram gives us the information on number of users for a particular transaction and the time of the load. The peak usage and off-usage are decided from this Diagram. Transaction profile gives us the information about the transactions name and their priority levels with regard to the scenario we are deciding.
  35. What does vuser_init action contain? - The vuser_init action contains procedures to log in to a server.
  36. What does vuser_end action contain? - Vuser_end section contains log off procedures.
  37. What is think time? How do you change the threshold? - Think time is the time that a real user waits between actions. Example: when a user receives data from a server, the user may wait several seconds to review the data before responding; this delay is known as the think time. Changing the threshold: the threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think-time threshold in the Recording Options of VuGen.
  38. What is the difference between standard log and extended log? - The standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. It is mainly used during debugging, when we want information about: parameter substitution, data returned by the server, and advanced trace.
  39. Explain the following functions: - lr_debug_message - The lr_debug_message function sends a debug message to the output log when the specified message class is set. lr_output_message - The lr_output_message function sends notifications to the Controller Output window and the Vuser log file. lr_error_message - The lr_error_message function sends an error message to the LoadRunner Output window. lrd_stmt - The lrd_stmt function associates a character string (usually a SQL statement) with a cursor. This function sets a SQL statement to be processed. lrd_fetch - The lrd_fetch function fetches the next row from the result set.
  40. Throughput - If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.
  41. Types of Goals in Goal-Oriented Scenario - LoadRunner provides you with five different types of goals in a goal-oriented scenario:
    • The number of concurrent Vusers
    • The number of hits per second
    • The number of transactions per second
    • The number of pages per minute
    • The transaction response time that you want your scenario to achieve
  42. Analysis Scenario (Bottlenecks): In the Running Vusers graph correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the check-itinerary transaction gradually increases; in other words, the average response time steadily increases as the load increases. At 56 Vusers there is a sudden, sharp increase in the average response time. We say that the test broke the server; that point gives the mean time between failures (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.
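The correlation questions above (Q13, Q15, Q16) all revolve around one mechanism: capture a server-generated value between known boundaries and feed it back into later requests, which is the job web_reg_save_param does in a LoadRunner script. Here is a minimal language-agnostic sketch of that idea in Python; the response body, boundaries, and URL are invented for illustration.

```python
# A sketch of the correlation idea behind web_reg_save_param: capture a
# dynamic value between known left and right boundaries in a server
# response, then reuse it in the next request. The response body and
# boundary strings here are hypothetical.
import re

response = '<input type="hidden" name="session_id" value="A1B2C3D4">'

def save_param(body, left, right):
    # Return the text between the first occurrence of left and right.
    match = re.search(re.escape(left) + "(.*?)" + re.escape(right), body)
    return match.group(1) if match else None

session_id = save_param(response, 'value="', '">')
next_request = f"/deposit?session_id={session_id}"
print(next_request)  # /deposit?session_id=A1B2C3D4
```

In a real script, the captured parameter replaces the hard-coded value that was recorded, so each Vuser sends the value its own session actually received.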

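The rendezvous point described in Q9 can also be sketched outside LoadRunner. In the sketch below, Python threads stand in for Vusers and threading.Barrier plays the role of the rendezvous: no thread proceeds until all have arrived, so the "deposit" task happens simultaneously. All names and counts are illustrative.

```python
# A sketch of the rendezvous-point idea: Vusers wait until the required
# number have arrived, then perform a task at the same moment.
import threading

NUM_VUSERS = 5
rendezvous = threading.Barrier(NUM_VUSERS)  # releases when all 5 arrive
results = []
lock = threading.Lock()

def vuser(i):
    # ... per-user setup (login, navigation) would happen here ...
    rendezvous.wait()        # the rendezvous point
    with lock:
        results.append(i)    # all Vusers "deposit cash" together

threads = [threading.Thread(target=vuser, args=(i,)) for i in range(NUM_VUSERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2, 3, 4]
```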
Monday, February 21, 2011

What is User Acceptance Testing? What is UAT Testing?

What is User Acceptance Testing?

User Acceptance Testing is often the final step before rolling out the application.


Usually the end users who will be using the applications test the application before ‘accepting’ the application.


This type of testing gives the end users the confidence that the application being delivered to them meets their requirements.


This testing also helps nail bugs related to usability of the application.


User Acceptance Testing – Prerequisites:

Before User Acceptance Testing can be done, the application must be fully developed.
Various levels of testing (Unit, Integration and System) are already completed before User Acceptance Testing is done. As various levels of testing have been completed most of the technical bugs have already been fixed before UAT.


User Acceptance Testing – What to Test?

To ensure effective User Acceptance Testing, Test cases are created.
These Test cases can be created using various use cases identified during the Requirements definition stage.
The Test cases ensure proper coverage of all the scenarios during testing.


During this type of testing the specific focus is the exact real world usage of the application. The Testing is done in an environment that simulates the production environment.
The Test cases are written using real world scenarios for the application.


User Acceptance Testing – How to Test?

The user acceptance testing is usually a black box type of testing. In other words, the focus is on the functionality and the usability of the application rather than the technical aspects. It is generally assumed that the application would have already undergone Unit, Integration and System Level Testing.


However, it is useful if the User acceptance Testing is carried out in an environment that closely resembles the real world or production environment.


The steps taken for User Acceptance Testing typically involve one or more of the following:
    1) User Acceptance Test (UAT) Planning
    2) Designing UA Test Cases
    3) Selecting a Team that would execute the (UAT) Test Cases
    4) Executing Test Cases
    5) Documenting the Defects found during UAT
    6) Resolving the issues/Bug Fixing
    7) Sign Off


User Acceptance Test (UAT) Planning:
As always the Planning Process is the most important of all the steps. This affects the effectiveness of the Testing Process. The Planning process outlines the User Acceptance Testing Strategy. It also describes the key focus areas, entry and exit criteria.


Designing UA Test Cases:
The User Acceptance Test Cases help the Test Execution Team to test the application thoroughly. This also helps ensure that the UA Testing provides sufficient coverage of all the scenarios.
The Use Cases created during the Requirements definition phase may be used as inputs for creating Test Cases. The inputs from Business Analysts and Subject Matter Experts are also used for creating the Test Cases.


Each User Acceptance Test Case describes in a simple language the precise steps to be taken to test something.


The Business Analysts and the Project Team review the User Acceptance Test Cases.


Selecting a Team that would execute the (UAT) Test Cases:
Selecting a Team that would execute the UAT Test Cases is an important step.
The UAT Team is generally a good representation of the real world end users.
The Team thus comprises the actual end users who will be using the application.


Executing Test Cases:
The Testing Team executes the Test Cases and may additionally perform random tests relevant to them.


Documenting the Defects found during UAT:
The Team logs their comments and any defects or issues found during testing.


Resolving the issues/Bug Fixing:
The issues/defects found during Testing are discussed with the Project Team, Subject Matter Experts and Business Analysts. The issues are resolved as per the mutual consensus and to the satisfaction of the end users.


Sign Off:
Upon successful completion of the User Acceptance Testing and resolution of the issues, the team generally indicates acceptance of the application. This step is important in commercial software sales: once the users “accept” the software delivered, they indicate that the software meets their requirements.


The users are now confident of the software solution delivered, and the vendor can be paid for the same.

Friday, March 26, 2010

Quality Assurance - QA Testing Interview Questions

What is testing?

Testing involves operation of a system or application under controlled conditions and evaluating the results - making sure the product is developed as per the requirements.
To make sure the product meets the requirements, a test engineer must test each component with both positive (+ve) and negative (-ve) tests,
covering normal and abnormal conditions.
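As a sketch of what +ve and -ve testing of a single component means in practice, consider a hypothetical age-validation function (the function and its limits are invented for illustration):

```python
# A sketch of positive (+ve) and negative (-ve) testing of one component:
# a hypothetical age-validation function checked under normal and
# abnormal input conditions.
def validate_age(value):
    """Accept integer ages from 0 to 130; reject everything else."""
    return isinstance(value, int) and 0 <= value <= 130

# Positive (+ve) tests: normal conditions that should pass.
assert validate_age(0)
assert validate_age(45)

# Negative (-ve) tests: abnormal conditions that should be rejected.
assert not validate_age(-1)       # out of range (below)
assert not validate_age(1000)     # out of range (above)
assert not validate_age("45")     # wrong type
```

The point is that a component is not tested until it has been exercised with input it should reject as well as input it should accept.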
Quality Assurance (QA)

Customer Satisfaction was the buzzword of the 80's. Customer Delight (Something that gives great pleasure or enjoyment) is today's buzzword and Customer Ecstasy is the buzzword of the new millennium. Products that are not customer (user) friendly have no place in the market although they are engineered using the best technology. The interface of the product is as crucial as the internal technology of the product.
We test the product to make sure it meets the end-user requirements, improving the performance and ensuring that problems are found and dealt with.
Software Development Life Cycle -- SDLC

Software life cycle begins when a software product is first conceived, and ends when it is no longer in use. It includes phases like: Initial concept, Requirements analysis, Functional design, Technical design, Coding, Unit test, Assembly Test, System test, Performance Test, Production Staging, Production Implementation, and Maintenance.

Initiation
Business need comes into the picture.
The concept document will be approved by the management.
System Concept Development Stage
Find resources (developers, testers, leads, managers, etc.)
Budget: what are the benefits and outcomes? What if it fails?
Planning
What are the technologies? Data collection.
Requirement Analysis Phase
Business Analysts (BA) collect requirement from the client (or end user)

Write BRD (Business Requirement Document)

Write the SRS – Software Requirement Specification

OR FRS (Functional Requirement Specification)

OR CRS (Component Requirement Specification)

SRS will be given to testers to get familiar with the product.

Design Phase
Data Modeling by Database architects (using Erwin tools, UML diagrams)
Functional Design, Technical Design
Walk through meeting.

Development Stage
Coding

Unit Testing

Assembly Testing

System Testing
Integration Test – Testing of the flow of the application

Functionality Test

Black Box Testing: Only concerned with the input and output, e.g. testing the functionality of a text box, button, etc.

White Box Testing: Done by developers; tests the internal logic.

Regression Testing: Test -> if a bug (defect) is found -> create the defect in a defect-tracking tool (Test Director) -> defect is assigned to a developer -> developer fixes -> test again -> if the defect is fixed, close it; if it is fixed but the fix introduced another bug, reassign to the developer.

Gress = Step --------- Regress = Restep
In a regression test, test cases from Phase 1 will be re-tested in the Phase 2 enhancement. Re-testing of the application after bug fixing or enhancements ensures that the changes have not introduced unintended side effects or additional errors.
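The defect flow described above can be viewed as a small state machine. The sketch below models a simplified lifecycle; the states and transitions are illustrative, not any particular tool's actual workflow.

```python
# A sketch of a simplified defect lifecycle as a tiny state machine.
# State names and transitions are invented for illustration.
ALLOWED = {
    "New":      {"Assigned"},
    "Assigned": {"Fixed"},
    "Fixed":    {"Closed", "Reopened"},   # retest passes -> Closed
    "Reopened": {"Assigned"},             # fix broke something -> back to dev
    "Closed":   set(),
}

def advance(state, nxt):
    # Reject transitions the workflow does not allow.
    if nxt not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {nxt}")
    return nxt

# Happy path: bug found, assigned, fixed, retested, closed.
s = "New"
for nxt in ("Assigned", "Fixed", "Closed"):
    s = advance(s, nxt)
print(s)  # Closed
```

The "Reopened" branch captures the case in the text where a fix introduces another bug and the defect is reassigned to the developer.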

Performance Test

Load Test – Load Runner – Response time

User Acceptance Test: Once the product is ready, it is installed at the client location. The actual end users test the product to make sure that it meets their requirements.

Maintenance

If the end user finds bugs again, they go to the test engineer, who checks each bug, assigns it to a developer to fix, and tests the fix; the fix is then applied to the end user's product. This is called OGS – On-Going Support.
This cycle goes on till the product is no longer in use.
Software Process Models

Linear Sequential Model/Waterfall Model
Prototyping Model
RAD Model(Rapid Application Development)
Evolutionary Software Process Model
Incremental Model
Spiral Model
Win Win Spiral Model
Concurrent Development Model
Component-Based Development Model
V-Model
Software Development Life Cycle

In our company, we are following the “Linear Sequential Model” or “Waterfall Model”. This model suggests a systematic approach to software development that begins with:

Analysis (SRS)
Design
Coding
Testing
Support

Analysis:

The requirement-gathering process is focused specifically on software. To understand the nature of the programs to be built, the software engineer must understand the information domain for the software, as well as the required function, interface representation, behavior, and performance. These requirements are documented and reviewed with the customer.

Design:

This design translates the requirements into a representation of the software that can be assessed for quality before coding begins. Design is actually a multistep process that focuses on four distinct attributes of a program: data structure, software architecture, interface representation, and procedural (algorithmic) details.

Coding:

The design must be translated into a machine-readable form; this transformation is called coding. If the design is performed in a detailed manner, coding can be done very easily.

Testing:

Once code has been generated, program testing begins. This testing process focuses on the logical internals of the software, ensuring that all statements have been tested to uncover errors, and that defined input will produce actual results that agree with the required results as stated in the SRS.

Support:

Software will undergo changes after it is delivered to the customer. Change will occur because errors have been found, or because the software must be adapted to accommodate changes in its external environment. Software support/maintenance reapplies each of the preceding phases to an existing program rather than a new one.

Software Process

SEI: Software Engineering Institute at Carnegie-Mellon University, initiated by the U.S. Defense Department to help improve software development process.

CMM: Capability Maturity Model, developed by the SEI. It’s a model of 5 levels of organizational maturity that determine the effectiveness in delivering quality software.

Level 1: Initial

The software process is characterized as ad-hoc and occasionally even chaotic. Success depends on individual effort.

Level 2: Repeatable

Basic Project Management Process is established to track cost, schedule and functionality. This process is to repeat earlier successes on projects with similar applications.

Key Process Areas (KPA):

Software Configuration Management

Software Quality Assurance

Software Subcontract Management

Software Project Tracking and Oversight

Software Project Planning

Requirements Management

Level 3: Defined

The software process for both management and engineering activities is documented, standardized, and integrated into an organization wide software process. All projects use a documented and approved version of the organization’s process for developing and supporting software. This level includes all characteristics defined for level 2.

Key Process Areas (KPA):

Peer Reviews

Intergroup Coordination

Software Product Engineering

Integrated Software Management

Training Program

Organization Process Definition

Organization Process Focus

Level 4: Managed

Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled using detailed measures. This level includes all characteristics defined for level 3.

Key Process Areas (KPA):

Software Quality Management

Quantitative Process Management

Level 5: Optimizing

Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies. This level includes all characteristics defined for level 4.

Key Process Areas (KPA):

Process Change Management

Technology Change Management

Defect Prevention

ISO: International Organization for Standardization.

IEEE: Institute of Electrical and Electronics Engineers.

ANSI: American National Standards Institute.

What is the importance of IEEE 829?

This standard is very useful for software testing. It describes the various test documents and gives formats for test plans.

What is testing?

Testing is the execution of a program under controlled conditions, with the intent of finding errors. These controlled conditions can be both normal and abnormal. The benefit of testing is that errors are found and corrected, so the program runs successfully and meets the functional and performance requirements stated in the SRS document.

Kinds of Testing

Static Testing

Testing the application without executing the program. This can be done through code inspections and walkthroughs.

Dynamic Testing

Generating test data and executing the program.

Black Box Testing

Tests are based on the functionality and requirements of the application. Only the program's functionality is considered: whether input is properly accepted and output is correctly produced. It does not concern how the program works internally to achieve that functionality.

(Or)

Test not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

White Box Testing

Tests are based on knowledge of internal logic of application code. Tests are based on coverage’s of code statements, branches, paths, and conditions.

Unit Testing

This is a white-box testing methodology that concentrates on the internal logic of the program. It is done by developers to ensure that the internal logic of the program works as per the requirements.

Integration Testing

Testing of combined 'parts' of an application to determine whether they function together correctly. The 'parts' can be code modules or individual applications. The goal is to ensure that the data interfaces between the modules/components are error free.

In integration testing, some modules may depend on other modules that are not available. In such cases, you may need to develop test drivers and test stubs.

Driver: A test driver simulates the part of the system that calls the component under test.

Stub: A test stub is a dummy component which simulates the behavior of real component.
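As a sketch of how a driver and a stub fit together (all names here are hypothetical, invented for illustration), a Python example might look like this:

```python
# Hypothetical illustration of a test driver and a test stub.
# "order_total" is the component under test; the real PricingService
# it depends on is assumed to be unavailable, so a stub stands in.

class PricingServiceStub:
    """Stub: a dummy component returning canned prices."""
    def get_price(self, item_id):
        return {"A1": 10.0, "B2": 25.5}[item_id]

def order_total(items, pricing_service):
    """Component under test: sums the prices of the given item ids."""
    return sum(pricing_service.get_price(i) for i in items)

def driver():
    """Driver: simulates the caller, feeding inputs and checking outputs."""
    stub = PricingServiceStub()
    assert order_total(["A1", "B2"], stub) == 35.5
    assert order_total([], stub) == 0

driver()
```

The driver plays the role of the missing caller, and the stub plays the role of the missing callee, so the module in the middle can be integration-tested on its own.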

System Testing

Testing the entire system (all modules are completed and integrated) for functionality and requirements of the client as stated in SRS.

Load Testing

Load testing evaluates system performance with pre-defined load level. Load testing measures how long it takes a system to perform various program tasks and functions under normal or pre-defined conditions.

Performance Testing

Performance test is done to determine system performance at various load levels. Performance testing is designed to test the run-time performance of software within the context of an integrated system.

Performance testing involves the evaluation of three primary elements:

System environment and available resources.
Workload
System Response time.

Measurements of Performance

Performance is measured by the Response time and Throughput.

Response Time: The amount of time the user must wait for a web system to react to a request.

Throughput: The amount of data transmitted during client-server interactions. This is measured in kilobytes per second.
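These two measurements can be sketched in Python (the function names are illustrative, not a standard API):

```python
import time

def average_response_time(operation, repetitions=5):
    """Average wall-clock seconds the user waits for one operation."""
    start = time.perf_counter()
    for _ in range(repetitions):
        operation()
    return (time.perf_counter() - start) / repetitions

def throughput_kb_per_s(bytes_transferred, elapsed_seconds):
    """Kilobytes transmitted per second during a client-server interaction."""
    return (bytes_transferred / 1024) / elapsed_seconds
```

For example, transferring 102,400 bytes in 10 seconds gives a throughput of 10 KB/s.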

Stress Testing

Stress testing checks the behavior of a system when it is pushed beyond its specified operational limits: heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Recovery Testing

Testing how well the system recovers from crashes, hardware failures, network errors and other unexpected software failures.

Security Testing

Testing how well the system protects against unauthorized internal or external access and willful damage.

Functionality Testing

This is Black box type testing geared to functional requirements of an application. This is done by testers.

(Or)

Concentrating specifically on functionality of the application. It is part of System testing.

Regression Testing

Re-testing of the application after bug fixing or enhancements, to ensure that changes have not propagated unintended side effects and additional errors.

Sanity Testing/Smoke Testing

This is the initial testing performed when a new build is received, to determine whether the build is stable enough for further, major testing.

User Interface Testing

This testing is done to check the user interface for cosmetic errors. It verifies the look and feel of the application as specified in the UI design document.

User Acceptance Testing

This is a system level testing generally done at the client’s environment. This is done with the realistic data of the client to find out errors. A series of acceptance tests are conducted to enable the customer to validate all requirements. Acceptance test can be done over a period of weeks or months.

(Or)

Final testing based on specifications of the end-user or customer, or based on use by end-user/customers over some limited period of time.

Alpha Test

Alpha test is conducted at the developer’s site by a customer to check the application meets the customer requirements.

(Or)

Alpha test is conducted at the developer’s site by a customer to validate all the requirements of the client.

Beta Test

Beta test is conducted at one or more customer sites by the end-user of the software to validate all the requirements of the client.

Ad-hoc Testing

This test is done randomly, without executing any predefined test cases, with the intent of finding errors.

Compatibility Testing

Testing how well the software performs in a particular hardware/software/network/operating system etc., environment.

(Or)

Compatibility testing determines if an application, under supported configurations, performs as expected with various combinations of hardware and software flavors and releases.

For example: compatibility testing would determine which browser manufacturers and server brands, under the same configuration, are compatible with the web system.

Configuration testing

Configuration testing is designed to uncover errors related to various software and hardware combinations.

Configuration testing of web system involves the testing of various supported server software and hardware setups, browser settings, network connections, TCP/IP stack setups and so on.

For example: Configuration testing might validate that a certain web system installed on a dual-processor computer operates properly.

Usability Testing

Testing the application for user-friendliness. The effort required for learning and operating the application. Programmers and testers are usually not appropriate as usability testers. This can be done by the targeted end-users.

Scalability Testing

“Scalability testing is the ability of a system to handle an increased load without severely degrading its performance or reliability”. Web site scalability is defined by the difference in performance between a site handling a small number of clients and a site handling a larger number of clients, as well as the ability to maintain the same level of performance by simply adding resources to the installation.

There are many factors that affect the scalability of a web application, including server hardware configuration, networking equipment and bandwidth, server operating system, volume of back-end data, and so on.

Software Quality Assurance

Software QA involves the entire software development process, monitoring and improving the process, making sure that any agreed upon standards and procedures are followed. It is oriented to prevention. QA consists of the auditing and reporting functions of management.

Activities of QA:

Assist in monitoring the implementation of metrics process of the company.
Assist in performing function point analysis and estimate cost, schedule, and effort accordingly.
Assist in conducting reviews, audits and base lining of artifacts.
Implementing Visual SourceSafe (VSS) for configuration management.
Assist in taking training and orientation sessions on CMM for colleagues/team members in the Business Management System of the Organization.

Quality Control

Quality Control involves the series of inspections, reviews and tests used throughout the software development process to ensure each work product meets the requirements placed upon it. It is oriented to detection.

QC Activities:

Develop, implement and execute test methodologies and plans to ensure software product quality.
Developing test strategies, test plans and test cases to ensure quality based on user requirements document, high level and detailed designs.
Responsible for integration, system, acceptance and regression testing.
Responsible for test analysis reporting and follow up of findings through closure.
Creating test plans, test reports, and test manuals.
Design, set up and maintain test execution environment.

Software Quality

Quality software is reasonably bug free, delivered on time and within the budget, meets the requirements and is maintainable.

Measuring Quality

Correctness: A program must operate correctly. Correctness is the degree to which the software performs its required function. It is measured in defects per KLOC (Kilo Lines of Code).

Maintainability: The effort required to locate and fix an error in a program, or to adapt the program if its environment changes.

Integrity: This measures a system's ability to withstand attacks on its security. Attacks can be made on programs, data and documents.

Usability: Usability is an attempt to quantify user-friendliness.

Software Configuration Management

Software configuration management (SCM) is an umbrella activity that is applied throughout the software process. Configuration management is the art of identifying, organizing and controlling modifications to the software being built by the programming team. Because change can occur at any time, SCM activities are developed to identify change, control change, ensure that change is properly implemented, and report changes to others who may have an interest. The goal is to maximize productivity by minimizing mistakes.

Good Code

Good code is code that works, is bug free, and is readable and maintainable.

Good Design

‘Design’ could refer to many things, but often refers to ‘functional design’ or ‘internal design’. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable. For programs that have a user interface, it is often a good idea to assume that the end-user will have little computer knowledge and may not read a user manual or online help. Some common rules-of-thumb:

The program should act in a way that least surprises the user

It should always be evident to the user what can be done next and how to exit.

The program shouldn’t let the users do something stupid without warning them.

Verification

It refers to a set of activities that ensure that the software correctly implements a specific function.

This can be done with checklists, walkthroughs and inspection meetings.

Verification involves reviews and meetings to evaluate documents, plans, code, requirements and specifications.

Walk-Through

A Walk-Through is an informal meeting for evaluation or informational purposes.

No preparation is required.

Inspection

An Inspection is a more formalized meeting. It is typically a document such as requirement specification or test plan, and the purpose is to find out problems and see what is missing but not to fix anything. The result of the inspection meeting should be a written report.

Validation

It refers to the set of activities that ensure that software that has been built is traceable to customer requirements. Validation involves actual testing and takes place after verifications are completed.

Why does software have Bugs?

Software has bugs, due to

Miscommunication

Software Complexity

Programming errors

Changing requirements

Time Pressures

Poorly documented code

What is Test Data?

Test data is the minimal data required to execute a program under testing.

What is Test Plan?

A Test Plan is a document that describes the objectives, scope, approach and focus of a software testing effort. Its contents are scope, software/hardware requirements, kinds of testing, effort, deliverables, automation, risk factors, exit criteria and assumptions.

(Or)

What is the test plan and explain the contents of that?

“Test plan serves as the basis for all testing activities throughout the testing life cycle. Being an umbrella activity this should reflect the customer’s needs in terms of milestones to be met, the test approach (test strategy), resources required etc., the plan and strategy should give a clear visibility of the testing process to the customer at any point of time.”

Functional and performance test plans if developed separately will give more lucidity for functional and performance testing. Performance test plan is optional if the application does not entail any performance requirements.

What is Test Case?

A Test Case is a document that describes an input, action or event and an expected response to determine if a feature of an application is working correctly.

A Test case should contain particulars such as Test Case ID, Description, Test Data and Expected Results.

Test Case Design Techniques

What is Equivalence Partition?

In this approach, classes of inputs are categorized for product or function validation. It usually does not cover combinations of inputs, but rather a single representative value per class.

A single value in an equivalence partition is assumed to be representative of all other values in the partition.

The aim of equivalence partition is to select values that have equivalent processing; one can assume that if a test passes with the representative value then it should pass with all other values in same partition.

Ex:
Numeric values with negative, positive and 0 values

Strings that are empty or non-empty

Lists that are empty or non-empty

Data files that exist or not, are readable/writable or not
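The idea can be sketched in Python (the function under test and the partitions are made-up examples, not from the text):

```python
def classify_number(n):
    """Toy function under test: labels the sign of a number."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# One representative value is tested per equivalence partition; if it
# passes, all other values in the partition are assumed to pass too.
representatives = {
    "negative": -7,   # stands in for all negative numbers
    "zero": 0,        # a single-value partition
    "positive": 42,   # stands in for all positive numbers
}

for expected, value in representatives.items():
    assert classify_number(value) == expected
```

Three test values cover the whole input space, on the assumption that every value in a partition is processed the same way.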

What are Boundary Value Analysis and Equivalence Partition?

Equivalence partitioning is based on the premise that the inputs and outputs of a component can be partitioned into classes which, according to the component's specification, will be treated similarly by the component. The assumption is that similar inputs will evoke similar responses. A single value in an equivalence partition is assumed to be representative of all other values in the partition. This reduces the problem that it is not possible to test every input value. The aim of equivalence testing is to select values that have equivalent processing: we assume that if a test passes with the representative value, it will pass with all other values in the same partition. Equivalence partitions may include combinations such as

Valid vs. Invalid input and output values

Numeric values with negative, positive and zero values.

Strings that are empty or non-empty

Lists that are empty or non-empty

Data files that exist or not, and are readable/writable or not

Date years that are pre-2000 or post-2000, leap years or non-leap years (a special case is 29 February 2000, which requires special processing of its own)

Dates that are in 28, 29, 30 or 31 day months

Dates/times that fall on workdays or outside office hours

Type of data file, e.g. text, formatted data, graphics, video or sound

File source/destination, e.g. hard drive, floppy drive, CD-ROM, network

An equivalence partition is a set of test cases where the successful execution of any test case in the class guarantees the success of all the others. In practice, for one set of correct test values, sets of incorrect test values are also taken, in order to verify that the system behaves correctly for the correct values and rejects the incorrect ones.

Boundary value analysis extends equivalence partitioning to include values around the edges of the partitions. As with equivalence partitioning, we assume that sets of values are treated similarly by components. However, developers are prone to making errors in the treatment of values on the boundaries of these partitions.

For example, elements of a list may be processed similarly, and they may be grouped into a single equivalence partition. However, in processing the elements, the developer may not have correct processing for either the first or last element of the list.

Boundary-values are usually the limits of the equivalence classes.

Example:

Monday and Sunday for weekdays

January and December for months

32767 and –32768 for 16-bit integers

Top-left and bottom-right cursor position on a screen

First line and last line of a printed report

1 January 2000 for two digit year processing

Strings of one character and maximum length strings
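A small Python sketch of boundary-value selection for a day-of-month validator (a made-up example in the spirit of the list above):

```python
def is_valid_day(day, days_in_month=31):
    """Toy validator under test: accepts days 1..days_in_month."""
    return 1 <= day <= days_in_month

# Boundary-value analysis: test at each boundary and just outside it,
# rather than picking arbitrary values from the middle of the range.
cases = [
    (0, False),   # just below the lower boundary
    (1, True),    # lower boundary
    (31, True),   # upper boundary
    (32, False),  # just above the upper boundary
]
for day, expected in cases:
    assert is_valid_day(day) == expected
```

The off-by-one values (0 and 32) are exactly where a developer's `<` vs. `<=` mistake would surface.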

What is Review?

Reviews are to ensure that the work products under review meet the requirements.

What is Test Report?

A Test Report is a document that describes all the test cases that were executed on the build by the testers. Its contents are Test Case ID, Description, Test Data, Expected Result, Actual Result, Status (pass/fail), Severity of the bug (H/M/L) and remarks.

What is Test Case Design?

A good test case has a high likelihood of finding an as-yet-undiscovered error.

We must design test cases that have the highest likelihood of finding the most errors with a minimum amount of time and effort.

Identify test cases for each module

Write test cases in each executable step

Design more functional test cases

Clearly identify the expected results for each test case

Design the test cases for workflow so that the test cases follow a sequence through the web application during testing. For example, a mail application such as Yahoo has to start with the registration process for new users, then signing in, composing mail, sending mail, etc.

Security is high priority in web testing. Hence document enough test cases related to application security.

Develop a traceability matrix to understand the test case coverage of the requirements.

What is Use Case?

A use case is a scenario that describes how software is to be used in a given situation.

What is System Test Plan?

This document forms the basis for system testing. This plan clearly focuses the testing that will be conducted on the test bed environment. This document along with SRS, Detailed design document forms the basis for test case document.

Test Strategy (Or) Testing process in your company?

In our company, QC is involved from the project initiation meeting along with development team and the tester for each project is identified. All these members will participate in the knowledge transfers and discussions.

The SRS and design document are the inputs to the test plan. After the test plan is prepared, it is sent to the project leader and team leads for review. The review comments are incorporated into the test plan.

Re-review is conducted if required. QA team conducts an audit and then the plan is base lined.

Now these SRS, design and test plan documents help in the preparation of test cases. Our company-approved templates are used for test case documentation. A peer review of the test cases is done within the department. The test cases are then sent for review to the development team to check that they are sufficient and that the functionality of the application is fully covered. Based on the review report sent by the technical team, the test cases are updated and/or new cases are added, and sent for re-review if required. Before test execution starts, the test bed environment is set up.

When a particular module is ready for testing, the development team places the build in VSS and notifies the testing team. The tester deploys the release on the test bed environment and starts executing the test cases. After execution is completed, the tester posts the bugs into the Defect Manager tool.

A test report is prepared with the results of the tests conducted on the build. The results are discussed with the development team, published in the report, and updated if needed.

Regression testing is conducted on the application after the fixes are made, to check that the fixes have not caused any adverse effects.

If completion criteria are satisfied, then all the test deliverables are submitted to the test lead. This is the way we do testing in our company.

Bug Life Cycle (or) Defect Tracking Flow Chart

Once a bug is found, the testing team assigns it to the development team using an automated tool called Defect Manager. When posted for the first time, the status of the bug is set to "OPEN" automatically.

After fixing the bug, the development team re-assigns it to the testing team (tester) by changing the status to "FIXED".

During regression testing, the testing team checks the same bug; if it is fixed, they close it by changing the status to "CLOSED", otherwise they re-assign it to the development team by changing the status to "RE-OPEN", and the cycle continues.

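The status transitions described above can be modeled as a small state machine (a sketch only; the real Defect Manager tool is not shown here, and the class names are invented):

```python
# Allowed status transitions in the bug life cycle described above.
ALLOWED = {
    "OPEN": {"FIXED"},               # developer fixes the bug
    "FIXED": {"CLOSED", "RE-OPEN"},  # tester verifies during regression
    "RE-OPEN": {"FIXED"},            # goes back to the developer
    "CLOSED": set(),                 # terminal state
}

class Bug:
    def __init__(self, bug_id):
        self.bug_id = bug_id
        self.status = "OPEN"  # set automatically on first posting

    def transition(self, new_status):
        if new_status not in ALLOWED[self.status]:
            raise ValueError(f"{self.status} -> {new_status} not allowed")
        self.status = new_status

bug = Bug("BUG-101")
bug.transition("FIXED")    # development team re-assigns to testing
bug.transition("RE-OPEN")  # regression test still fails
bug.transition("FIXED")
bug.transition("CLOSED")   # verified fixed
```

Encoding the allowed transitions in one table makes invalid moves (e.g. closing a bug that was never fixed) fail loudly.
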
What is Severity & Priority of a Bug? What is the difference between them?

Severity indicates the impact of each defect on testing efforts or users and administrator of the application under test. This information is used by the developers and management as the basis for assigning priority of work on defects.

Critical (Show stopper): An item that prevents further testing of the product or function under test.

For example, an installation process which does not load a component, a general protection fault (GPF) or other situation which freezes the system (requiring a reboot), or a missing menu option or security permission required to access a function under test.

High: An item that does not function as expected or designed.

For example, inaccurate calculations, the wrong fields being updated, the wrong rule, phrase or data being retrieved, an updated operation that fails to complete, slow system turn-around performance.

Medium: Annoyances that do not conform to standards and conventions.

For example, incorrect hot-key operation, an error condition which is not trapped, matching visual and text links which lead to different end points.

Low: Cosmetic issues which are not crucial to the operation of the system.

For example, Misspelled or ungrammatical text, inappropriate, inconsistent, incorrect formats such as text font, size, alignment, color, etc.

Classifications of the Bugs

Testing

Missing test cases: Test cases are not addressing all design aspects.
Ex: Test case to test the call back functionality is missing.

Inadequate/Incomplete Test cases: Test action pre-requisite is incomplete or inadequate.
Ex: Some necessary validations might not be addressed in the test cases. Inadequate test coverage.

Ambiguous Test cases: Test cases that are not clear to the reviewer.
Ex: If the test case is ‘click the key’, not specifying which key to be pressed.

Deviation from standards: Test setup described is unrealistic or not adequate to conduct the test cases.
Editorial (spelling/grammar mistakes): Any alignment or spelling mistakes in the labels, etc.
Ex: 'Assumtions' instead of 'Assumptions'

Incorrect Test cases: Test functionality and Test case not matching.
Incorrect Expected Result: If the expected result has been captured incorrectly in the test cases.
Ex: The expected result 'should display a message box with an OK button' is captured incorrectly as 'should display a decision/query box with Yes and No buttons'.

Fields not properly addressed
Ex: Instead of capturing the fields as ‘ambiguous’, capturing it as ‘Not clear’.

Duplicate/Repetition of test cases: If the same test cases are repeated for different screens having the same parent screen.
Ex: If the screen differs based on the selection we make in the parent screen, capturing of the details in the parent screen for all the child screens.

Entry Criteria:

Test bed environment

Application under test

Base line test cases

Code review report

Unit test report

Unit test sign off sheet

Exit Criteria:

All the test cases should be executed on the test bed environment.

All the defects reported in the test report should be closed.

There should not be any high-severity or medium-severity bugs outstanding.

Low severity bugs should be tracked to closure.

Input Documents for Testing

SRS (Software Requirement Specifications)

Detailed Design

Test Plan

Test Case

Configuration Item

An item that is eligible for configuration, i.e., that is uploaded into the configuration management tool.

Ex: VSS (Visual SourceSafe)

Configurable Items in Testing

Test Plan

Test Cases

Review Report (Test Cases)

Test Report

What is Web Testing?

Web testing is the testing of internet or intranet web applications where the client interface is a web browser. The browser can be anything: Internet Explorer, Netscape Navigator, Opera, etc.

Approach:

Any testing process will start with the planning of test (Test Plan), building a strategy (how to test), preparation of test cases (what to test), execution of test cases (testing) and end in reporting the results (defects).

In practice the process is no different; what changes are the priority areas that need to be set for web testing. Key focus areas such as compatibility, navigation, user interaction, usability, performance, scalability, reliability and availability can be considered during the testing phase.

What is the difference between Client Server Testing & Web Testing?

Client Server Testing vs. Web Testing:

Client/server transactions are limited; web transactions are unpredictable and unlimited.

In client/server systems, user behavior is predictable and controllable; on the web it is unpredictable and uncontrollable.

Client/server system variables are the LAN and centralized hardware/software; web system variables include firewalls, routers and hosting-company caching systems.

Client/server failures are noticed internally; web failures are noticed externally.

Software Metrics

“Metrics provide information regarding the performance of a project/product.”
The data collected during different phases is used for managing the project and product development.
It can be used by the project/product development personnel for estimation.
Collected metrics are stored in the process/metrics database.
The Project Manager uses past project data to estimate size, effort, schedule, resources and cost for the current project as well as its QA activities.
The Project Leader uses data from similar projects to arrive at a productivity factor in order to carry out the estimation.
Measurements are captured at the end of each phase.

Inputs for Metrics

Timesheet
Project Management Plan
Project status Report
Project Closure Report
Review Report
Test Case
Test Report
Training Plan
Training Feedback Form
User Requirement Document
Internal Quality Audit
Audit Calendar
Configuration Status Accounting

Metrics Plan

Metrics plan describes Organization’s goals for project/product quality, productivity, product development cycle schedule, effort, Cost, Size, Measurement, Project Standard Software process, Data capturing method, Data Control Points etc.

Testing Metrics
Defect Density

Formula: 100 * (Total no. of defects) / Size

Units: In Nos. (%)

Periodicity to calculate: Project Closure

Residual Defect Density

Formula: No. of defects found after system testing /Size (KLOC)

Units: In Nos. (%)

Periodicity to calculate: Project Closure

Effort Variance

Formula: 100 * (Actual person-hours expended - Estimated person-hours) / Estimated person-hours

Units: In Nos. (%)

Periodicity to calculate: Phase-wise

Size Variance

Formula: 100*(Actual Size- Estimated Size)/ Estimated Size

Units: FP (Functional Points) or KLOC (Kilo Lines of Code)

Periodicity to calculate: End of every phase & project closure

Schedule Variance

Formula: 100*(Actual Elapsed Duration- Estimated Duration)/ Estimated Duration

Units: Working days

Periodicity to calculate: Phase-Wise

Productivity

Formula: Size/Effort

Units: FP/Person days (days=working days)

Periodicity to calculate: Project Closure
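The formulas above can be written directly as functions; this is only a sketch, with illustrative names, mirroring the metric definitions given in this section:

```python
def defect_density(total_defects, size_kloc):
    """100 * (total defects) / size, per the Defect Density formula."""
    return 100 * total_defects / size_kloc

def effort_variance(actual_hours, estimated_hours):
    """100 * (actual - estimated) / estimated person-hours."""
    return 100 * (actual_hours - estimated_hours) / estimated_hours

def size_variance(actual_size, estimated_size):
    """100 * (actual - estimated) / estimated size (FP or KLOC)."""
    return 100 * (actual_size - estimated_size) / estimated_size

def schedule_variance(actual_days, estimated_days):
    """100 * (actual - estimated) / estimated elapsed duration."""
    return 100 * (actual_days - estimated_days) / estimated_days

def productivity(size_fp, effort_person_days):
    """Size / Effort: function points delivered per person-day."""
    return size_fp / effort_person_days
```

For example, a project that spent 120 person-hours against an estimate of 100 has an effort variance of 20%.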

Testing Tips

Testing is the process of identifying defects, where a “defect” is any variance between actual and expected results.

Editable fields checking and validation:

Valid/invalid characters/strings and data in all editable fields
Valid minimum/maximum/mid-range values in fields
Null strings (no data) in required fields
Record length (character limit) in text/memo fields
Cut/Copy/Paste into/from fields where possible

Non-editable fields checking:

Check all text/spelling in warnings and error messages/dialogs
Invoke/check all menu items and their options

Application Usability

Appearance and layout (placement and alignment of objects on screen)
User interface test (open all menus, check all items)
Basic functionality checking (File > Open, Save, etc.)
Right mouse click sensitivity
Resize/minimize/maximize/restore application windows (check minimum application size)
Scroll ability where applicable (scroll bars, keyboard, auto-scrolling)
Keyboard and mouse navigation, highlighting, dragging, drag/drop
Print in landscape and portrait modes
Check F1, What's This, Help menu
Shortcut and accelerator keys
Tab key order and navigation in all dialog boxes and menus

Basic Compatibility:

16-bit OS (Win 3.x, Win95, OS/2, WfW 3.x)

32-bit OS (Win95/98/2000/NT), UNIX

What is the base line for Performance Testing?

Functional and performance document or the SRS document.

What is the Load Balance of the application?

Distributing the load across different servers.

Ex: Suppose a server can handle 100 users at a time. If the number of users exceeds the limit of 100, the additional users are automatically transferred to another server to balance the load.
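The overflow idea in the example can be sketched as a toy Python class (the class and server names are hypothetical; real load balancers are far more sophisticated):

```python
class LoadBalancer:
    """Toy load balancer: fills each server up to its capacity,
    then overflows new users to the next server."""
    def __init__(self, servers, capacity=100):
        self.servers = servers
        self.capacity = capacity
        self.load = {s: 0 for s in servers}

    def route(self):
        # Send the new user to the first server with spare capacity.
        for s in self.servers:
            if self.load[s] < self.capacity:
                self.load[s] += 1
                return s
        raise RuntimeError("all servers at capacity")

lb = LoadBalancer(["server-1", "server-2"], capacity=100)
for _ in range(100):   # the first 100 users fill server-1
    lb.route()
assert lb.route() == "server-2"   # user 101 overflows to server-2
```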

What is your Bug clearance ratio?

(Ratio between the valid and invalid bugs)

96:100

How many test cases can you prepare in a day?

60-70 Test cases

How many Test Cases you can Execute in a day?

70-90 Test cases

Which tool are you using for Configuration Management?

Visual Source Safe (VSS) from Microsoft

What is Six Sigma?

Six Sigma is a customer-focused philosophy for improving quality that was first developed by Motorola in the 1980s.

What is Process & Procedure?

A process is tested with the intent to improve it. A process is a flow of tasks.

A procedure is tested with the intent to increase its quality. A procedure is designed to perform an intended function; how you implement the process is called the procedure.

How do you rate yourself in Testing?

7-8

We test the application against the customer requirements. How do you proceed if there are no requirements?

Based on target users.

What is the best tester to developer ratio?

A common benchmark is a tester-to-developer ratio of roughly 40% (about two testers for every five developers).

Some companies have a 1:1 ratio (at one point, Microsoft’s OS development team was an example of that -- in fact they also had an integration team, which meant that there were more testers than developers). There are companies where they have only one or two integration testers and all the developers are required to run a suite of integration tests before they check their code to make sure that they have not introduced defects. In these environments, the ratio can be 1:25. Some locations even have no SQA team or department at all, so the ratio is 0:1.

Which approach of integration testing are you following?

Top – down

Have you prepared test plan?

No, but I was involved in test plan preparation.

What makes a good test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical and non-technical people (customers, management) is useful. Previous software development experience can be helpful, as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developer's point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess the high-risk areas of an application on which to focus testing efforts when time is limited.

What if there isn’t enough time for thorough testing?

Use risk analysis to determine where testing should be focused.

Which functionality is most important to the project’s intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest financial impact on users?
What kinds of problems would cause the worst publicity?
What kinds of tests could easily cover multiple functionalities?
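The risk-analysis questions above can be turned into a simple prioritization sketch: score each feature's likelihood of failure and its impact, and test the highest-risk items first. The feature names and scores below are illustrative assumptions, not real data.

```python
# Hypothetical risk-based test prioritization: risk = likelihood * impact,
# with both scored 1-5. When time is short, test the highest risk first.

features = [
    {"name": "checkout",     "likelihood": 4, "impact": 5},
    {"name": "search",       "likelihood": 3, "impact": 3},
    {"name": "profile page", "likelihood": 2, "impact": 1},
]

for f in features:
    f["risk"] = f["likelihood"] * f["impact"]

# Highest-risk features come first in the test order.
test_order = sorted(features, key=lambda f: f["risk"], reverse=True)
print([f["name"] for f in test_order])
# → ['checkout', 'search', 'profile page']
```

The exact scoring scale matters less than agreeing on it with stakeholders, since the answers to the questions above (visibility, financial impact, publicity) are what feed the likelihood and impact numbers.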

What can be done if requirements are changing continuously?

This is a common problem and a major headache. Some ways to cope:

Work with the project’s stakeholders early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance, if possible.
It is helpful if the application’s initial design allows for some adaptability so that later changes do not require redoing the application from scratch.
If the code is well commented and well documented this makes changes easier for the developers.
Negotiate to allow only easily implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
Focus less on detailed test plans and test cases and more on ad-hoc testing.
What is the difference between ISO & CMM?

ISO 9000 is a generic standard; SEI-CMM is a maturity model.
ISO 9000 is applicable to all kinds of organizations; SEI-CMM applies only to software organizations.
ISO 9000 contains 20 clauses; SEI-CMM contains 18 Key Process Areas (KPAs).
ISO 9000 documentation is called the Quality (System) Manual; SEI-CMM documentation is called work products and artifacts.
An ISO certification audit is like an examination; the final CMM assessment is collaborative.
ISO certification is a pass/fail outcome; the result of a CMM assessment is a quantitative score of the maturity of the software development process.
What kinds of testing do you perform?
Explain your Quality Effort in your company?
Describe Quality as you understand it?
What is the difference between Integration Testing & System Testing?
What documents would you need for QA?
Explain your involvement in the Test Plan for your project?
What is the difference between Management system and Quality?
What is Conformance & Non-Conformance?
What is Corrective Action & Preventive Action?
What is Quality Monitoring & Quality Measurement?
What is Audit?
How do you rate yourself in achieving deadline?