Interview Questions and Answers for QTP (Quick Test Professional)

Tuesday, November 29, 2005

Checklist for Acceptance Test

Test Preparation

- Has the plan for acceptance testing been submitted?
- Have all possible interactions been described?
- Are all input data required for testing available?
- Is it possible to automatically document the test runs?
- Have the customer specific constraints been considered?
- Have you defined acceptance criteria (e.g. performance, portability, throughput, etc.) on which the completion of the acceptance test will be judged?
- Has the method of handling problems detected during acceptance testing, and their disposition, been agreed upon between you and the customer?
- Have you defined the testing procedure, e.g. benchmark test?
- Have you designed test cases to discover contradictions, if any, between the software product and the requirements?
- Have you established test cases to review if timing constraints are met by the system?

Test Execution and Evaluation

- Has the acceptance test been performed according to the test plan?
- Have all steps of the test run been documented?
- Have the users reviewed the test results?
- Do the services provided by the system conform to the user requirements stated earlier?
- Have the users judged acceptability according to the predetermined criteria?
- Has the user signed off on the output?

QTP and Winrunner Questions and Answers
Contact: qualityvista @ gmail.com


Tuesday, November 22, 2005

Writing effective test cases

Testing cannot ensure the complete eradication of errors, and each type of testing has its own limitations. Exhaustive black box and white box testing, for instance, are rarely as exhaustive as they need to be in practice, owing to resource constraints. While testing a software product, one of the most important tasks is the design of effective test cases. In black box testing, a tester tries to ensure quality output by identifying the subset of all possible test cases that has the highest probability of detecting the most errors.

A test case describes how you intend to empirically verify that the software being developed conforms to its specifications. In other words, the author needs to cover all the possibilities required to show that the software correctly carries out its intended functions. The test case should be written with enough clarity and detail that an independent tester can carry out the tests properly.

Each test case should ideally specify the actual input data to be provided and the expected output. The author of a test case should also mention any manual calculations necessary to determine the expected outputs. Say a program converts Fahrenheit to Celsius: having the conversion formula in the test case makes it easier for the tester to verify the result in black box testing. Test data can be tabulated as a column of input items alongside a corresponding column of expected outputs.
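The tabulated input/expected-output idea can be sketched in code. This is a minimal illustration, not from the original post; the `fahrenheit_to_celsius` function and the chosen data rows are assumptions made for the example.

```python
# Hypothetical converter under test, using C = (F - 32) * 5 / 9.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5.0 / 9.0

# Test data as the text suggests: input column, expected-output column.
test_data = [
    (32.0, 0.0),     # freezing point of water
    (212.0, 100.0),  # boiling point of water
    (98.6, 37.0),    # normal body temperature
    (-40.0, -40.0),  # the two scales intersect here
]

for f, expected_c in test_data:
    actual = fahrenheit_to_celsius(f)
    # Compare with a small tolerance to allow for floating-point rounding.
    assert abs(actual - expected_c) < 0.01, (f, actual, expected_c)
```

Keeping the formula in a comment beside the table lets a black box tester re-derive any expected value by hand.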

Random input testing, by contrast, has little chance of detecting most of the defects. Hence, the author needs to give more attention to certain details.

It requires a thought process that allows the tester to select a set of test data more intelligently. The tester tries to cover as large a set of error probabilities as possible, in other words generating as many scenarios for the test cases as he can. Besides that, he looks for other possible errors to ensure that the document covers the presence or absence of errors in the product.

The focus here is on the approach an author/tester takes towards black box testing.

Black box Testing:
Functional testing, also known as black box testing, addresses the overall behavior of the program by testing transaction flows, input validation and functional completeness. Four techniques are significantly important for deriving a minimum set of test cases and the input data for them.

Equivalence partitioning:
An equivalence class is a subset of a larger class of input data. One representative value from each class is used for testing rather than undertaking exhaustive testing of every value in the larger set. For example, a payroll program that edits professional tax deduction limits between Rs. 100 and Rs. 400 would have three equivalence partitions:

Less than Rs. 100 (invalid class)
Between Rs. 100 and Rs. 400 (valid class)
Greater than Rs. 400 (invalid class)

If one test case from an equivalence class results in an error, all other test cases in that class would be expected to produce the same error. The tester therefore needs to write very few test cases, which saves precious time and resources.
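The three partitions above can be exercised with one representative value each. A sketch follows; the validation rule (accept Rs. 100 to Rs. 400 inclusive) and the function name are assumptions, since the post does not define the actual payroll code.

```python
# Hypothetical validator for the professional tax deduction limit.
# Assumes the Rs. 100 to Rs. 400 range is inclusive at both ends.
def is_valid_tax_deduction(amount):
    return 100 <= amount <= 400

# One representative value per equivalence partition is enough.
partitions = [
    (50,  False),  # invalid class: less than Rs. 100
    (250, True),   # valid class: between Rs. 100 and Rs. 400
    (500, False),  # invalid class: greater than Rs. 400
]

for amount, expected in partitions:
    assert is_valid_tax_deduction(amount) == expected, amount
```

Three test cases stand in for the entire input space, which is the whole point of the technique.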

Boundary Value Analysis:
Experience shows that test cases which explore boundary conditions have a higher payoff than test cases that do not. Boundary conditions are the situations directly on, just above and just beneath the edges of input and output equivalence classes.

This technique consists of generating test cases, and the relevant set of data, that focus on the input and output boundaries of a given function. In the professional tax example above, boundary value analysis would derive test cases for:

Just below and above the lower boundary (Rs. 99 and Rs. 101)
On the boundaries (Rs. 100 and Rs. 400)
Just below and above the upper boundary (Rs. 399 and Rs. 401)
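Those six boundary cases can be checked directly. As before, the inclusive Rs. 100 to Rs. 400 rule and the `is_valid_tax_deduction` name are assumed for illustration.

```python
# Hypothetical validator; the Rs. 100-400 range is assumed inclusive.
def is_valid_tax_deduction(amount):
    return 100 <= amount <= 400

# Boundary value analysis cases for the tax-limit example.
boundary_cases = [
    (99,  False), (101, True),   # just below / just above the lower boundary
    (100, True),  (400, True),   # exactly on the boundaries
    (399, True),  (401, False),  # just below / just above the upper boundary
]

for amount, expected in boundary_cases:
    assert is_valid_tax_deduction(amount) == expected, amount
```

A classic off-by-one bug (e.g. `100 < amount` instead of `100 <= amount`) would fail exactly the Rs. 100 case here, which is why boundary cases pay off.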

Error Guessing:
This technique is based on the theory that test cases can be developed from the intuition and experience of the test engineer. Some people adapt very naturally to program testing; we can say these people have a knack for "smelling out" errors without following any particular methodology.
This "error guessing" quality enables a tester to practice testing that is more efficient and result-oriented than a test case alone could guide.
It is difficult to give a procedure for the error guessing technique, since it is largely an intuitive and ad hoc process. For example, where one of the inputs is a date, the test engineer may try February 29, 2000 or 9/9/99.
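The date example works because naive leap-year code often rejects February 29, 2000 (2000 is divisible by 400, so it is a leap year). A sketch using the standard library's date validation; the `parse_date` wrapper is hypothetical, written only to make the guessed inputs checkable.

```python
from datetime import date

# Hypothetical strict date checker built on the stdlib date constructor,
# which raises ValueError for impossible calendar dates.
def parse_date(year, month, day):
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

# Classic error-guessing inputs for a date field:
assert parse_date(2000, 2, 29)        # 2000 IS a leap year (divisible by 400)
assert not parse_date(1999, 2, 29)    # 1999 is not a leap year
assert parse_date(1999, 9, 9)         # 9/9/99, a common "magic" legacy value
```

An error guesser picks these because hand-rolled calendar logic historically gets the century rule wrong and because 9/9/99 was once used as a sentinel value.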

Orthogonal Array:
This technique is particularly useful in finding errors associated with region faults, an error category associated with faulty logic within a software component.

For example, suppose there are three parameters (A, B and C), each of which has one of three possible values. Testing every combination would require 3×3×3 = 27 test cases. But because of the way the program works, it is more likely that a fault will depend on the values of only two parameters. In that case the fault may occur for each of these three test cases:
1. A=1, B=1, C=1
2. A=1, B=1, C=2
3. A=1, B=1, C=3

Since the value of C seems to be irrelevant to the occurrence of this particular fault, any one of the three test cases will suffice. Based on this assumption, the test engineer may derive only nine test cases, which show all possible pairs across the three variables. The array is orthogonal because, for each pair of parameters, every combination of their values occurs exactly once.

That is, all possible pairwise combinations between parameters A & B, B & C and C & A are shown. Since we are thinking in terms of pairs, we say this array has a strength of 2. It does not have a strength of 3, because not all three-way combinations occur (A=1, B=2, C=3, for example, does not appear), but it covers the pairwise possibilities, which is what we are concerned about.
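The nine test cases described are the standard L9 orthogonal array. The sketch below lists them and verifies the strength-2 property the text defines: for every pair of parameter columns, all nine value combinations appear exactly once.

```python
from itertools import combinations

# L9 orthogonal array for three parameters, each with values 1-3.
# Each row is one test case (A, B, C).
L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]

# Strength 2: for each pair of columns, all 3x3 = 9 value pairs occur once.
for i, j in combinations(range(3), 2):
    pairs = {(row[i], row[j]) for row in L9}
    assert len(pairs) == 9

# Not strength 3: the three-way combination A=1, B=2, C=3 never appears.
assert (1, 2, 3) not in L9
```

Nine cases instead of twenty-seven, while still covering every two-parameter interaction, is the payoff of the technique.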

White box Testing:
Structural testing, also known as white box testing, includes path testing, code coverage testing and analysis, logic testing, nested loop testing and many similar techniques.

1. Statement Coverage: execute every statement at least once.
2. Decision Coverage: execute each decision outcome (true and false) at least once; treat each loop as a two-way decision by exercising it zero times and at least once.
3. Condition Coverage: execute each condition in a decision with all possible outcomes at least once.
4. Decision / Condition Coverage: satisfy decision coverage and condition coverage simultaneously.
5. Multiple Condition Coverage: execute all possible combinations of condition outcomes in each decision, invoking each point of entry at least once.
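The criteria above can be contrasted on one tiny function. The `classify` function and its inputs are invented purely for illustration; the point is how the criteria demand progressively more test cases for a single two-condition decision.

```python
# Hypothetical function: one decision made of two conditions.
def classify(a, b):
    if a > 0 and b > 0:
        return "both positive"
    return "other"

# Statement coverage: reach both returns, e.g. (1, 1) and (-1, -1).
# Decision coverage: the `if` must evaluate both True and False overall.
# Condition coverage: each of `a > 0` and `b > 0` must be True and False.
# Multiple condition coverage: all four combinations (T,T) (T,F) (F,T) (F,F).
multiple_condition_tests = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

results = [classify(a, b) for a, b in multiple_condition_tests]
assert results == ["both positive", "other", "other", "other"]
```

Note that the pair (1, 1) and (-1, -1) alone already satisfies statement, decision and condition coverage here, but only the full set of four satisfies multiple condition coverage.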

A tester would choose a combination of the above techniques appropriate to the application and the available time frame. A very detailed focus on all of these aspects can at times yield too much vague information.


Wednesday, November 16, 2005

System Testing

This type of test involves examination of the whole system: all the software components, all the hardware components and any interfaces.
The whole system is checked not only for validity but also for whether it meets its objectives. The complete system is configured in a controlled environment, and test cases are developed to simulate real-life scenarios.

System testing should include recovery testing, security testing, stress testing and performance testing.

Recovery testing uses test cases designed to examine how easily and completely the system can recover from a disaster (power shut down, blown circuit, disk crash, interface failure, insufficient memory, etc.). It is desirable to have a system capable of recovering quickly and with minimal human intervention. It should also have a log of activities happening before the crash (these should be part of daily operations) and a log of messages during the failure (if possible) and upon re-start.

Security testing involves testing the system in order to make sure that unauthorized personnel or other systems cannot gain access to the system and information or resources within it. Programs that check for access to the system via passwords are tested along with any organizational security procedures established.

Stress testing encompasses creating unusual loads on the system in an attempt to break it. The system is monitored for performance loss and susceptibility to crashing during the load times. If it does crash as a result of high load, that simply provides one more recovery test.

Performance testing involves monitoring and recording performance levels during regular, low-stress and high-stress loads. It tests the amount of resource usage under the conditions just described and serves as a basis for forecasting any additional resources needed in the future. It is important to note that performance objectives should have been developed during the planning stage; performance testing assures that these objectives are being met. These tests may also be run in the initial stages of production to compare actual usage against the forecast figures.


Tuesday, November 15, 2005

More Testing types: Funny

Have you ever done testing to obsession?
Well, we did it once, and during that particularly long and painful bout of regression testing we came up with
this list of other types of testing we'd like not to see:


Aggression Testing: If this doesn't work, I'm gonna kill somebody.
Compression Testing: [ ]
Confession Testing: Okay, Okay, I did program that bug.
Depression Testing: If this doesn't work, I'm gonna kill myself.
Digression Testing: Well, it works, but can I tell you about my truck...
Expression Testing: #@%^&*!!! a bug.
Obsession Testing: I'll find this bug if it's the last thing I do.
Oppression Testing: Test this now!
Repression Testing: It's not a bug it's a feature.
Suggestion Testing: Well, it works, but wouldn't it be better if...


Monday, November 14, 2005

Responsibilities of a Test Manager

Responsibilities of a Test Manager:

− Manage the Testing Department.
− Allocate resources to projects.
− Review Testers' weekly status reports and take necessary actions.
− Escalate Testers' issues to Senior Management.
− Prepare estimates for testing projects.
− Enforce adherence to the company's quality processes and procedures.
− Decide on the procurement of software testing tools for the organization.
− Coordinate between the various departments and groups.
− Provide technical support to the Testing team.
− Continuously monitor and mentor Testing team members.
− Review test plans and test cases.
− Attend weekly project meetings and provide inputs from the Testers' perspective.
− Immediately notify/escalate problems to the Senior Test Manager / Senior Management.
− Ensure processes are followed as laid down.


Friday, November 11, 2005

Responsibilities of a Test Lead

Responsibilities of a Test leader:

− Prepare the Software Test Plan.
− Check / Review the Test Cases documents (System, Integration and User Acceptance) prepared by test engineers.
− Analyze requirements during the requirements analysis phase of projects.
− Keep track of the new requirements from the Project.
− Forecast / Estimate the project's future requirements.
− Arrange the hardware and software requirements for the Test Setup.
− Develop and implement test plans.
− Escalate the issues about project requirements (Software, Hardware, Resources) to Project Manager / Test Manager.
− Escalate the issues in the application to the Client.
− Assign tasks to all Testing Team members and ensure that all of them have sufficient work in the project.
− Organize the meetings.
− Prepare the Agenda for the meeting, for example: Weekly Team meeting etc.
− Attend the regular client call and discuss the weekly status with the client.
− Send the Status Report (Daily, Weekly etc.) to the Client.
− Frequent status check meetings with the Team.
− Communication by means of Chat / emails etc. with the Client (If required).
− Act as the single point of contact between Development and Testers for iterations, Testing and deployment activities.
− Track and report upon testing activities, including testing results, test case coverage, required resources, defects discovered and their status, performance baselines, etc.
− Assist in performing any applicable maintenance to tools used in Testing and resolve issues if any.
− Ensure content and structure of all Testing documents / artifacts is documented and maintained.
− Ensure that all testing processes and procedures are documented, implemented, monitored, and enforced as per the standards defined by the organization.
− Review various reports prepared by Test engineers.
− Log project related issues in the defect tracking tool identified for the project.
− Check for timely delivery of different milestones.
− Identify Training requirements and forward it to the Project Manager (Technical and Soft skills).
− Attend weekly Team Leader meeting.
− Motivate team members.
− Organize / Conduct internal trainings on various products.


Tuesday, November 08, 2005

Boundary Value Analysis (BVA)

Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie at the data extremes. Boundary values include the maximum, the minimum, values just inside the boundaries, values just outside the boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.

E.g. if a text box is supposed to accept values in the range 1 to 100, try providing the following values:
1) 1, 100, and values between 1 and 100 on a sampling basis
2) 0 and 101
3) Negative values
4) Extremely large values
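The four groups above can be written out as one table of cases. The `accepts` validator below is an assumption standing in for the text box's real validation logic.

```python
# Hypothetical validator for a text box accepting 1 to 100 inclusive.
def accepts(value):
    return 1 <= value <= 100

# Boundary value analysis cases, one group per list item in the text.
cases = [
    (1, True), (100, True), (50, True),   # boundaries plus a sampled interior value
    (0, False), (101, False),             # just outside each boundary
    (-5, False),                          # a negative value
    (10**9, False),                       # an extremely large value
]

for value, expected in cases:
    assert accepts(value) == expected, value
```

If the validator stores the value in a fixed-width field, the extremely large input is also what exposes overflow, which is why it belongs in the set.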
