Interview Questions and Answers for QTP (Quick Test Professional)

Monday, January 16, 2006

WinRunner Interview Questions

1. How do you recognize objects at runtime in a new build version (test suite) compared against the old GUI map?
2. Wait(20) - What are the minimum and maximum times this synchronization statement will wait, given that the global default timeout is set to 15 seconds?
3. Where in the user-defined function library should a new error code be defined?
4. In a modular test tree, each test receives the values for the parameters passed from the main test. These parameters are defined in the Test Properties dialog box of each test. Referring to the above, in which file are changes made in the Test Properties dialog saved?
5. What is the scripting process in Winrunner?
6. How many scripts can we generate for one project?
7. What is the command in WinRunner to invoke the IE browser? And once the browser is open, is there a unique way to identify it?
8. How do you load default comments into a new script, the way IDEs do?
9. What new features were added in QTP 8.0 compared with QTP 6.0?
10. When would you opt for automation?
11. How to test the stored procedure?
12. What is the use of GUI files in WinRunner?
13. Without using a data-driven test, how can we test the application with different sets of inputs?
14. How do you load a compiled module inside another compiled module?
15. Can you describe the bug life cycle?
16. How do you find the length of an edit box through WinRunner?
17. What is the file type of WinRunner test files, and what is its extension?
18. What is a candidate release?
19. What types of variables can be used within a TSL function?

More WinRunner Interview Questions and Answers

QTP and Winrunner Questions and Answers
Contact: qualityvista @ gmail.com


Wednesday, January 04, 2006

Have I Tested Enough?

One of the most intriguing and difficult questions to answer in any software development life cycle is whether the software is defect free. No matter how mature the software application is, it is next to impossible to say that it is defect free. We can answer this question to a certain degree by collecting data on the performance and reliability of the software, through the activity commonly known as 'testing'.

This gives rise to another question that is not so easily answered: has the software been tested enough? Unfortunately, there is no formula that can answer this question in black and white, and hence the tester has to rely on certain vital signs to decide when to release the code. Before we can answer these questions, it is important to understand the objectives of testing.

Some of the popular views of the objectives of testing are:

- Testing is a process of executing a program with the intent of finding an error.
- A good test case is one that has a high probability of finding an as-yet undiscovered error.
- A successful test is one that uncovers an as-yet undiscovered error.

The above axioms are taken from Glenford Myers' classic book The Art of Software Testing, and they give us an altogether different perspective on testing. Testing has long been regarded as an activity that shows the program works, but its actual objective is to find the many ways in which the program can go wrong and to fix those 'bugs'. Another important thing to understand here is that testing can only show the existence of bugs by uncovering them, never their absence. A detected bug is good news; an undetected bug is a cause for concern, simply because the cost of fixing it grows almost exponentially over time. Therefore, the sooner a bug is detected, the better it is for your financial coffers.

The whole process of testing involves various testing strategies, procedures, documentation, planning and execution, defect tracking, metrics collection, and so on. No wonder that, with all these activities, testing is given the status of a phase in its own right within the software development life cycle. Testing is a critical activity that can make the difference between a satisfied customer and a lost one; that is why it is of paramount importance that the testing team is a creative unit, well focused on the job of testing. There are many techniques a testing team can adopt to test a particular program, piece of software, or requirement; we will not get into the details of those methodologies. The scope of this discussion is to look for those 'vital signs' that help us make the call to mark the end of a testing phase.

Following are the vital signs:

- Test Planning
- Test Case Design and Coverage
- New Requirements
- Defect Log
- Regression Test Suite
- Project Deadlines and Test Budget

Now let us look at each of these vital signs individually so that we get a better understanding of how we can use them as vital signs.

Test Planning
It is essential to have an effective test plan. The Test Plan should clearly indicate the entry and exit criteria; with these criteria well defined, it is easier to analyze the stability of the software once tests have been performed. Requirements traceability analysis can be done to make sure that every requirement identified as testable has corresponding test cases written for it; a traceability matrix can be used for this exercise. It is also important to prioritize the test cases, based on risk, into mandatory, required, and desired, so that the important ones are executed first and the effect of time constraints on the critical test cases is reduced.
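The traceability-matrix exercise described above can be sketched in a few lines of code. This is an illustrative sketch only, with made-up requirement and test-case IDs, not anything from the article; it maps each requirement to the test cases that cover it and flags requirements with no coverage.

```python
# Illustrative traceability-matrix sketch (hypothetical IDs, not from the article).
requirements = ["REQ-1", "REQ-2", "REQ-3"]
test_cases = {
    "TC-01": ["REQ-1"],            # each test case lists the requirements it verifies
    "TC-02": ["REQ-1", "REQ-2"],
}

def build_matrix(requirements, test_cases):
    """Return {requirement: [covering test cases]} for every requirement."""
    matrix = {req: [] for req in requirements}
    for tc, reqs in test_cases.items():
        for req in reqs:
            if req in matrix:
                matrix[req].append(tc)
    return matrix

matrix = build_matrix(requirements, test_cases)
uncovered = [req for req, tcs in matrix.items() if not tcs]
print(uncovered)  # REQ-3 has no test case yet, so it needs attention before testing starts
```

Requirements that come back with an empty list are exactly the gaps the traceability analysis is meant to expose.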

Test Case Design and Coverage
Test coverage deals with making sure that the test cases designed for the requirements of a particular release of a software application are optimal and address the requirements correctly. The traceability matrix made during test planning gives us an idea of whether there are corresponding test cases for every requirement, whereas a coverage analysis tells us whether the drafted test cases are the right ones and whether they are enough. The most effective way of analyzing this is through reviews or inspections, which involve meetings in which the participants review documents such as the test cases. The test cases are sent to the reviewers in advance, and during the course of the meeting any inadequacies in them can be dug out. Testing techniques such as boundary value analysis, equivalence partitioning, and valid/invalid test cases can be incorporated while designing test cases to address the coverage issue.
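The two techniques named above are mechanical enough to sketch. Assuming a numeric input field with an inclusive valid range, a hedged illustration of boundary value analysis and equivalence partitioning might look like this (the helper names and the partition labels are my own, not the article's):

```python
# Boundary value analysis: for an input field with an inclusive valid
# range [lo, hi], test the values at and immediately around each boundary.
def boundary_values(lo, hi):
    """Return the classic boundary test inputs for the range [lo, hi]."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# Equivalence partitioning: one representative value per partition is enough,
# since all values in a partition are expected to behave the same way.
def partitions(lo, hi):
    return {"below_range": lo - 1, "in_range": (lo + hi) // 2, "above_range": hi + 1}

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
print(partitions(1, 100))       # one invalid-low, one valid, one invalid-high input
```

Six boundary inputs plus one representative per partition already give far better coverage of a range check than ad hoc values would.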

New Requirements
Most software releases have a set of new requirements or features implemented to enhance the software's capabilities, and when testing a particular release, the concentration is mostly on these new requirements. Prioritizing the execution of their test cases helps the tester stay focused on the job at hand.

A high priority can be assigned to these test cases when planning test execution. Sometimes software releases go through many changes in requirements and additions of new ones. Though far from ideal, the business demands these additions and changes to already frozen requirements, which is exactly why software development is such a dynamic process. In such a scenario, it is essential for the tester to keep track of the requirement changes and design test cases for them. There may not be time to go through the entire process of test planning and reviews at this juncture, but documenting these changes lets the testers know whether each requirement has been tested or not.

Defect Log
The defect log gives a clear indication of the quality and stability of the product. If severe defects are still open, the quality of the product is not yet up to the mark, and more testing is needed to uncover further severe defects. On the other hand, if no high-severity defects are open and the number of low-severity defects is relatively low, the development team can negotiate with the testing team to move the software into production. A proper defect-tracking system or tool is advisable for keeping the defect log and generating reports on defect status.
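The open-high-severity rule above is effectively a release gate, and it can be sketched as a small check over the defect log. The field names, severity labels, and threshold here are assumptions for illustration, not anything a particular defect tracker prescribes:

```python
# Release-gate sketch over a defect log (hypothetical fields and threshold):
# block release while any high-severity defect is open, and tolerate only a
# small number of open low-severity defects.
defects = [
    {"id": 101, "severity": "high", "status": "closed"},
    {"id": 102, "severity": "low",  "status": "open"},
    {"id": 103, "severity": "high", "status": "open"},
]

def release_ready(defects, max_open_low=5):
    """True only if no high-severity defect is open and open low-severity defects are few."""
    open_high = [d for d in defects if d["status"] == "open" and d["severity"] == "high"]
    open_low = [d for d in defects if d["status"] == "open" and d["severity"] == "low"]
    return len(open_high) == 0 and len(open_low) <= max_open_low

print(release_ready(defects))  # False: defect 103 is high severity and still open
```

Once defect 103 is closed, only one open low-severity defect remains and the gate opens — which mirrors the negotiation the article describes.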

Regression Test Suite
While planning the testing phase, it is important to plan the regression testing cycles as well. A minimum of two to three regression cycles is necessary to gain confidence in the stability of the software. The advantages of regression testing are twofold:

- Uncovers any defects that have gone unnoticed in previous builds
- Uncovers defects that arise due to fixes to existing defects

Automation tools can be used to write scripts that perform the regression tests, reducing the cycle time for testing. Assigning criticality to test cases helps in choosing and creating a regression test suite and in prioritizing the execution of manual test cases, but automated scripts are the best way to run regression tests and record a log of the results. If no defects are found while running these scripts, we can be reasonably assured that the existing functionality is stable.
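The criticality-based selection mentioned above can be sketched simply: tag each test case with a criticality level and pick everything at or above an agreed cutoff, most critical first. The test names, levels, and cutoff below are illustrative assumptions:

```python
# Sketch: build a regression suite by criticality (hypothetical data).
# Level 1 = most critical; only levels up to the cutoff enter the suite.
test_cases = [
    {"name": "login",         "criticality": 1},
    {"name": "report_export", "criticality": 3},
    {"name": "checkout",      "criticality": 1},
    {"name": "profile_edit",  "criticality": 2},
]

def regression_suite(test_cases, max_criticality=2):
    """Return the mandatory/required tests, most critical first."""
    picked = [tc for tc in test_cases if tc["criticality"] <= max_criticality]
    return sorted(picked, key=lambda tc: tc["criticality"])  # stable sort keeps input order within a level

suite = regression_suite(test_cases)
print([tc["name"] for tc in suite])  # ['login', 'checkout', 'profile_edit']
```

When the cycle time shrinks, lowering the cutoff shrinks the suite while still running the tests that matter most.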

Project Deadlines and Test Budget
In most real-life scenarios, the end of testing is defined by the project deadlines and the depletion of the testing budget. Though many software products and services go into production with open issues negotiated away due to time constraints, it is advisable to utilize the test budget fully. Since the project deadlines and the budget are known beforehand, testing can be planned effectively and all testing resources utilized optimally.

Finally, having a mechanism that rates the confidence level of each of these vital signs on a scale of 1 to 10 will clearly indicate the quality of the testing activity and the ability to capture critical defects sooner rather than later. A simple bar graph with the vital signs on the x-axis and values 1 to 10 on the y-axis is sufficient for this. If each bar is above a certain minimum level, mutually agreed by the development and testing teams, then we can safely conclude that most of the testing has been, and will be, done effectively.

Though testing is a critical and essential activity, the load on it can be reduced by reviewing and inspecting the various development activities and artifacts, from requirements analysis right through to the code being written, in order to detect bugs early in the development life cycle and reduce the impact of a testing phase that may not be completed in full.
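The bar graph the article proposes needs nothing more than a text rendering to be useful. Here is a minimal sketch of it, with made-up scores and an assumed threshold of 6 standing in for whatever minimum the development and testing teams agree on:

```python
# Text-only sketch of the vital-signs bar graph (scores and threshold are
# illustrative assumptions, not values from the article).
vital_signs = {
    "Test Planning":     8,
    "Coverage":          7,
    "New Requirements":  6,
    "Defect Log":        9,
    "Regression Suite":  7,
    "Deadlines/Budget":  5,
}
THRESHOLD = 6  # mutually agreed minimum confidence level

def below_threshold(signs, threshold):
    """Return the vital signs whose score falls under the agreed minimum."""
    return [name for name, score in signs.items() if score < threshold]

for name, score in vital_signs.items():
    print(f"{name:18s} {'#' * score} {score}")  # one bar per vital sign
print("needs attention:", below_threshold(vital_signs, THRESHOLD))
```

Any sign that falls below the line flags an area where testing has not yet earned the team's confidence, before the deadline makes the decision for them.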


Source: http://stickyminds.com
