Interview Questions and Answers for QTP (Quick Test Professional)

Friday, October 28, 2005

The Testing Estimation Process

One of the most difficult and critical activities in IT is the estimation process. I believe this is because when we say that a project will be accomplished in a given time at a given cost, it must happen. If it does not, several things may follow: from peers' comments and senior management's warnings to being fired, depending on the reasons for and the seriousness of the failure.

Here are a few rules for effective testing estimation:

Rule 1: Estimation shall always be based on the software requirements
All estimation should be based on what would be tested, i.e., the software requirements.
In many cases, the software requirements are established by the development team alone, with little or no participation from the testing team. Only after the specifications have been established and the project costs and duration have been estimated does the development team ask how long it would take to test the solution.


Instead, the software requirements should be read and understood by the testing team, too. Without the testing team's participation, no serious estimation can be made.


Rule 2: Estimation shall be based on expert judgment
Before estimating, the testing team classifies the requirements in the following categories:
- Critical: The development team has little knowledge of how to implement it;
- High: The development team has good knowledge of how to implement it, but it is not an easy task;
- Normal: The development team has good knowledge of how to implement it.

The experts in each requirement area should say how long it would take to test it. The categories help the experts estimate the testing effort for each requirement.
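
As a rough illustration, this classification can be turned into simple arithmetic. The following Python sketch is a hypothetical example only; the category multipliers, requirement names, and base hours are assumptions, not part of any standard.

```python
# Minimal sketch of Rule 2: expert judgment per requirement, weighted by
# category. Multipliers and figures below are illustrative assumptions.

# The less the team knows about a requirement, the more testing effort
# we allow for on top of the expert's base estimate.
CATEGORY_FACTOR = {"critical": 2.0, "high": 1.5, "normal": 1.0}

def testing_effort(requirements):
    """Sum expert base estimates (hours), scaled by category risk."""
    return sum(hours * CATEGORY_FACTOR[category]
               for _name, category, hours in requirements)

reqs = [
    ("login", "normal", 8),              # well understood
    ("payment gateway", "high", 16),     # known, but not easy
    ("fraud detection", "critical", 24), # little implementation knowledge
]
print(f"Estimated testing effort: {testing_effort(reqs)} hours")  # 80.0
```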


Rule 3: Estimation shall be based on previous projects
All estimation should be based on previous projects. If a new project has requirements similar to those of a previous one, the estimation is based on that project.


Rule 4: Estimation shall be recorded
All decisions should be recorded. This is very important because if requirements change for any reason, the records help the testing team estimate again; the team does not need to retrace all the steps and make the same decisions again. Sometimes this is also an opportunity to adjust an earlier estimation.

Rule 5: Estimation shall be supported by tools
Tools (e.g., a spreadsheet containing metrics) that help reach the estimation quickly should be used. In this case, the spreadsheet automatically calculates the costs and duration of each testing phase.
Also, a document containing sections such as a cost table, risks, and free notes should be created and sent to the customer. It also shows the different testing options, which can help the customer decide which kind of test is needed.
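
As a rough sketch of the spreadsheet logic described above, assuming a flat hourly rate, a fixed team size, and per-phase effort figures (all hypothetical):

```python
# Minimal sketch of Rule 5's estimation spreadsheet: derive cost and
# duration per testing phase from effort. All figures are assumptions.

HOURLY_RATE = 50.0   # cost per person-hour (assumed)
TEAM_SIZE = 2        # testers working in parallel (assumed)
HOURS_PER_DAY = 8

phase_effort = {          # person-hours per phase (assumed)
    "test planning": 40,
    "test design": 80,
    "test execution": 120,
    "regression": 60,
}

for phase, effort in phase_effort.items():
    cost = effort * HOURLY_RATE
    days = effort / (TEAM_SIZE * HOURS_PER_DAY)
    print(f"{phase:<15} cost=${cost:>8.2f}  duration={days:4.1f} days")

total = sum(phase_effort.values())
print(f"total: {total} person-hours, ${total * HOURLY_RATE:.2f}")
```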

Rule 6: Estimation shall always be verified
Finally, all estimations should be verified. Another spreadsheet can be created for recording the estimations. Each new estimation is compared with the previous ones recorded there to see whether it follows a similar trend. If it deviates significantly from the recorded ones, a re-estimation should be made.
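
A minimal sketch of such a verification check, assuming a simple 25% tolerance around the historical average (both the threshold and the figures are illustrative):

```python
# Minimal sketch of Rule 6: flag a new estimate that deviates from the
# trend of previously recorded estimates. Threshold is an assumption.

def needs_reestimation(new_hours, history, tolerance=0.25):
    """Return True if `new_hours` deviates more than `tolerance`
    (25% by default) from the average of recorded estimates."""
    if not history:
        return False  # nothing to compare against yet
    average = sum(history) / len(history)
    return abs(new_hours - average) / average > tolerance

recorded = [300, 320, 280]  # hours, from the estimation spreadsheet
print(needs_reestimation(310, recorded))  # False: follows the trend
print(needs_reestimation(500, recorded))  # True: re-estimate
```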


Friday, October 21, 2005

Testing - when requirements are changing continuously

Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, consider the following:

- In the project's initial schedule, allow extra time commensurate with probable changes.
- Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are acceptable.
- Balance the effort put into setting up automated testing with the expected effort required to redo them to deal with changes.
- Design some flexibility into automated test scripts (see the sketch after this list).
- Focus initial automated testing on application aspects that are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs.
- Design some flexibility into test cases; this is not easily done, so the best bet is to minimize the detail in the test cases or set up only higher-level, generic test plans.
- Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails.
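
One common way to build that flexibility into automated scripts is to keep locators and test data outside the test logic, so a UI or requirement change means editing data rather than code. The sketch below is hypothetical (not QTP/WinRunner syntax); the locator strings, test data, and `driver` interface are all assumptions.

```python
# Minimal sketch of a flexible, data-driven test script. The locator
# strings, test data, and `driver` interface are all hypothetical.

LOCATORS = {                      # one place to absorb UI changes
    "username": "id=user-field",
    "password": "id=pass-field",
    "submit": "xpath=//button[@type='submit']",
}

LOGIN_CASES = [                   # new scenarios need no code changes
    {"user": "alice", "pwd": "secret1", "expect_success": True},
    {"user": "alice", "pwd": "wrong", "expect_success": False},
]

def run_login_case(driver, case):
    """Drive one login attempt through whatever UI driver is in use."""
    driver.type(LOCATORS["username"], case["user"])
    driver.type(LOCATORS["password"], case["pwd"])
    driver.click(LOCATORS["submit"])
    assert driver.is_logged_in() == case["expect_success"]

# for case in LOGIN_CASES:
#     run_login_case(driver, case)  # `driver` supplied by the framework
```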


Monday, October 17, 2005

Difference between Verification and Validation

Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications; this can be done with checklists, issues lists, walkthroughs, and inspection meetings. You can learn to do verification with little or no outside help.

Validation ensures that functionality, as defined in the requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verification is completed.

Difference between Verification and Validation:
Verification takes place before validation, not vice versa. Verification evaluates documents, plans, code, requirements, and specifications; validation, on the other hand, evaluates the product itself. The inputs of verification are checklists, issues lists, walkthroughs, inspection meetings, and reviews. The input of validation is the actual testing of an actual product. The output of verification is a nearly perfect set of documents, plans, specifications, and requirements documents. The output of validation is a nearly perfect, actual product.


Friday, October 14, 2005

Difference between Alpha and Beta Testing

Alpha testing is the final testing before the software is released to the general public. First (the first phase of alpha testing), the software is tested by in-house developers, using either debugger software or hardware-assisted debuggers; the goal is to catch bugs quickly. Then (the second stage of alpha testing), the software is handed over to the software QA staff for additional testing in an environment similar to the intended use.

Following alpha testing, "beta versions" of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. In other cases, beta versions are made available to the general public in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.

Difference between Alpha and Beta Testing
In-house developers and software QA personnel perform alpha testing.
A few select prospective customers or the general public perform beta testing.


Thursday, October 13, 2005

How do you execute tests?

Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not it uncovered any defects. Checkpoint meetings are held throughout the execution phase, daily if required, to address and discuss testing issues, status, and activities.
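
A minimal sketch of what one such execution-log entry might look like; the field names and IDs are assumptions (in practice this is often a spreadsheet or a tracking tool):

```python
# Minimal sketch of a test execution log entry: one record per executed
# test procedure, noting the outcome and any defects uncovered.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ExecutionLogEntry:
    procedure_id: str                 # e.g., "TP-014" (hypothetical ID)
    executed_by: str
    passed: bool
    defects: list[str] = field(default_factory=list)  # defect IDs, if any
    executed_at: datetime = field(default_factory=datetime.now)

execution_log = [
    ExecutionLogEntry("TP-014", "r.sharma", passed=False, defects=["BUG-1203"]),
    ExecutionLogEntry("TP-015", "r.sharma", passed=True),
]
print(sum(1 for e in execution_log if not e.passed), "procedure(s) failed")
```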

. The output from the execution of test procedures is known as the test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies and anomalies are logged, discussed with the software team lead, hardware test lead, programmers, and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.


. Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in the customer's selected tracking tool.

. Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested, and flawless fixes are migrated to a new baseline. Following completion of testing, members of the test team prepare a summary report, which is reviewed by the Project Manager, Software QA Manager, and/or Test Team Lead.

. After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager's formal acceptance.

. The test team reviews test document problems identified during testing and updates the documents where appropriate.

Inputs for this process:
. Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
. Test tools, including automated test tools, if applicable.
. Developed scripts.
. Changes to the design, i.e. Change Request Documents.
. Test data.
. Availability of the test team and project team.
. General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
. Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
. Test Readiness Document.
. Document Updates.

Outputs for this process:
. Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed off, together with the revised testing deliverables.
. Changes to the code, also known as test fixes.
. Test document problems uncovered as a result of testing. Examples are Requirements document and Design Document problems.
. Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
. Formal record of test incidents, usually part of problem tracking.
. Base-lined package, also known as tested source and object code, ready for migration to the next level.


Wednesday, October 12, 2005

How do you create a test plan/design?

Test scenarios and/or cases are prepared by reviewing the functional requirements of the release and preparing logical groups of functions that can be further broken down into test procedures. Test procedures define test conditions, the data to be used for testing, and the expected results, including database updates, file outputs, and report results.

. Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
. Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.

. It is the test team that, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.
. Test scenarios are executed through the use of test procedures or scripts.
. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
. Test procedures or scripts include the specific data that will be used for testing the process or transaction.
. Test procedures or scripts may cover multiple test scenarios.
. Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope (see the sketch after this list).
. Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
. Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
. A pretest meeting is held to assess the readiness of the application, the environment, and the data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
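
A minimal sketch of the traceability idea referenced above, with hypothetical script and requirement IDs:

```python
# Minimal sketch of a requirements-to-test traceability matrix: map each
# test script to the requirements it covers, then report coverage gaps
# and tests that fall outside the documented scope. IDs are hypothetical.

TRACEABILITY = {
    "TS-01": ["REQ-1", "REQ-2"],
    "TS-02": ["REQ-2"],
    "TS-03": [],                # maps to no requirement: out of scope?
}
REQUIREMENTS = {"REQ-1", "REQ-2", "REQ-3"}

covered = {req for reqs in TRACEABILITY.values() for req in reqs}
print("untested requirements:", REQUIREMENTS - covered)   # {'REQ-3'}
print("unmapped scripts:", [t for t, reqs in TRACEABILITY.items() if not reqs])
```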

Inputs for this process:
. Approved Test Strategy Document.
. Test tools, or automated test tools, if applicable.
. Previously developed scripts, if applicable.
. Test documentation problems uncovered as a result of testing.
. A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.

Outputs for this process:
. Approved documents of test scenarios, test cases, test conditions and test data.
. Reports of software design issues, given to software developers for correction.


Tuesday, October 11, 2005

How do you create a Test Strategy?

The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:
· A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
· A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.
· Testing methodology. This is based on known standards.
· Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
· Requirements that the system cannot provide, e.g. system limitations.

Contents of the Test Strategy document:
The sections of the test strategy document mirror the inputs listed above: the required hardware and software components (including test tools), roles and responsibilities with schedule constraints, the testing methodology, the functional and technical requirements of the application, and known system limitations.

Outputs for this process:
· An approved and signed-off test strategy document and test plan, including test cases.
· Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.


Monday, October 10, 2005

What is software testing methodology?

A software testing methodology is the use of a three-step process of:
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.

This methodology can be used and molded to your organization's needs. Using such a methodology is important both in the development and in the ongoing maintenance of customers' applications.


Wednesday, October 05, 2005

Usability Testing

What is Software Usability?
According to ISO 9241-11 (1998), usability is the “extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”



What is Usability Testing?
Usability testing refers to evaluating the ease with which users can learn and use a product. The term, however, can mean different things to different people. Some practitioners use it to refer only to the process of employing participants who are representative of the target population to evaluate the degree to which a product meets specific usability criteria (Barnum, 2002). Others use the term more globally, to refer to any technique used to evaluate how easy a product or system is to use. The authors tend to concur with the latter definition, because research has shown that participant involvement is a desirable, but not essential, part of evaluating all aspects of system usability.


Cost-Benefit Value of Usability Testing
The basic assumption of a cost-benefit analysis of usability strategies is that a usable system will result in tangible, measurable benefits. There are many different areas where usability strategies can have cost-benefit value:

- Development – how usability results in optimization of the development process
- Technical Support – how usability results in technical support savings
- Sales – how usability increases product sales
- Use – how usability positively impacts user performance and satisfaction


Answers to all these questions can be found in a nice article on usability testing.


Tuesday, October 04, 2005

Performance measurement parameters for a tester

Software testing is not about finding bugs. It's about delivering great software. No customer ever said with a straight face, "Wow! You found and fixed 65,000 bugs, so that must be really great software!" So why do so many teams use bug counts as a measurement tool? The answer is simple: bugs are just so darn countable that they are practically irresistible.

They can be counted, tracked, and used for forecasting. And it is tempting to do numerical gymnastics with them, such as dividing them by KLOC (thousand lines of code), plotting their rate over time, or predicting their future rates. But all this ignores the complexities that underlie the bug count. Bugs are a useful barometer of your process, but they can't tell the whole story. They merely help you ask useful questions.
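
For illustration, here is what that arithmetic typically looks like. The figures are made up; as argued above, treat the outputs as prompts for questions, not as quality verdicts.

```python
# Minimal sketch of common bug-count arithmetic: defect density per KLOC
# and a week-over-week arrival trend. All numbers are illustrative.

bugs_found = 130
lines_of_code = 52_000

density = bugs_found / (lines_of_code / 1000)       # bugs per KLOC
print(f"defect density: {density:.1f} bugs/KLOC")   # 2.5

weekly_new_bugs = [40, 35, 25, 18, 12]              # new bugs per week
deltas = [b - a for a, b in zip(weekly_new_bugs, weekly_new_bugs[1:])]
print("week-over-week change:", deltas)  # falling, but not the whole story
```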

So What Should We Measure?

- How many staff hours are devoted to a project?
- How many bugs did your customer find?
- How many bugs did you prevent?
- How effectively did your tests cover the requirements and the code?
- Finally, a squishy but revealing metric: How many of your own people feel confident about the quality of the product?

More information on Performance measurement parameters for a tester.
