Interview Questions and Answers for QTP (Quick Test Professional)

Wednesday, August 31, 2005

How objective is your QC?

QC people have a special role in the development process. Part of this role is being involved in different activities throughout the entire development process. Starting from the time when requirements are gathered, when initial design is made, and until the end of each iteration, the QC people must observe and gather information about the system under development. This is crucial to the completeness and correctness of the QC activity. A QC Engineer can’t be expected to jump into the project just before the actual testing starts. She must be aware of the evolution of the requirements and the system itself (including the way it is designed and implemented) in order to create an effective test plan, and build correct tests.

However, at the same time, it is crucial to maintain the objectivity of the QC people. Since the QC engineer is so involved in the development process, it is easy to forget that if she gets too close to the developers, the result of the QC process might not reflect the real quality of the product.

If, for example, something in the requirements is vague, the QC engineer might be tempted to ask the developer for the “correct interpretation” of this issue. But, it is the QC engineer’s role to verify that the developer’s interpretation of the requirements is correct. How can she use the developer’s interpretation to build an objective test?

This is a simple example of what can happen when QC and developers are too close. Extensive testing based on what the developers gathered from the requirements is bound to catch only coding errors; under these circumstances, logical errors in the functionality of the system are likely to remain in the code.

The way to resolve this conflict is self-discipline. QC people should be involved in the development, but they should think twice about whom they talk to when they have questions about the requirements of the product. The best person to ask such questions is the person who defined the requirements.

It is important to note that the QC people should not be completely isolated from the development team. The QC engineer could benefit a great deal from discussing the structure of the code and the design details with the developers. This kind of discussion might give her ideas for more test cases, for example. The difference between discussing the structure of the code and discussing the interpretation of the requirements is that verifying the latter is in fact the purpose of the QC process.


Tuesday, August 30, 2005

Rapid Testing


Rapid testing means testing software faster than you do now, while maintaining or improving your standards of quality. Unfortunately, there is no simple way to achieve rapid testing. The figure shows a somewhat simplistic sketch that represents rapid testing as a structure that is built on a foundation of four components. If any of these components is weak, the effectiveness of testing will be greatly impaired. As illustrated in the figure, the four components that must be optimized for rapid testing are people, integrated test process, static testing, and dynamic testing.

People
As every test manager knows, the right people are an essential ingredient of rapid testing. Several studies show productivity differences of 10:1 or more among software developers. The same is true of test engineers—not everyone has the skills, experience, or temperament to be a good test engineer. Rapid testing particularly needs people who are disciplined and flexible, who can handle the pressure of an aggressive schedule, and who can be productive contributors through the early phases of the development life cycle.


Integrated Test Process
No matter how good your people may be, if they do not have a systematic, disciplined process for testing, they will not operate at maximum efficiency. The test process needs to be based on sound, fundamental principles, and must be well integrated with the overall software development process.

Static Testing
Static testing is done for the purpose of validating that a work product such as a design specification properly implements all the system requirements, and verifying the quality of the design. Static testing is one of the most effective means of catching bugs at an early stage of development, thereby saving substantial time and cost to the development. It involves inspections, walkthroughs, and peer reviews of designs, code, and other work products, as well as static analysis to uncover defects in syntax, data structure, and other code components. Static testing is basically anything that can be done to uncover defects without running the code. In the experience of the authors, it is an often-neglected tool.

Dynamic Testing
Often when engineers think of testing, they are thinking of dynamic testing, which involves operating the system with the purpose of finding bugs. Whereas static testing does not involve running the software, dynamic testing does. Generally speaking, dynamic testing consists of running a program and comparing its actual behavior to what is expected. If the actual behavior differs from the expected behavior, a defect has been found. Dynamic testing is used to perform a variety of types of tests such as functional tests, performance tests, and stress tests. Dynamic testing lies at the heart of the software testing process, and if the planning, design, development, and execution of dynamic tests are not performed well, the testing process will be very inefficient. Dynamic testing is not only performed by the test team; it should be a part of the development team's unit and integration testing as well.
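As a minimal sketch of that expected-versus-actual comparison, here is a tiny dynamic test harness in Python; the function under test and its expected values are hypothetical:

```python
# Minimal dynamic test: run the code and compare actual to expected behavior.
def discount(total):
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

# Each case pairs an input with the behavior we expect.
cases = [(50, 50), (100, 90.0), (200, 180.0)]

for amount, expected in cases:
    actual = discount(amount)
    # A mismatch between actual and expected behavior signals a defect.
    assert actual == expected, f"discount({amount}): got {actual}, expected {expected}"

print("All dynamic checks passed.")
```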


Friday, August 26, 2005

Introducing Automated Software Testing in your project


First determine if your product is stable and ready for automation.

Then get your testers together, along with your immediate manager, and brainstorm the requirements for a tool.

Then put together a planning and scope document of your proposals and get buy-in from senior management.
Take your requirements list, formalize it, and send it to the vendors of the tools you think might be applicable.

Once you have a list of tools, do your own evaluation, which will require you to obtain demo copies of the tools. If you cannot get them, they are probably not the tools you want.

Ask vendors for references, and contact their customers to see how they use and like the tool. Ask any that might be local for an on-site demo of how they are using it.

Finally have the Vendor do a proof of concept. This might cost you, but it's better than having useless shelfware when you abandon the tool because "it just doesn't work for you".

And last, but not least, hire a tool-skilled or certified contractor (commercial or independent) to get you up and running with a well-documented suite of scripts, and ensure that he passes on the valuable hints and tips of tool usage.

-- Remember: automating is development, and must be budgeted and planned for accordingly.


Thursday, August 25, 2005

Rules for Automation

Rule #1 of automation: "If you cannot logically describe what you are testing, you can't automate it."
If you are testing specific items, then you may be able to do it (e.g., click a specific button, check the title of the box, fill in the box, check that the grayed-out button becomes available). But having a UI test figure out that the background should not be grayed out, or that the text is too far to the right, or that a BOLD letter in the back shouldn't be bold, becomes too painful. There is NO AI software that can look at a screen and figure out whether it makes SENSE. Every aspect of the screen has to be described, all the way down to the pixels.

Rule #2: Cost
Most organizations give up on automation because they don't realize that certain things are cheap to automate while others are expensive, and they end up trying to automate the painful things. For example, a "global change" in the UI, where a single variable name changes multiple strings all over the place, will break most UI test scripts left and right. Maintaining the scripts, as well as fighting with development over rapid changes, will eventually get automation kicked out.

Rule #3: Everyone needs to run it.
Having a test tool that only testers can use is eventually a losing game. The only way to promote automation is to have EVERYONE use it. This is why tools like Silk and Mercury will never be able to compete with tools built on things like Java (JUnit) and Perl. The more the tools are used, the more people will invest; the more people invest, the better the tool gets, and the more people will use it.
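To make the point concrete, here is a minimal xUnit-style test, sketched with Python's built-in unittest module rather than JUnit itself; the slugify function is a hypothetical piece of code under test. Anyone on the team can run this file with one command.

```python
import unittest

def slugify(title):
    """Hypothetical code under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Rapid Testing"), "rapid-testing")

    def test_already_lowercase(self):
        self.assertEqual(slugify("qa"), "qa")

if __name__ == "__main__":
    # Any developer or tester can run this file directly:
    #   python test_slugify.py
    unittest.main()
```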

Rule #4: Test software is software itself.
Have a Mercury or Silk rep come over to your office and demo against your software on the fly. I'd give a 25% chance that their tool will crash or not work exactly as advertised. Working for various companies, I've seen these vendors bring in "canned" demos that worked flawlessly; then, after being asked to demo on the real product (web), their software had tons of problems, including crashes. IMHO, it is better to stay with products that have a very simple interface. So, who tests their software?

Rule #5: Guns don't kill people, people kill people.
If you have a novice automation engineer write the code, you will get a piece of junk. I can state that most of those who have asked these questions have a greater than 90% chance of failing. The success of automation really depends on people. The right people will know what to automate, and how it should be automated.


Wednesday, August 24, 2005

What if there isn't enough time for thorough testing?

Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. Use risk analysis to determine where testing should be focused; this requires judgment, common sense, and experience. One simple way of turning the checklist answers into a testing order is sketched after the list below.

The checklist should include answers to the following questions:

· Which functionality is most important to the project's intended purpose?
· Which functionality is most visible to the user?
· Which functionality has the largest safety impact?
· Which functionality has the largest financial impact on users?
· Which aspects of the application are most important to the customer?
· Which aspects of the application can be tested early in the development cycle?
· Which parts of the code are most complex and thus most subject to errors?
· Which parts of the application were developed in rush or panic mode?
· Which aspects of similar/related previous projects caused problems?
· Which aspects of similar/related previous projects had large maintenance expenses?
· Which parts of the requirements and design are unclear or poorly thought out?
· What do the developers think are the highest-risk aspects of the application?
· What kinds of problems would cause the worst publicity?
· What kinds of problems would cause the most customer service complaints?
· What kinds of tests could easily cover multiple functionalities? Which tests will have the best high-risk-coverage to time-required ratio?
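A lightweight way to turn such answers into a test-ordering decision is a likelihood-times-impact score. The sketch below is purely illustrative; the areas and ratings are hypothetical and would come from the checklist answers above.

```python
# Rank test areas by risk = likelihood of failure x impact of failure.
# Ratings (1 = low, 3 = high) are hypothetical examples.
areas = {
    "checkout":   {"likelihood": 3, "impact": 3},  # rushed code, financial impact
    "search":     {"likelihood": 2, "impact": 2},  # visible, moderately complex
    "help pages": {"likelihood": 1, "impact": 1},  # stable, low consequence
}

ranked = sorted(areas.items(),
                key=lambda kv: kv[1]["likelihood"] * kv[1]["impact"],
                reverse=True)

for name, r in ranked:
    print(f"{name}: risk score {r['likelihood'] * r['impact']}")
# Test the highest-scoring areas first when time runs short.
```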


Tuesday, August 23, 2005

Most Critical Web Application Security Vulnerabilities

1. Unvalidated Input
Information from web requests is not validated before being used by a web application. Attackers can use these flaws to attack backend components through a web application.

2. Broken Access Control
Restrictions on what authenticated users are allowed to do are not properly enforced. Attackers can exploit these flaws to access other users' accounts, view sensitive files, or use unauthorized functions.

3. Broken Authentication and Session Management
Account credentials and session tokens are not properly protected. Attackers that can compromise passwords, keys, session cookies, or other tokens can defeat authentication restrictions and assume other users' identities.

4. Cross Site Scripting (XSS) Flaws
The web application can be used as a mechanism to transport an attack to an end user's browser. A successful attack can disclose the end user's session token, attack the local machine, or spoof content to fool the user.

5. Buffer Overflows
Web application components in some languages that do not properly validate input can be crashed and, in some cases, used to take control of a process. These components can include CGI, libraries, drivers, and web application server components.

6. Injection Flaws
Web applications pass parameters when they access external systems or the local operating system. If an attacker can embed malicious commands in these parameters, the external system may execute those commands on behalf of the web application.
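The standard defense is to pass user input as a parameter instead of splicing it into the command string. A minimal sketch using Python's built-in sqlite3 module follows; the table and the hostile input are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_supplied = "1 OR 1=1"  # hostile input that would widen a spliced query

# Unsafe: string concatenation lets the input rewrite the SQL statement.
# query = "SELECT name FROM users WHERE id = " + user_supplied

# Safe: a parameterized query treats the input strictly as a value.
rows = conn.execute("SELECT name FROM users WHERE id = ?",
                    (user_supplied,)).fetchall()
print(rows)  # [] -- the hostile string matches no id, instead of dumping the table
```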

7. Improper Error Handling
Error conditions that occur during normal operation are not handled properly. If an attacker can cause errors to occur that the web application does not handle, they can gain detailed system information, deny service, cause security mechanisms to fail, or crash the server.

8. Insecure Storage
Web applications frequently use cryptographic functions to protect information and credentials. These functions and the code to integrate them have proven difficult to code properly, frequently resulting in weak protection.

9. Denial of Service
Attackers can consume web application resources to a point where other legitimate users can no longer access or use the application. Attackers can also lock users out of their accounts or even cause the entire application to fail.

10. Insecure Configuration Management
Having a strong server configuration standard is critical to a secure web application. These servers have many configuration options that affect security and are not secure out of the box.


Saturday, August 20, 2005

Standards For Testing An Application

General testing standards for testing any type of application, based on various factors, are listed below:

1. Look & Feel
-> Uniformity in terms of Content, Title, Position (Should be displayed at the center of any User Interface (UI)) of message boxes.
-> Enabling & Disabling of menu items/icons according to the user security.
-> Ensure menu items are invoked as set in user security; if a service is not available for testing, the corresponding menu item should be disabled.
-> Navigation using the ‘Tab’ key should be proper across fields and should move from left to right, then top to bottom.
-> Scrolling effect of Vertical & Horizontal scrollbars should be proper.
-> Alignment of controls should be proper.
-> Spacing between controls should be proper.
-> Ensure the uniformity in font (Type, Size, Color)
-> In case of a multi-line text box, pressing the ‘Enter’ key should move to the next line.
-> Ensure 'Backspace' & 'Space bar' are working properly, wherever applicable.
-> In case of list/combo boxes, pressing ALT+DOWN should display the list values, and the ‘Down Arrow’ key should move the selection.
-> ‘Esc’ key should activate ‘Cancel’ button and ‘Enter’ key should activate ‘OK’ button.
-> Check for the spelling in Message Box, Titles, Help files and Tool Tip.
-> In case of reports, check for the proper display of column headers at different zoom levels.
-> Check ToolTip text is provided for all icons in the UI.

2. Functionality Testing
-> Check for the functionality of the application. The entire flow of the application has to be checked.
-> Functionality testing includes both positive and negative testing.

2.1.Positive Testing
-> Check for positive functionality of the application.
-> Check for field validation using positive values within the permissible limits.

2.2.Negative Testing
-> Give numbers in char fields and vice versa
-> Give only numbers in the alphanumeric fields
-> Know the permissible range for each field and check using values exceeding the limits. Use Equivalence Partitioning and Boundary Value Analysis techniques for deciding on the test data (a short sketch follows this list).
-> Try to save data without giving values for mandatory fields.
-> Click randomly and tab out continuously (especially in grids) to check for application errors.
-> Check entering spaces and updating with blank fields, wherever applicable.
-> Maximize and minimize the screens and check the toolbar display.
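A minimal sketch of boundary value analysis for the range check above, assuming a hypothetical field that accepts ages 18 to 60:

```python
# Boundary value analysis for a hypothetical "age" field accepting 18..60.
LOW, HIGH = 18, 60

def accepts_age(age):
    """Hypothetical validator under test."""
    return LOW <= age <= HIGH

# Test just inside, on, and just outside each boundary.
negative_cases = [LOW - 1, HIGH + 1]             # should be rejected
positive_cases = [LOW, LOW + 1, HIGH - 1, HIGH]  # should be accepted

for age in negative_cases:
    assert not accepts_age(age), f"{age} should be rejected"
for age in positive_cases:
    assert accepts_age(age), f"{age} should be accepted"
print("Boundary checks passed.")
```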

3. Menu Organization
-> Check the simultaneous opening of multiple screens.

4. Help files
-> F1 should invoke context sensitive help files.

5. User Interface Traversal
-> For every mouse-driven operation there should be a keyboard equivalent, with shortcut keys and alternate keys wherever applicable in menus.

6. Date Format
-> The application should support various date formats in regional settings.


Friday, August 19, 2005

Defect Taxonomies

Categories of Defects

All software defects can be broadly categorized into the following types:
• Errors of commission: something wrong is done
• Errors of omission: something left out by accident
• Errors of clarity and ambiguity: different interpretations
• Errors of speed and capacity

However, the above is a broad categorization; below is a list of varied types of defects that can be identified in different software applications:
- Conceptual bugs / Design bugs
- Coding bugs
- Integration bugs
- User Interface Errors
- Functionality
- Communication
- Command Structure
- Missing Commands
- Performance
- Output
- Error Handling Errors
- Boundary-Related Errors
- Calculation Errors
- Initial and Later States
- Control Flow Errors
- Errors in Handling Data
- Race Conditions Errors
- Load Conditions Errors
- Hardware Errors
- Source and Version Control Errors
- Documentation Errors
- Testing Errors


Test Development Life Cycle



Usually, testing is considered a part of the System Development Life Cycle, but the testing activities can also be organized into their own Test Development Life Cycle, or Software Testing Life Cycle.
The diagram does not depict where and when you write your Test Plan and Strategy documents, but it is understood that these documents should be ready before you begin your testing activities. Ideally, the Test Plan and Test Strategy documents are written at the same time as the Project Plan and Project Strategy.


More on phases in the Software Test Life Cycle (or Test Development Life Cycle)...


Wednesday, August 17, 2005

Bug Count Vs Test Coverage

Bug counts on a project speak volumes about the quality of testing for a particular product and how vigorously the test team is working to "assure quality". Bug counts are invariably a primary test metric reported to management. What is the rationale behind drawing so much attention to the number of bugs found through the course of a project?

I have heard it said that QA’s job is to find bugs. If this is the assumption of management, bug counts will be an important indicator to them that QA is doing its job. They expect to see bug counts rise dramatically in the early stages of testing, and they expect to see the find rate decrease as the project comes to an end. These are management’s statistical expectations when they believe bug counts are a metric to assess quality of testing.

If high bug counts, then, are an indicator that quality is going up, low bug counts can be seen as an indicator that something just isn’t right with the testing process. Management might imagine different problems that are preventing bugs from being found:

  • Test coverage isn’t complete; maybe major areas of functionality aren’t being tested.
  • Testing is only scratching the surface of all functionality, not digging in to the real complexities of the code.
  • Our testers just aren’t that good.

Management might see red flags when bug counts are low, but a number of causes may contribute to low bug counts. On the second or third iteration of a product, the bulk of the defects may have been found on an earlier cycle. Or especially good development practices may have been implemented: strong unit testing, code reviews, good documentation, and not working developers to death. These are supposed to result in lower bug counts.

Ultimately, however, QA will justify low bug counts when it can justify its test coverage. If the product under test is being tested with thorough coverage, the bug count should be treated only as a supporting statistic, not the primary one. After all, we all know that a quality product hasn’t been reached when a certain bug count is reached. Quality is achieved when test coverage is maximized and bug finds decrease to a minimum.

There are several things you can do when bug counts are low and management is questioning the quality of testing:

  1. Take stock. Call a meeting with your test team, go through the areas of test, possibly even some test cases themselves, and get a general feel for how much test coverage you really have. Maybe you’ll discover that an area of test really is being missed. Perhaps there is some misunderstanding of who should be testing what and some functionality fell between the cracks. Brainstorm more testing methods and techniques, and generate ideas of how your team can broaden the testing efforts. Before going to other groups or departments, get a solid understanding of where your team is in the process.
  2. Talk to development. Go over your current test coverage with development, and see if they have any input on areas you might also investigate. Ask them what the trouble spots are, if they can suggest lower-level tests that may ferret out more bugs, and possibly even conduct a test case review with them. On my current project, we send out the test cases of all features to the appropriate developer for review. Though many times developers can be reluctant to help testers, demonstrate to them that it is in their best interest that we thoroughly test their code—if it’s solid, they have nothing to worry about.
  3. Communicate with management. When bug counts are low, use test coverage to justify them. This doesn’t mean dismissing the fact that the bug count is low. It means using the bug count as an indicator to do some analysis into the testing practices you are doing, and verifying that high test coverage is being achieved. If it is, explain to management your findings. Demonstrate by solid metrics that you are performing thorough testing, that you can’t force bug counts to go up, and that maybe—just maybe—a low bug count means you’ve got a quality product on your hands!

One thing to bear in mind: while you can use the above methods during testing cycles to understand and cope with a low bug count, the ideas are still applicable before testing even begins, while test cases are being written for a project, and while development is still in full swing. Good test coverage is something to be planned ahead of time, and having gone through the effort of mapping coverage and functional test cases early in the project, you will prevent yourself from spending valuable testing cycles repeating tasks.

While low bug counts can cause people in both development and management to question the effectiveness of the testing, do not be defensive about it. Use it as a trigger to prove what you should already know—your testing efforts are appropriate, effective, and your coverage is maximized. Don’t let your bug counts do the talking—your test coverage should say it all.


Tuesday, August 16, 2005

When to start and stop Testing?

Testing is sometimes incorrectly thought of as an after-the-fact activity, performed after programming is done for a product. Instead, testing should be performed at every development stage of the product. Test data sets must be derived, and their correctness and consistency should be monitored, throughout the development process.

If we divide the lifecycle of software development into “Requirements Analysis”, “Design”, “Programming/Construction” and “Operation and Maintenance”, then testing should accompany each of these phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs. Not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity. Rather, testing should be involved throughout the SDLC in order to produce a quality product.

Testing Activities in Each Phase
The following testing activities should be performed during the phases
1. Requirements Analysis

- Determine correctness
- Generate functional test data.
2. Design

- Determine correctness and consistency
- Generate structural and functional test data.
3. Programming/Construction

- Determine correctness and consistency
- Generate structural and functional test data
- Apply test data
- Refine test data.
4. Operation and Maintenance

- Retest.

Details of testing in all phases:


1. Requirements Analysis
The following test activities should be performed during this stage:

1.1 Invest in analysis at the beginning of the project - Having a clear, concise, and formal statement of the requirements facilitates programming, communication, error analysis, and test data generation.
The requirements statement should record the following information and decisions:
a. Program function - what the program must do.
b. The form, format, data types and units for input.
c. The form, format, data types and units for output.
d. How exceptions, errors and deviations are to be handled.
e. For scientific computations, the numerical method or at least the required accuracy of the solution.
f. The hardware/software environment required or assumed (e.g. the machine, the operating system, and the implementation language).

Deciding the above issues is one of the activities related to testing that should be performed during this stage.

1.2 Start developing the test set at the requirements analysis phase - Data should be generated that can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner and for each class a representative element should be included in the test data.
In addition, following should also be included in the data set:
(1) boundary values
(2) any non-extreme input values that would require special handling.

The output domain should be treated similarly.
Invalid input requires the same analysis as valid input.
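As an illustration, a requirements-phase test set for a hypothetical discount field (valid range 0 to 100) might be recorded like this, with one representative per equivalence class plus boundary and invalid values:

```python
# Requirements-phase test data for a hypothetical discount field (valid: 0..100).
# One representative per equivalence class, plus boundaries and invalid input.
test_set = [
    # (input, expected outcome, class it represents)
    (50,    "accepted", "typical valid value"),
    (0,     "accepted", "lower boundary"),
    (100,   "accepted", "upper boundary"),
    (-1,    "rejected", "just below lower boundary"),
    (101,   "rejected", "just above upper boundary"),
    ("ten", "rejected", "wrong type: non-numeric input"),
]

for value, expected, label in test_set:
    print(f"{label}: input={value!r}, expected={expected}")
```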

1.3 The correctness, consistency and completeness of the requirements should also be analyzed - Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements and consider the possibility of missing cases.

2. Design
The design document aids programming, communication, error analysis, and test data generation. The requirements statement and the design document should together give the problem and the organization of the solution, i.e., what the program will do and how it will be done.

The design document should contain:
· Principal data structures.
· Functions, algorithms, heuristics or special techniques used for processing.
· The program organization, how it will be modularized and categorized into external and internal interfaces.
· Any additional information.

Here the testing activities should consist of:
- Analysis of design to check its completeness and consistency - the total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling, and data structures should especially be checked for inconsistencies.

- Analysis of design to check whether it satisfies the requirements - check whether both the requirements and the design document use the same form, format, and units for input and output, and also that all functions listed in the requirements document have been included in the design document. Selected test data generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.

- Generation of test data based on the design - the tests generated should cover the structure as well as the internal functions of the design: the data structures, algorithms, functions, heuristics, and general program structure. Standard, extreme, and special values should be included, and the expected output should be recorded in the test data.

- Re-examination and refinement of the test data set generated at the requirements analysis phase.

The first two steps should also be performed by a colleague, not only the designer/developer.

3. Programming/Construction
Here the main testing points are:

- Check the code for consistency with design - the areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.

- Perform the Testing process in an organized and systematic manner with test runs dated, annotated and saved. A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes made to the program, all tests involving the erroneous segment (including those which resulted in success previously) must be rerun and recorded.

- Ask a colleague for assistance - some independent party, other than the programmer of the specific part of the code, should analyze the development product at each phase. The programmer should explain the product to this party, who will then question the logic and search for errors with a checklist to guide the search. This is needed to locate errors the programmer has overlooked.

- Use available tools - the programmer should be familiar with various compilers and interpreters available on the system for the implementation language being used because they differ in their error analysis and code generation capabilities.

- Apply Stress to the Program - Testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.

- Test one at a time - pieces of code, individual modules, and small collections of modules should be exercised separately before they are integrated into the total program, one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation, the insertion of code into the program solely to measure various program characteristics, can be useful here. A tester should perform array bound checks, check loop control variables, determine whether key data values are within permissible ranges, trace program execution, and count the number of times a group of statements is executed.

- Measure testing coverage/When should testing stop? - If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny.
The metrics used to measure testing thoroughness include statement testing (whether each statement in the program has been executed at least once), branch testing (whether each exit from each branch has been executed at least once) and path testing (whether all logical paths, which may involve repeated execution of various segments, have been executed at least once). Statement testing is the coverage metric most frequently used as it is relatively simple to implement.
The amount of testing depends on the cost of an error. Critical programs or functions require more thorough testing than the less significant functions.
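The difference between statement and branch coverage is easy to see on a tiny hypothetical function:

```python
def classify(n):
    label = "small"
    if n > 10:            # branch: taken / not taken
        label = "large"
    return label

# classify(20) alone executes every statement (statement coverage met),
# but never exercises the path where the condition is false.
assert classify(20) == "large"

# Adding classify(5) covers both exits of the branch (branch coverage met).
assert classify(5) == "small"
```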

4. Operations and maintenance
Corrections, modifications and extensions are bound to occur even for small programs and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, the test plan, and the test results for the original program should exist. Modifications must be made to accommodate the program changes, and then all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.



When to stop testing?
"When to stop testing" is one of the most difficult questions to a test engineer. The following are few of the common Test Stop criteria:

- All the high priority bugs are fixed.
- The rate at which bugs are found is too small.
- The testing budget is exhausted.
- The project duration is completed.
- The risk in the project is under acceptable limit.

Practically, the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with the testing that has been done. Risk can be measured by formal risk analysis, but for a short-duration, low-budget, low-resource project it can be gauged simply by:

- Measuring Test Coverage.
- Number of test cycles.
- Number of high priority bugs.


Monday, August 15, 2005

Types of Black Box Testing

- Acceptance testing
- Alpha testing
- Beta testing
- Comparison testing
- Compatibility testing
- End-to-end testing
- Functional testing
- Incremental integration testing
- Install/uninstall testing
- Integration testing
- Load testing
- Performance testing
- Recovery testing
- Regression testing
- Sanity testing
- Security testing
- System testing
- Usability testing

Visit http://quality-assurance-software-testing.blogspot.com/2005/07/testing-methodologies.html for brief information on all of these testing types.


Friday, August 12, 2005

Software Testing - 10 Rules

1. Test early and test often.
2. Integrate the application development and testing life cycles. You'll get better results.
3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.
4. Develop a comprehensive test plan; it forms the basis for the testing methodology.
5. Use both static and dynamic testing.
6. Define your expected results.
7. Understand the business reason behind the application. You'll write a better application and better testing scripts.
8. Use multiple levels and types of testing (regression, systems, integration, stress and load).
9. Review and inspect the work; it will lower costs.
10. Don't let your programmers check their own work; they'll miss their own errors.


Wednesday, August 10, 2005

Ten tips to write better bug reports

Applying the following ten tips will help you achieve better bug/defect reports:

1. Structure
A tester who uses a deliberate, careful approach to testing, and takes careful notes, tends to have a good idea of what’s going on with the system under test. When failures occur, she knows when the first signs of failure manifested themselves.

2. Reproduce
The tester should check reproducibility of a failure before writing a bug report. If the problem doesn’t recur, she should still write the bug report, but she must note the sporadic nature of the behavior. A good rule of thumb is three attempts to recreate the failure before writing the report. Documenting a clean set of steps to reproduce the problem addresses the issue of reproducibility head-on.

3. Isolate
After reproducing the failure, the tester should then proceed to isolate the bug. This refers to changing certain variables, such as system configuration, that may alter the symptom of the failure. This information gives developers a head start on debugging.

4. Generalize
After the tester has an isolated and reproducible case, she should try to generalize the problem. Does the same failure occur in other modules or locations? Can she find more severe occurrences of the same fault?

5. Compare
If a tester has previously verified the underlying test condition in the test case that found the bug, she should check these prior results to see if the condition passed in earlier runs. If so, then the bug is likely a case of regression, where a once-working feature now fails. Note that test conditions often occur in more than one test case, so this step can involve more work than just checking past runs of the same test case. Also, if you have a reference platform, repeat the test there and note the result.

6. Summarize
The first line of the bug report, the failure summary, is the most critical. The tester should spend some time thinking through how the failure observed will affect the customer. This not only allows the tester to write a bug report that hooks the reader and communicates clearly to management, but also helps with setting bug report priority.

7. Condense
With a first draft of the bug report written, the tester should reread it, focusing on eliminating extraneous steps or words. Cryptic commentary is, of course, not the goal, but the report should not wear out its welcome by droning on endlessly about irrelevant details or steps which need not be performed to repeat the failure.

8. Disambiguate
In addition to eliminating wordiness, the tester should go through the report to make sure it is not subject to misinterpretation. Some words or phrases are vague, misleading, or subjective, and should be avoided. Clear, indisputable statements of fact are the goal.

9. Neutralize
Being the bearer of bad news presents the tester with the challenge of delicate presentation. Bug reports should be fair-minded in their wording. Attacking individual developers, criticizing the underlying error, attempting humor, or using sarcasm can create ill will with developers and divert attention from the bigger goal, increasing the quality of the product. The cautious tester confines her bug reports to statements of fact.

10. Review
Once the tester feels the bug report is the best one she can write, she should submit it to one or more test peers for review. The reviewing peers should make suggestions, ask clarifying questions, and even, if appropriate, challenge the tester’s assertion that the behavior is buggy. The test team should only submit the best possible bug report, given the time constraints appropriate to the priority of the bug. A bug report should be an accurate, concise, thoroughly edited, well-conceived, high-quality technical document. The test team needs to focus on the task of writing bug reports, and the test leads and manager must make it clear to each member of the test team that writing good bug reports is a primary job responsibility.

Quality indicators for a well-tuned bug reporting process include:
- Clarity to management, particularly at the summary level;
- Utility to the development team, primarily in terms of giving the developer all the information needed to effectively debug the problem;
- Brevity of the bug lifecycle from opened to closed, reducing cycles where developers return poor quality reports for more information, leading to tester rework.

Improving the bug reporting process does require an effort, but provides significant payoffs.

- First, a crisp process improves the test team’s communications with senior and peer management, which enhances the team’s credibility and professional standing, and can encourage management to invest more resources in testing.
- Second, the smooth handoff to developers promotes positive relationships.
- Third, shorter bug lifecycles are more efficient, so the time invested up front writing a good bug report is repaid in time not wasted rewriting a poor bug report.

These payoffs help the development process achieve better product quality through effective communication and efficient workflows.


Benefits of Automated Testing

Manually testing software is a time-consuming and often tedious process, one which cannot guarantee consistency of testing across releases and across platforms. Additionally, time constraints often do not afford us the luxury of being able to manually test and retest our applications before they are released. Inevitably the question remains, “Did any critical bugs go undetected?”

Automating your testing leads to a better use of your resources. Skilled testers are freed up to put more effort into designing better tests while machines that would otherwise lie idle overnight can be used to run unattended automated tests.

The benefits of automating software testing are many:
• Providing more coverage of regression testing.
• Reducing the elapsed time for testing, getting your product to market faster.
• Improving productivity of human testing.
• Improving the re-usability of tests.
• Providing a detailed test log.

The layered approach
The most common approach to any testing is the layered approach. The layered approach includes three types of tests:

1. Operability Tests which examine each object, verifying specific properties of the object such as: state, size, caption and contents.

2. Functionality Tests which examine the behavior of a group of objects that together provide a specific feature to the end user. This includes looking at a dialog as a collection of objects and verifying the functionality provided. It can also include verifying the interaction between objects, for example, verifying that a text box is enabled when a check box is checked (see the sketch after this list).

3. System Tests which examine how the application under test (AUT) interacts with other software or hardware products within the software environment.
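A minimal sketch of the operability and functionality layers, using hypothetical stand-in widget classes rather than a real UI toolkit:

```python
# The widget classes below are hypothetical stand-ins for real UI controls.
class TextBox:
    def __init__(self):
        self.enabled = False

class CheckBox:
    def __init__(self, on_toggle):
        self.checked = False
        self._on_toggle = on_toggle

    def toggle(self):
        self.checked = not self.checked
        self._on_toggle(self.checked)

textbox = TextBox()
checkbox = CheckBox(on_toggle=lambda checked: setattr(textbox, "enabled", checked))

# Operability layer: verify a single object's property.
assert textbox.enabled is False

# Functionality layer: verify the interaction between the two objects.
checkbox.toggle()
assert textbox.enabled is True, "text box should enable when the check box is checked"
```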

Other types of tests
Other types of tests that may be performed include:

1. Regression Tests which run existing tests on new versions of a program.

2. Error Tests which verify the system’s response to error conditions.

3. Stress Tests which measure the system’s response to repetitive or large amounts of data.

4. White-box vs. black-box tests, where white-box testing places the focus on the internal structure of the software (the code), while black-box testing views the software from the end-user perspective and is unaware of the underlying code.


Sunday, August 07, 2005

Quality Control Issues and Testing

Why Testing/Quality Assurance?
Rising customer expectations for fault-free, requirements-exact software have increased awareness of the importance of software testing as a critical activity. The payoffs from QA/testing come in the form of customer satisfaction, reduced costs through attention to quality, and efficient processes.

Who is responsible for testing?
Contrary to the myths, quality is the job of everyone involved in the SDLC of the software.

Cost, quality, and time are tightly coupled; a change in any two of these values will make an impact on the third.

Quality and cost together follow the law of diminishing returns: up to a certain point, quality increases roughly linearly with cost, but beyond that critical point the cost of acquiring further quality grows exponentially.
At least the optimal quality should be achieved, but maximum quality is not always desirable.

Proper way of testing
a. Find the test equivalence classes.
b. Add test cases for each class.

A test equivalence class is a property that remains constant across a group of cases. Different test cases can be written within the domain of an equivalence class. This helps the tester classify the work, which has a significant impact in managing the complexity of the various independent testing paths.

Chaos Theory
Software development and testing are becoming unpredictable these days. Small changes can cause a project to diverge exponentially from stability.

A tester should be able to prioritize and rationalize each test case within every test plan, so that the project manager feels confident and comfortable with the tester’s work.

Ideal Testing Procedure
It is an incremental process, which can be represented as concentric circles.
a) First, the build test (smoke test) is performed on the arrival of each new build.
b) On successful completion of this test, regression or other testing can be done, depending upon the requirement.


Saturday, August 06, 2005

Testing Interview Questions

Here are some questions you might be asked on a job interview for a testing opening:

1. Why did you ever become involved in QA/testing?
2. What is the testing lifecycle and explain each of its phases?
3. What is the difference between testing and Quality Assurance?
4. What is Negative testing?
5. What was a problem you had in your previous assignment (testing if possible)? How did you resolve it?
6. What are two of your strengths that you will bring to our QA/testing team?
7. How would you define Quality Assurance?
8. What do you like most about Quality Assurance/Testing?
9. What do you like least about Quality Assurance/Testing?
10. What is the Waterfall Development Method and do you agree with all the steps?
11. What is the V-Model Development Method and do you agree with this model?
12. What is the Capability Maturity Model (CMM)? At what CMM level were the last few companies you worked?
13. What is a "Good Tester"?
14. Could you tell me two things you did in your previous assignment (QA/Testing related hopefully) that you are proud of?
15. List 5 words that best describe your strengths.
16. What are two of your weaknesses?
17. What methodologies have you used to develop test cases?
18. In an application currently in production, one module of code is being modified. Is it necessary to re-test the whole application or is it enough to just test functionality associated with that module?
19. Define each of the following and explain how each relates to the other: Unit, System, and Integration testing.
20. Define Verification and Validation. Explain the differences between the two.
21. Explain the differences between White-box, Gray-box, and Black-box testing.
22. How do you go about going into a new organization? How do you assimilate?
23. Define the following and explain their usefulness: Change Management, Configuration Management, Version Control, and Defect Tracking.
24. What is ISO 9000? Have you ever been in an ISO shop?
25. When are you done testing?
26. What is the difference between a test strategy and a test plan?
27. What is ISO 9003? Why is it important?
28. What are ISO standards? Why are they important?
29. What is IEEE 829? (This standard is important for Software Test Documentation-Why?)
30. What is IEEE? Why is it important?
31. Do you support automated testing? Why?
32. We have a testing assignment that is time-driven. Do you think automated tests are the best solution?
33. What is your experience with change control? Our development team has only 10 members. Do you think managing change is such a big deal for us?
34. Are reusable test cases a big plus of automated testing and explain why.
35. Can you build a good audit trail using Compuware's QACenter products? Explain why.
36. How important is Change Management in today's computing environments?
37. Do you think tools are required for managing change? Explain, and please list some tools/practices that can help you manage change.
38. We believe in ad-hoc software processes for projects. Do you agree with this? Please explain your answer.
39. When is a good time for system testing?
40. Are regression tests required or do you feel there is a better use for resources?
41. Our software designers use UML for modeling applications. Based on their use cases, we would like to plan a test strategy. Do you agree with this approach, or would this mean more effort for the testers?
42. Tell me about a difficult time you had at work and how you worked through it.
43. Give me an example of something you tried at work but did not work out so you had to go at things another way.
44. How can one file-compare future-dated output files from a program which has changed, against a baseline run which used the current date for input, when the client does not want to mask dates on the output files to allow compares?
- Answer: Rerun the baseline with input files future-dated by the same number of days as the future-dated run of the changed program. Then run a file compare between the baseline's future-dated output and the changed program's future-dated output.
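A sketch of the final comparison step using Python's standard library; the file names are hypothetical:

```python
import filecmp
import difflib

baseline = "baseline_future_dated.out"  # baseline rerun with future-dated input
changed = "changed_future_dated.out"    # changed program's future-dated output

# Quick pass/fail: byte-for-byte comparison of the two output files.
if filecmp.cmp(baseline, changed, shallow=False):
    print("Outputs match the baseline.")
else:
    # On mismatch, show a unified diff to localize the differences.
    with open(baseline) as f1, open(changed) as f2:
        diff = difflib.unified_diff(f1.readlines(), f2.readlines(),
                                    fromfile=baseline, tofile=changed)
        print("".join(diff))
```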

Interviewing Suggestions
1. If you do not recognize a term ask for further definition. You may know the methodology/term but you have used a different name for it.
2. Always keep in mind that the employer wants to know what you are going to do for them; with that in mind, you should always stay positive.

Preinterview Questions
1. What is the structure of the company?
2. Who is going to do the interview? Gather background information on the interviewer if possible.
3. What is the employer's environment (platforms, tools, etc.)?
4. What are the employer's methods and processes used in software arena?
5. What is the employer's philosophy?
6. What is the project you are interviewing for about? Gather as much information as possible.
7. Any terminologies that the company may use.


Friday, August 05, 2005

Difference between Load and Stress Testing

One of the most common misuses of terminology is treating "load testing" and "stress testing" as synonymous. The consequence of this semantic abuse is usually that the system is neither properly "load tested" nor subjected to a meaningful stress test.

1. Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, mips, interrupts, etc.) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired depending on the application, the failure mode, consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.

2. Load testing is subjecting a system to a statistically representative (usually) load. The two main reasons for using such loads are in support of software reliability testing and of performance testing. The term "load testing" by itself is too vague and imprecise to warrant use: for example, do you mean representative "load", "overload", "high load", etc.? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.

3. A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, "load testing" is merely testing at the highest transaction arrival rate in performance testing.
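To make the distinction concrete, here is a minimal Python sketch (the endpoint, worker counts, and timeout are all hypothetical) that drives the same transaction at a representative level and then at a deliberate overload; only the second run is a stress test:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/txn"  # hypothetical transaction endpoint

def one_request():
    start = time.time()
    try:
        urllib.request.urlopen(URL, timeout=5).read()
        return time.time() - start
    except Exception:
        return None  # a failure under load/stress

def run(concurrency, total):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: one_request(), range(total)))
    ok = [r for r in results if r is not None]
    rate = len(ok) / total
    avg = sum(ok) / len(ok) if ok else float("nan")
    print(f"{concurrency} workers: {rate:.0%} succeeded, avg {avg:.3f}s")

# Load test: a representative, sustainable level of demand.
run(concurrency=10, total=100)
# Stress test: a deliberate overload, to see *how* the system fails.
run(concurrency=500, total=5000)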


Thursday, August 04, 2005

How does one make a good tester?

For starters, formal training and gaining experience in both:
1. Manual testing, and
2. Quality Assurance

Technical:
1. Understanding of systems and software
2. Programming skills are helpful

Skills a good SQA person has to have:
- A 'test to break' attitude
- Ability to take the point of view of the customer
- A strong desire for quality
- Attention to detail
- Tact and diplomacy, useful in maintaining a cooperative relationship with developers
- Ability to communicate with both technical (developers) and non-technical (customers, management) people
- Previous software development experience can be helpful, as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming
- Judgment skills, needed to assess high-risk areas of an application on which to focus testing efforts when time is limited
- Ability to understand the entire software development process and how it fits into the business approach and goals of the organization
- Ability to understand various sides of issues
- Patience
- Ability to find problems as well as to see 'what's missing', important for inspections and reviews
- Tenacity
- Resourcefulness
- Team spirit
- Salesmanship
- Ability to learn quickly
- Ability to research
- Perseverance


Gray Box Testing

Even though you probably don't have full knowledge of the internals of the product you test, a test strategy based partly on internals is a powerful idea. This can be called gray box testing.
The concept is simple:
If you know something about how the product works on the inside, you can test it better from the outside.

This is not to be confused with white box testing, which attempts to cover the internals of the product in detail. In gray box mode, you are testing from the outside of the product, just as you do with black box, but your testing choices are informed by your knowledge of how the underlying components operate and interact.

Gray box testing is especially important with Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces. Unless you understand the architecture of the Net, your testing will be skin deep. Hung Nguyen's Testing Applications on the Web (2000) is a good example of gray box test strategy applied to the Web.
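As a minimal sketch of the idea, assuming a hypothetical signup endpoint and a hypothetical SQLite users table: the action is driven from the outside, black-box style, but knowledge of the internal storage informs a deeper verification:

import sqlite3
import urllib.parse
import urllib.request

# Black-box step: exercise the product purely from the outside.
data = urllib.parse.urlencode({"email": "user@example.com"}).encode()
urllib.request.urlopen("http://localhost:8080/signup", data=data)  # hypothetical endpoint

# Gray-box step: we happen to know (an assumption in this sketch) that
# signups land in a local SQLite 'users' table, so we verify deeper than
# the UI alone would let us.
conn = sqlite3.connect("app.db")
row = conn.execute(
    "SELECT status FROM users WHERE email = ?", ("user@example.com",)
).fetchone()
assert row is not None and row[0] == "pending", "signup was not persisted as expected"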


Wednesday, August 03, 2005

Black box/white box testing

Black-box and white-box are test design methods.

Black-box test design treats the system as a "black-box", so it doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.

White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this "gray-box" or "translucent-box" test design.

It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design methods. Unit testing is usually associated with structural test design, but this is because testers usually don't have well-defined requirements at the unit level to validate.
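As a small illustration of the two design methods applied to the same (hypothetical) function: the black-box cases come from the stated requirement alone, while the white-box case is chosen to cover a specific internal branch:

def classify_triangle(a, b, c):
    """Hypothetical unit under test: classify a triangle by its sides."""
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"  # internal degenerate-input branch
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box cases: derived from the functional requirement alone.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box case: chosen because we know the code has a degenerate-input
# branch (a + b <= c) and we want that structure covered.
assert classify_triangle(1, 2, 3) == "not a triangle"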


Tuesday, August 02, 2005

Web Applications Testing

Web-based applications present new challenges. These include:
- Short release cycles;
- Constantly changing technology;
- A potentially huge number of users during the initial website launch;
- Inability to control the user's running environment;
- 24-hour availability of the web site.

The quality of a website must be evident from the outset. Any difficulty, whether in response time, accuracy of information, or ease of use, will compel the user to click to a competitor's site. Such problems translate into lost users, lost sales, and a poor company image.

To overcome these types of problems, use the following techniques:
1. Functionality Testing
Functionality testing involves making sure the features that most affect user interactions work properly. These include:
· forms
· searches
· pop-up windows
· shopping carts
· online payments
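As one concrete example from the list above, here is a minimal sketch of a search-feature check; the endpoint and query parameter are hypothetical:

import urllib.parse
import urllib.request

# Hypothetical search endpoint and query parameter.
query = "laptop"
url = "http://localhost:8080/search?" + urllib.parse.urlencode({"q": query})

with urllib.request.urlopen(url, timeout=10) as resp:
    assert resp.status == 200, "search page did not load"
    body = resp.read().decode("utf-8", errors="replace")

assert query in body, "search results never mention the query term"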

2. Usability Testing
Many users have a low tolerance for anything that is difficult to use or that does not work. A user's first impression of the site is important, and many websites have become cluttered with an increasing number of features. For general-use websites, frustrated users can easily click over to a competitor's site.

Usability testing involves the following main steps:
· identify the website's purpose;
· identify the intended users;
· define tests and conduct the usability testing;
· analyze the acquired information.

3. Navigation Testing
Good navigation is an essential part of a website, especially one that is complex and provides a lot of information. Assessing navigation is a major part of usability testing.

4. Forms Testing
Websites that use forms need tests to ensure that each field works properly and that each form posts all data as intended by the designer.
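A minimal sketch of such a form check, with a hypothetical endpoint and field names: submit every field, then confirm each value comes back on the confirmation page (assuming the page echoes the submission):

import urllib.parse
import urllib.request

# Hypothetical contact form and field names.
fields = {"name": "Test User", "email": "user@example.com", "message": "hello"}
data = urllib.parse.urlencode(fields).encode()

with urllib.request.urlopen("http://localhost:8080/contact", data=data) as resp:
    body = resp.read().decode("utf-8", errors="replace")

# Assume the confirmation page echoes each submitted value back.
for value in fields.values():
    assert value in body, f"field value {value!r} was not posted or echoed"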

5. Page Content Testing
Each web page must be tested for correct content from the user's perspective. These tests fall into two categories: ensuring that each component functions correctly and ensuring that the content of each component is correct.

6. Configuration and Compatibility Testing
A key challenge for web applications is ensuring that the user sees a web page as the designer intended. The user can select different browser software and browser options, use different network software and online services, and run other concurrent applications. We execute the application under every browser/platform combination to ensure the website works properly under various environments.
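As a rough sketch of automating such a matrix, here is one possible approach using Selenium WebDriver (modern tooling, an assumption on our part, as are the URL and expected title); each browser in the matrix runs the same check:

# Selenium WebDriver is assumed installed, with browser drivers on the PATH.
from selenium import webdriver

BROWSERS = {
    "firefox": webdriver.Firefox,
    "chrome": webdriver.Chrome,
}

def check_homepage(make_driver, url="http://localhost:8080/"):
    driver = make_driver()
    try:
        driver.get(url)
        # Hypothetical expected title; a real suite would check layout too.
        assert "Example Store" in driver.title
    finally:
        driver.quit()

for name, make_driver in BROWSERS.items():
    check_homepage(make_driver)
    print(name, "OK")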

7. Reliability and Availability Testing
A key requirement of a website is that it be available whenever the user requests it, 24 hours a day, every day. The number of users accessing the website simultaneously may also affect the site's availability.
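A minimal availability probe might look like the following sketch (the URL and probe interval are hypothetical); it logs outage windows over a bounded monitoring period:

import time
import urllib.request

URL = "http://localhost:8080/"  # hypothetical site under test

# Probe once a minute for 24 hours and log any outage windows.
for _ in range(60 * 24):
    try:
        urllib.request.urlopen(URL, timeout=10)
        print(time.strftime("%H:%M:%S"), "up")
    except Exception as exc:
        print(time.strftime("%H:%M:%S"), "DOWN:", exc)
    time.sleep(60)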

8. Performance Testing
Performance testing, which evaluates system performance under normal and heavy usage, is crucial to the success of any web application. A system that takes too long to respond may frustrate the user, who can then quickly move to a competitor's site. Given enough time, every page request will eventually be delivered, so performance testing seeks to ensure that the website server responds to browser requests within defined parameters.
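A minimal sketch of such a check (the URL, sample count, and threshold are hypothetical): time a series of requests and fail if the worst case exceeds the defined parameter:

import time
import urllib.request

URL = "http://localhost:8080/page"  # hypothetical page under test
MAX_SECONDS = 2.0                   # hypothetical "defined parameter"

samples = []
for _ in range(20):
    start = time.time()
    urllib.request.urlopen(URL, timeout=30).read()
    samples.append(time.time() - start)

print(f"avg {sum(samples) / len(samples):.3f}s, worst {max(samples):.3f}s")
assert max(samples) <= MAX_SECONDS, "response time exceeds the defined parameter"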

9. Load Testing
The purpose of load testing is to model real-world experiences, typically by generating many simultaneous users accessing the website. We use automation tools to increase the ability to conduct a valid load test, because they can emulate thousands of users by sending simultaneous requests to the application or the server.
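A minimal sketch of this idea (the base URL, click-path, and user count are hypothetical): each simulated user walks the same small session, and all users run simultaneously:

import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE = "http://localhost:8080"              # hypothetical site under test
SESSION = ["/", "/search?q=test", "/cart"]  # hypothetical user click-path
USERS = 200                                 # simultaneous simulated users

def one_user(_):
    try:
        for path in SESSION:
            urllib.request.urlopen(BASE + path, timeout=15).read()
        return True
    except Exception:
        return False

with ThreadPoolExecutor(max_workers=USERS) as pool:
    completed = sum(pool.map(one_user, range(USERS)))
print(f"{completed}/{USERS} simulated users completed the session")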

10. Stress Testing
Stress testing consists of subjecting the system to varying and maximum loads to evaluate the resulting performance. We use automated test tools to simulate loads on the website and execute the tests continuously for several hours or days.

11. Security Testing
Security is a primary concern when communicating and conducting business over the internet, especially for sensitive and business-critical transactions. The user wants assurance that personal and financial information is secure. Finding the vulnerabilities in an application that would grant an unauthorized user access to the system is important.
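One small, concrete check in this vein, as a sketch (the protected URL is hypothetical, and we assume the server refuses with 401/403 rather than redirecting to a login page):

import urllib.error
import urllib.request

# Hypothetical protected resource that must not be readable anonymously.
# This sketch assumes the server answers 401/403 rather than redirecting
# to a login page; a redirect would need a check on the final URL instead.
PROTECTED = "http://localhost:8080/account/orders"

try:
    urllib.request.urlopen(PROTECTED, timeout=10)
    print("FAIL: protected page was served without authentication")
except urllib.error.HTTPError as err:
    assert err.code in (401, 403), f"unexpected status {err.code}"
    print("PASS: unauthenticated access refused with status", err.code)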


The following is the strategy used in one of my projects:
1. From the requirements, list the priorities of testing.
2. Use a link checker to check for broken text and image links (a minimal sketch of such a checker appears after this list). This will alert you to missing or broken links, but not to links that point to unintended locations or files; ignore such links for now, as this is just a cursory check. Report the results.
3. Collect as many scenarios as possible of real users browsing your site. If you are testing an existing site, you can get them from the site stats. These will probably cover all of your existing user base; if your site needs to target an additional user base, that is a whole other task.
4. Test using the scenarios.
5. From your testing priorities, list the test cases not touched by your previous test scenarios. If you want a complete list and have time to create one, go ahead; otherwise, list them and note the results from your earlier tests here. We just listed a short description of each test case, its input, and its output. It is helpful to include a column for recording your results as you test.
6. Start testing using the test cases and record the results.
7. Prioritize and report the defects. If testing time is short, there may not be sufficient time to fix and re-test all of the defects either; prioritizing them will help ensure that at least the major bugs are fixed. The rest will be fixed or ignored depending on their priority and your project deadlines.
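For step 2, a minimal link-checker sketch (the start URL is hypothetical): it collects anchor and image links from one page and reports any that fail to load, which, per the caveat in step 2, says nothing about whether a link points where it was intended:

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

START = "http://localhost:8080/"  # hypothetical page to check

class LinkCollector(HTMLParser):
    """Collects href/src targets from anchor and image tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])
        elif tag == "img" and attrs.get("src"):
            self.links.append(attrs["src"])

page = urllib.request.urlopen(START, timeout=10).read()
collector = LinkCollector()
collector.feed(page.decode("utf-8", errors="replace"))

for link in collector.links:
    target = urljoin(START, link)
    try:
        urllib.request.urlopen(target, timeout=10)
    except Exception as exc:
        print("BROKEN:", target, "->", exc)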
