Interview Questions and Answers for QTP (Quick Test Professional)

Thursday, December 01, 2005

Levels of testing

A step into the world of software testing can seem like a nightmare: memories of unending test scripts, weeks of tedious repetitive testing and mountains of problem reports. Even after apparently exhaustive testing, bugs are still found, there is rework to carry out and, inevitably, yet more testing. But with careful planning, software testing can go like a dream.

Levels of testing:
The key to a successful test strategy is picking the right level of testing at each stage of a project. The first point to take into account is that testing will not find all bugs, so a decision has to be made about the level of coverage required. This will depend on the software application and the level of quality required by the end users. A safety-critical system such as a commercial fly-by-wire aircraft will clearly need a far higher degree of testing than a cheap games package for the domestic market.


This sort of decision needs to be made at an early stage in a project and will be documented in an overall software test plan. Typically this general plan is drafted at the same stage that the requirements are catalogued and is finalized once functional design work has been completed. It is at this point that confusion over the different levels of testing often arises. Adopting the simple philosophy that each design document requires a corresponding test document cuts right through this confusion. The definition of the requirements will therefore be followed closely by the preparation of a test document - the Acceptance Tests.

Acceptance Tests are written from the Requirements Specification, which, apart from the overall test plan, is the only document used in writing them. The point of Acceptance Tests is to demonstrate that the requirements have been met, so no other input is needed. It follows that the Acceptance Tests will be written at a fairly abstract level, as there is little detail at this stage about how the software will operate. An important point to bear in mind for all testing is that the tests should be written by someone with a degree of independence from the project. This may be achieved by employing an external software consultant, or it may be adequate to use someone other than the author of the Requirements Specification. This helps to remove ambiguity from the design documents and ensures that the tests reflect the design rather than any testing that has already been performed.
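As a rough illustration (not from the original article), the requirement-to-test tracing described above can be kept explicit in a simple script. The Python below is a minimal sketch; the requirement IDs and descriptions are hypothetical placeholders.

# A minimal sketch of tracing acceptance tests back to the Requirements
# Specification, so each numbered requirement is demonstrably covered.
# The requirement IDs and descriptions are hypothetical placeholders.
ACCEPTANCE_TESTS = {
    "REQ-001": "User can create an invoice from the main menu",
    "REQ-002": "Invoice totals include tax at the configured rate",
}

def uncovered(executed_ids):
    """Return the requirements that still lack a passing acceptance test."""
    return sorted(set(ACCEPTANCE_TESTS) - set(executed_ids))

print(uncovered(["REQ-001"]))  # -> ['REQ-002']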

Following the classic software lifecycle, the next major document to be produced is the Functional Specification (also called the External Design or Logical Design), which provides the first translation of the requirements into a working system, and it is here that the System Tests are written. As with the overall test plan, it would be usual to write a System Test Plan describing the general approach to how system testing will be carried out. This might define the level of coverage (e.g. the level of validation testing to be carried out on user-enterable fields) and the degree of regression testing to be included. In any case, system testing would normally only exercise the system as far as it is described in the Functional Specification. System Tests will be based on the Functional Specification, and this will often be the only input document used.
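To make the black-box character of a System Test concrete, here is a minimal Python sketch; the invoice_app command and its behaviour on a negative quantity are hypothetical stand-ins for whatever the Functional Specification actually defines.

# A sketch of a black-box system test: the system is exercised only
# through its external interface, as described in the Functional
# Specification. "invoice_app" and its command-line contract are
# hypothetical examples.
import subprocess

def run_app(args):
    """Invoke the system under test exactly as an end user would."""
    return subprocess.run(["invoice_app", *args],
                          capture_output=True, text=True)

def test_rejects_negative_quantity():
    # Hypothetical requirement: a negative quantity is refused with
    # a validation message and a non-zero exit code.
    result = run_app(["--quantity", "-1"])
    assert result.returncode != 0
    assert "quantity" in result.stderr.lower()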

The lowest level of testing is module testing, which is based on the individual module designs. This testing will normally be the most detailed and will usually cover areas such as range checking on input fields, the exercising of output messages and other module-level details. Splitting the tests into these different segments helps to keep re-testing to a minimum: a problem discovered in one module during System Testing will simply mean re-executing that module's tests, followed by the System Tests.
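For example, range checking on an input field might be exercised at module level along the lines of the following sketch; validate_age and its 0-130 range are invented here for illustration.

# A sketch of a module-level test exercising the boundary values a
# module design might specify. validate_age and the 0-130 range are
# hypothetical examples.
import unittest

def validate_age(age):
    """Module under test: accept integer ages in the inclusive range 0-130."""
    return isinstance(age, int) and 0 <= age <= 130

class AgeRangeTests(unittest.TestCase):
    def test_boundaries_accepted(self):
        for value in (0, 1, 129, 130):      # on and just inside the limits
            self.assertTrue(validate_age(value))

    def test_out_of_range_rejected(self):
        for value in (-1, 131, 1000):       # outside the limits
            self.assertFalse(validate_age(value))

if __name__ == "__main__":
    unittest.main()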

Other levels:
Many other types of testing are available (e.g. installation, load, performance), though all will normally fall under one of the previous categories. One of the most important is Regression Testing, which is used to confirm that a new release of software has not regressed (i.e. lost functionality). In an extreme case (which might be the product of a poorly planned test program), all previous tests must be re-executed on the new system. This may be acceptable for a system with automated testing capabilities, but for an interactive system it could mean extensive use of test staff. For a software release it may be perfectly adequate to re-execute a subset of the module and system tests, and to write a new acceptance test based only on the requirements for this release.
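One practical way to keep such a regression subset is to tag the tests that must be re-run on every release. The sketch below assumes pytest and its marker mechanism; the test bodies are hypothetical.

# A sketch of a tagged regression subset, assuming pytest. Only tests
# marked "regression" are re-executed on a new release, e.g. with:
#     pytest -m regression
# (Register the marker in pytest.ini to avoid an unknown-marker warning.)
import pytest

@pytest.mark.regression
def test_core_invoice_total():
    # Core behaviour that every release must preserve (hypothetical).
    assert round(2 * 9.99, 2) == 19.98

def test_new_discount_feature():
    # Release-specific test, not part of the standing regression subset.
    assert 100.0 - 5.0 == 95.0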

Conclusion
Testing can make or break a project - it is vital that testing is planned to ensure that adequate coverage is achieved. The emphasis here is on adequate: too much testing can be uneconomic, while too little can result in poor-quality software.

QTP and Winrunner Questions and Answers
Contact: qualityvista @ gmail.com


2 Comments:

At 3:05 PM, December 05, 2005, Anonymous Anonymous said...

Good "checklist" for project managers and sponsors - hopefully we testers are providing the details they'll need to properly plan for testing. Two points I'd make:
1) I find that acceptance testing is best done by those who will actually use the software, and it's best if they've had some level of involvement during the project so they're not coming into testing cold (those who participated during requirements gathering, for example). I agree that it can't be someone too closely involved with the project - a peer of mine is on a project where developers are doing acceptance testing! I wonder how objective they'll be :)
2) Don't forget about performance/stress/volume testing. Exact needs will differ based on the project, but it's always best to plan to do something rather than find out after deployment.

 
At 11:02 AM, November 01, 2007, Blogger . said...

Nice post. All levels of testing are explained in a very good way.

 
