Interview Questions and Answers for QTP (Quick Test Professional)

Tuesday, December 20, 2005

Security Testing

Security Testing:
Testing which confirms that the program restricts access to authorized personnel only, and that those personnel can reach only the functions available to their security level. The purpose of security testing is to determine how well a system is protected against unauthorized internal or external access, or willful damage.
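To make the definition concrete, here is a minimal sketch in Python of the kind of behavior a security test verifies; the levels and function names are hypothetical, invented for the example:

    # Minimal sketch of level-based access control (hypothetical levels
    # and function names). A security test asserts both directions:
    # authorized access works AND unauthorized access is blocked.

    LEVELS = {"guest": 0, "operator": 1, "admin": 2}

    # Each protected function declares the minimum level it requires.
    REQUIRED_LEVEL = {
        "view_report": "guest",
        "edit_record": "operator",
        "delete_user": "admin",
    }

    def can_access(user_level, function_name):
        """True only if the user's clearance meets the function's requirement."""
        required = REQUIRED_LEVEL[function_name]
        return LEVELS[user_level] >= LEVELS[required]

    assert can_access("admin", "delete_user")      # authorized access succeeds
    assert not can_access("guest", "delete_user")  # unauthorized access is denied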

Types of Security Testing:
1. Vulnerability Scanning
2. Security Scanning
3. Penetration Testing
4. Risk Assessment
5. Security Auditing
6. Ethical Hacking
7. Posture Assessment & Security Testing

Vulnerability Scanning uses automated software to scan one or more systems against known vulnerability signatures. Vulnerability analysis is a systematic review of networks and systems that determines the adequacy of security measures, identifies security deficiencies, and evaluates the effectiveness of existing and planned safeguards. It can justify the resources required to strengthen an organization's perimeter security, or alternatively give you the peace of mind that your network is secure. Examples of such software are Nessus, SARA, and ISS.
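As a rough illustration of what such scanners automate, the Python sketch below grabs a service banner and matches it against known-vulnerable version strings. The signature list here is a tiny made-up stand-in for the thousands of signatures a real scanner ships with, and you should only scan hosts you are authorized to test:

    import socket

    # Hypothetical signatures: banner substrings that indicate a
    # known-vulnerable service version.
    KNOWN_VULNERABLE = ["OpenSSH_2.", "vsFTPd 2.3.4", "Apache/1.3"]

    def grab_banner(host, port, timeout=3.0):
        """Connect to a service and read whatever banner it sends first."""
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(1024).decode(errors="replace").strip()

    def scan(host, ports):
        for port in ports:
            try:
                banner = grab_banner(host, port)
            except OSError:
                continue  # port closed, filtered, or no banner sent
            for sig in KNOWN_VULNERABLE:
                if sig in banner:
                    print(f"{host}:{port} matches {sig!r}: {banner}")

    # Example call (authorized targets only):
    # scan("192.0.2.10", [21, 22, 25, 80])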

Security Scanning is a Vulnerability Scan plus manual verification. The security analyst then identifies network weaknesses and performs a customized, professional analysis.

Penetration Testing takes a snapshot of the security of one machine, the "trophy". The tester attempts to gain access to the trophy and prove that access, usually by saving a file on the machine. It is a controlled test, coordinated with the client to ensure that no laws are broken, and it is a live test mimicking the actions of real-life attackers. Is the security of your IT systems up to the task? Conducting a penetration test is valuable experience in preparing your defenses against the real thing.

Risk Assessment involves a security analysis based on interviews, combined with research into business, legal, and industry justifications.

Security Auditing involves hands-on internal inspection of operating systems and applications, often via line-by-line inspection of the code. Thorough and frequent security audits mean your network is more secure and less prone to attack.

Ethical Hacking is basically a number of Penetration Tests on a number of systems on a network segment.

Posture Assessment & Security Testing combine Security Scanning, Ethical Hacking and Risk Assessments to show an overall Security Posture of the organization. It needs a methodology to follow.

The six testing sections are:
1. Information Security
2. Process Security
3. Internet Technology Security
4. Communications Security
5. Wireless Security
6. Physical Security

The Information Security section is where an initial Risk Assessment is performed. All pertinent documentation is compiled and analyzed to compute "Perfect Security". This level of Perfect Security then becomes the benchmark for the rest of the test. Throughout the other five sections, all testing results are reviewed against this benchmark and the final report includes a gap analysis providing solutions to all outstanding vulnerabilities.

Process Security addresses social engineering. Through Request, Guided Suggestion, and Trusted Persons testing, the tester can gauge the security awareness of your personnel.

The Internet Technology Security Testing section contains what most people view as a security test. Various scans and exploit research will point out any software and configuration vulnerabilities along with comparing the business justifications with what is actually being deployed.

Communications Security Testing involves testing fax, voicemail, and voice systems. These systems have been known to be exploited, leaving their victims to run up costly bills, and most such exploits go unnoticed unless the systems are tested.

Wireless Security: Wireless technology has been gaining use rapidly over the last few years. The Wireless Security Testing section was created to address the glaring vulnerabilities that can result from misconfiguration by engineers with limited knowledge of this recent technology.

Physical Security Testing checks areas such as physical access control and the environmental and political situation surrounding the site. For example, has your data center been placed in the flight path of an airport runway? What is the risk of an airliner engine landing in your server rack? If you have a redundant data center, the risk may be acceptable. Another example is a call center located in a flood plain.


Monday, December 19, 2005

Funny replies developers give to testers...

Top 20 replies by Programmers to Testers when their programs don't work:

20. "That's weird..."
19. "It's never done that before."
18. "It worked yesterday."
17. "How is that possible?"
16. "It must be a hardware problem."
15. "What did you type in wrong to get it to crash?"
14. "There is something funky in your data."
13. "I haven't touched that module in weeks!"
12. "You must have the wrong version."
11. "It's just some unlucky coincidence."
10. "I can't test everything!"
9. "THIS can't be the source of THAT."
8. "It works, but it hasn't been tested."
7. "Somebody must have changed my code."
6. "Did you check for a virus on your system?"
5. "Even though it doesn't work, how does it feel?
4. "You can't use that version on your system."
3. "Why do you want to do it that way?"
2. "Where were you when the program blew up?"
1. "It works on my machine"


Monday, December 12, 2005

Link for WinRunner Demo

Click here for a free trial version of WinRunner, Mercury Interactive's enterprise functional testing tool.

Note that you'll need to contact their sales department in your region.


Saturday, December 03, 2005

What is Testware?

We know that hardware development engineers produce hardware and software development engineers produce software. In the same way, software test engineers produce testware.

Testware is produced by both verification and validation testing methods. It includes test cases, test plans, test reports, and so on. Like software, testware should be placed under the control of a configuration management system, saved, and faithfully maintained.

Like software, testware has significant value because it can be reused. The tester's job is to create testware that will have a specified lifetime and be a valuable asset to the company.
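As a small illustration, a test case is testware in exactly this sense: the file below, a hypothetical pytest example with a stand-in for the code under test, can be checked into the same configuration management system as the software and re-run against every future release:

    # test_login.py -- a testware artifact: version it alongside the software.
    # 'authenticate' is a hypothetical stand-in for the code under test.

    def authenticate(username, password):
        return username == "alice" and password == "s3cret"

    def test_valid_login_is_accepted():
        assert authenticate("alice", "s3cret")

    def test_invalid_password_is_rejected():
        assert not authenticate("alice", "wrong")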


Thursday, December 01, 2005

Levels of testing

A step into the world of software testing might seem like a nightmare, with memories of unending test scripts providing weeks of tedious, repetitive testing and mountains of problem reports. Even after apparently exhaustive testing, bugs are still found, there is rework to carry out, and inevitably yet more testing. But with careful planning, software testing can go like a dream.

Levels of testing:
The key to a successful test strategy is picking the right level of testing at each stage of a project. The first point to take into account is that testing will not find all bugs, so a decision has to be made about the level of coverage required. This will depend on the software application and the level of quality required by the end users. A safety-critical system, such as a commercial fly-by-wire aircraft, will clearly need a far higher degree of testing than a cheap games package for the domestic market.


This sort of decision needs to be made at an early stage in a project and will be documented in an overall software test plan. Typically this general plan will be drafted at the same stage that the requirements are catalogued, and finalized once functional design work has been completed. It is at this point that confusion over the different levels of testing often arises. Adopting the simple philosophy that each design document requires a corresponding test document cuts right through this confusion. The definition of the requirements will therefore be followed closely by the preparation of a test document - the Acceptance Tests.

Acceptance Tests are written from the Requirements Specification, which, apart from the overall test plan, is the only document used in writing them. The point of Acceptance Tests is to demonstrate that the requirements have been met, so no other input is needed. It follows that the Acceptance Tests will be written at a fairly abstract level, as there is little detail at this stage of how the software will operate. An important point to bear in mind for all testing is that the tests should be written by a person who has some degree of independence from the project. This may be achieved by employing an external software consultant, or it may be adequate to use someone other than the author of the Requirements Specification. This helps to remove ambiguity from the design documents and to ensure that the tests reflect the design, rather than repeating testing that has already been performed.
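One simple way to keep that link to the Requirements Specification explicit is a traceability check: every requirement must be demonstrated by at least one acceptance test. The Python sketch below uses made-up requirement IDs and test names to show the idea:

    # Hypothetical traceability check between the Requirements
    # Specification and the Acceptance Tests.

    REQUIREMENTS = ["REQ-001", "REQ-002", "REQ-003"]

    ACCEPTANCE_TESTS = {
        "test_user_can_log_in": ["REQ-001"],
        "test_report_can_be_exported": ["REQ-002", "REQ-003"],
    }

    def uncovered(requirements, tests):
        """Return requirement IDs not demonstrated by any acceptance test."""
        covered = {req for reqs in tests.values() for req in reqs}
        return [r for r in requirements if r not in covered]

    assert uncovered(REQUIREMENTS, ACCEPTANCE_TESTS) == []  # full coverage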

Following the classic software lifecycle, the next major document to be produced is the Functional Specification (External Design or Logical Design), which provides the first translation of the requirements into a working system, and it is here that the System Tests are written. As for the overall system, it would be usual to write a System Test Plan describing the general approach to how system testing will be carried out. This might define the level of coverage (e.g. the level of validation testing to be carried out on user-enterable fields) and the degree of regression testing to be included. In any case, system testing would normally only exercise the system as far as described in the Functional Specification. System Tests will be based on the Functional Specification, and this will often be the only document used as input.

The lowest level of testing is module testing, which is based on the individual module designs. This testing will normally be the most detailed and will usually cover areas such as range checking on input fields, exercising of output messages, and other module-level details. Splitting the tests into these different segments helps to keep re-testing to a minimum: a problem discovered in one module during System Testing will simply mean re-executing that module test, followed by the System Tests.
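For example, a module test for range checking on a single input field might look like the sketch below; the field and its 0-130 limits are invented for the example:

    # Module-level test: range checking on one input field.
    # 'validate_age' and its limits are hypothetical.

    def validate_age(value):
        return isinstance(value, int) and 0 <= value <= 130

    def test_age_range_checking():
        assert validate_age(0) and validate_age(130)  # boundary values accepted
        assert not validate_age(-1)                   # below range rejected
        assert not validate_age(131)                  # above range rejected
        assert not validate_age("42")                 # wrong type rejected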

Other levels:
Many other types of testing are available (e.g. installation, load, performance), though all will normally fall under one of the previous categories. One of the most important is Regression Testing, which is used to confirm that a new release of software has not regressed (i.e. lost functionality). In the extreme case (which might be the product of a poorly planned test program), all previous tests must be re-executed on the new system. This may be acceptable for a system with automated testing capabilities, but for an interactive system it could mean extensive use of test staff. For a software release it may be perfectly adequate to re-execute a subset of the module and system tests, and to write a new acceptance test based only on the requirements for this release.
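With an automated test runner, selecting that subset is cheap. For instance, if pytest were the runner, tests chosen for the regression set could be marked and re-run on their own; a sketch, with the marker name invented for the example:

    # Run only the regression subset with:  pytest -m regression
    # (register the 'regression' marker in pytest.ini to avoid warnings)
    import pytest

    @pytest.mark.regression
    def test_existing_login_still_works():
        assert True  # placeholder for a real regression check

    def test_feature_new_in_this_release():
        assert True  # new-feature test, not part of the regression subset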

Conclusion
Testing can make or break a project: it is vital that testing is planned to ensure that adequate coverage is achieved. The emphasis here is on adequate; too much testing can be uneconomic, too little can result in poor-quality software.
