Interview Questions and Answers for QTP (Quick Test Professional)

Friday, September 30, 2005

Test Case Maintenance

Changes to product features can happen many times during a development lifecycle. They may be driven by changing customer requirements, by design changes, or by feedback that arrives as late as customer acceptance tests. Often, making the changes reported by customers is crucial. In such scenarios, the test cases drawn up by test engineers can become obsolete, rendering the whole test-planning effort fruitless.

Planning for test case maintenance is critical because:
- Test cases may become redundant due to behavior changes.
- The expected outcome of some test cases may change.
- Additional test cases may need to be added because of altered conditions.

For effective Test Case Maintenance:
1. It is important to keep the test case list updated to reflect changes in product features; otherwise, in the next phase of testing there will be a tendency to discard the current list and start afresh.

2. Treating the test case list as a working document is very important. It should be updated on a daily basis. To enable each member of the team to update test cases, keep the worksheet on a shared drive, for example as a shared Excel workbook that allows concurrent editing.

3. Keeping track of test case metrics complements defect report metrics and gives a better view of product quality. Knowing that 30 defects are pending, only 50 test cases remain to be executed, and 1200 test cases have passed tells you far more about product quality than a defect count alone (see the sketch after this list).

4. An up-to-date test case list also works as proof of testing. When a defect is reported later, it is possible to check whether that case was covered during testing.

5. The updated list can be used by whoever tests the next version of the product; the same team need not work on the next release, because the list matches the current behavior of the application.
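
As a rough illustration of point 3, here is a minimal Python sketch of how such counts could be rolled into a single quality summary; the failed count and the function are invented for the example, while the other figures echo the ones above.

# Hypothetical sketch: roll test case metrics and defect metrics into one
# quality summary. The 'failed' figure is invented; the rest echo the post.
def quality_summary(passed, failed, pending, open_defects):
    executed = passed + failed
    total = executed + pending
    pass_rate = passed / executed if executed else 0.0
    progress = executed / total if total else 0.0
    return (f"{passed} passed of {total} planned cases "
            f"({pass_rate:.0%} pass rate, execution {progress:.0%} complete), "
            f"{pending} cases pending, {open_defects} defects open")

print(quality_summary(passed=1200, failed=20, pending=50, open_defects=30))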


Wednesday, September 28, 2005

KPA in CMM

The Key Process Areas (KPAs) define "building blocks" based on industry best practices. The ultimate goal is to establish "continual improvement" of the software engineering process and the resulting products. The KPAs of the lower levels focus mainly on management processes (and minimal industry standards), while the KPAs of the higher levels focus more on organizational and technical processes (and more on industry best practice).


Tuesday, September 27, 2005

Localization Testing

Around one year ago I got the shock of my life as a tester when my lead asked me to test our product in its localized versions. I was supposed to test our product on French and German platforms, and one can imagine my condition, as I didn’t have the faintest idea of either French or German. That was my first encounter with localization testing. It was not as bad as it sounds, and in fact after a few days I found it quite interesting, as I got a chance to learn a little about new languages and operating systems. All said and done, let’s now delve a bit deeper into localization testing.

What is Localization?
Before going on to localization testing, let’s first understand what localization is. Localization is the process of adapting software for a new region. This involves translating the language and making other changes that help the product fit the new culture. Localization of a product is carried out only when the product is stable in the original version.

What is Localization Testing?
Localization testing is the testing carried out to check the quality of a product’s localization for a particular target culture/locale. It can be executed only on the localized version of a product. The testing effort should be focused on the following areas to see whether the product conforms to the required locale or market segment:

- Appearance
One of the major areas affected by localization is the user interface (UI) of the product. The localized version must have the same look and feel as the original version. One important thing that needs to be looked at is the text appearing in the UI, because “translated text always expands”.
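
Because translated text tends to expand, one simple check is to compare localized string lengths against the space budgeted in the UI. Here is a minimal Python sketch; the resource dictionaries and the 40% expansion budget are assumptions, not values from any real product.

# Hypothetical sketch: flag localized strings that may overflow the UI.
# The resource dictionaries and the 40% expansion budget are assumptions.
EXPANSION_BUDGET = 1.4  # allow roughly 40% growth over the source string

english = {"btn_ok": "OK", "msg_saved": "Your settings have been saved."}
german = {"btn_ok": "OK", "msg_saved": "Ihre Einstellungen wurden erfolgreich gespeichert."}

for key, source in english.items():
    translated = german.get(key, "")
    if len(translated) > len(source) * EXPANSION_BUDGET:
        print(f"{key}: translation is {len(translated)} chars "
              f"vs a budget of {len(source) * EXPANSION_BUDGET:.0f} - check the layout")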

- Culture/locale-specific and language/region-specific areas
A localization tester should consider the target region’s character sets and keyboard layouts, since these are bound to differ between the original version and the localized version. Attention should also be given to small things such as hotkeys, uppercase and lowercase conversion, garbled translation, etc.
Another important area is to check that all symbols and images are appropriate for the target culture and market.
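
Uppercase and lowercase conversion is a good example of a locale-specific trap: in Turkish, for instance, the lowercase of “I” is the dotless “ı”, so a naive case-insensitive comparison can fail. A minimal Python sketch of the kind of mismatch involved (the sample word is illustrative, and Python’s default lower() is deliberately used as the “naive” conversion):

# Hypothetical sketch: locale-sensitive case conversion differs from the
# default English-style rules. Turkish is the classic example.
word = "TITLE"
print(word.lower())                        # 'title' - default lowering

# In Turkish, uppercase 'I' lowercases to dotless 'i' (U+0131), so the two
# "lowercase" forms of the same word are not equal.
turkish_lower = word.replace("I", "\u0131").lower()
print(turkish_lower)                       # 'tıtle' (with dotless i)
print(word.lower() == turkish_lower)       # False - a naive comparison fails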

- Functionality
The localized version should have the same functionality as the original version. To ensure this, the basic functionality tests should be carried out. Setup and upgrade tests should be run in the localized environment. Application and hardware compatibility tests also need to be planned according to the product's target region.

- Technical Consistency
Consistency between the written documentation and the software product must be guaranteed. When shipping a localized product it should be ensured that localized documentation (manuals, online help, context help, etc.) is included.

Items that need to be checked are:
1. The quality of the translation.
2. The completeness of the translation.
3. Consistent use of terminology across all documents and the application UI.

As the world shrinks a little more with each passing day, every software firm wants to venture into newer markets, and this can be achieved only by catering to people’s needs and giving them products in their own languages.


Friday, September 23, 2005

Responsibilities of a QA Engineer

Title of the post should ideally be "Responsibilities of a Test Engineer" :D

− Understand project requirements.
− Prepare / update the test case document to cover all aspects of the application.
− Prepare the test setup.
− Deploy the build in the required setup.
− Conduct the testing, including smoke, sanity, and bug bash / execute the test cases.
− Update the test result document.
− Attend regular client calls.
− Log / file defects in the defect tracking tool / bug report.
− Verify defects.
− Discuss doubts/queries with the development team / client.
− Conduct internal trainings on various products.


Thursday, September 22, 2005

Exploratory Testing

Have you ever tested a piece of software that you were not very familiar with? Perhaps you did not have the luxury of ready-made test cases at hand and could not afford to spend time writing them. Nevertheless, you went ahead, tested the product, and did quite a good job. If you remember, it required a lot of logical thinking, formulating your thoughts into test cases in real time and concurrently executing them. The all-important question that arises here is: “Is this sort of testing justified, and is it productive?” Both of these queries were quite well answered when I stumbled on the form of testing called exploratory testing.

According to Cem Kaner, who coined the term “exploratory testing”, “most testers spend 25% of their time doing exploratory testing, with 25% creating and documenting new test cases and 50% doing regression testing (i.e. running old tests).” Let us delve a little further into what exploratory testing is all about.

Over the last few years exploratory testing has been recognized as a very powerful and interesting form of testing. Here a test case is designed and executed concurrently, unlike scripted testing (manual or automated), where test cases are created prior to actual testing. This form of testing does not really adhere to a pre-defined plan. Please don’t confuse it with “ad hoc” testing, which is looking for defects randomly.

The most important distinction between “ad hoc” testing and exploratory testing is that the former can be carried out by anyone, whereas exploratory testing is a thoughtful approach to testing, driven by logic. It is an intellectually challenging process where one is limited only by one’s own imagination and understanding of the software being tested. It provides enough scope to extend the reach of testing to areas that cannot easily be accommodated in a scripted test case.

James Bach, whose contributions and research works have been instrumental in establishing exploratory testing as a useful test methodology, has identified five key elements that an exploratory tester should focus on:

Product Exploration.
As you explore a product, make a note of its functions, their purpose, the types of data processed, and areas of potential instability. How well you comprehend the product and its underlying technologies, together with the time available, will determine how effectively you perform the exploration. As you explore, you should construct a mental model of how the product works.

Test Design.
You should decide on strategies related to the way you will test the product, observe results, and make inferences.

Test Execution.
This involves testing the product and observing its behavior. The information gathered should be used to formulate hypotheses about the functionality of the product. It’s also beneficial to document test ideas that occur to you while testing.

Heuristics.
These are a set of rules that help you define what is to be tested and how.

Reviewable Results.
The intention of exploratory testing is to produce results. Once you have produced deliverables that meet the specified requirements you can say that it is complete. From a QA perspective, the deliverables must be reviewable and defensible for certification.
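
One lightweight way to keep exploratory sessions reviewable is to capture each session as a small structured record of charter, areas covered, bugs, and test ideas. A minimal Python sketch follows; the field names and sample values are one plausible choice, not a prescribed format.

# Hypothetical sketch: a minimal record for an exploratory testing session,
# so that results stay reviewable. Field names and values are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExploratorySession:
    charter: str                                           # what the session set out to explore
    areas_covered: List[str] = field(default_factory=list)
    bugs_found: List[str] = field(default_factory=list)
    test_ideas: List[str] = field(default_factory=list)    # ideas to script later

session = ExploratorySession(
    charter="Explore file import with malformed CSV files",
    areas_covered=["import dialog", "error messages"],
    bugs_found=["crash on empty header row"],
    test_ideas=["add a scripted case for quoted delimiters"],
)
print(session)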

A good point to start exploratory testing is when we come across a new or unstable product, or software that has not been tested. It is also useful when you are aiming for fast results, trying to reproduce a bug, or trying to simplify defect reports. As the stability of a product increases, exploratory testing can be complemented by scripted testing, manual or automated.


While scripted tests give you a more detailed idea of the test coverage, especially during regression test cycles, exploratory testing can unearth new defects and extend your existing test cases. The knowledge gained during exploratory testing can be used to augment scripted testing by setting up a feedback mechanism to update existing test cases or create new ones. Despite the differences in approach, both these forms of testing are perfectly compatible and go hand in hand.


Wednesday, September 21, 2005

What You Don't Know May Help You

Some testers take it upon themselves to learn as much as possible about the inner workings of the system under test. This type of "gray box" testing is valuable, and most testers have the technical wherewithal to grasp much of what's going on behind the scenes. But it's important to recognize that sometimes "ignorance is strength" when it comes to finding problems that users will encounter.

When it comes to software testing, ignorance can actually be a source of strength. In particular, software testers who are not familiar with how the system has been put together can be in a better position to find certain bugs than testers who may be more familiar with the system internals.

Testers are often treated as second-class citizens, and this makes many of them eager to learn as much as they can about the software they are testing in order to prove their technical competence. This is not wrong, but we need to recognize, and get the maximum benefit from, the inherent unfamiliarity that testers have when they are new to a project.

It is well known that the value of independent testing comes partly from having a set of fresh eyes on the software product. Such testers are more apt to try things that might fail or to notice problems that others have overlooked. Testers who are new to a project, with the least knowledge about its mechanics, bring the freshest set of eyes.

Our software products will not be successful if we expect our users to have to understand the inner workings of our software. Therefore, testers without this understanding can teach us a valuable lesson about how our software will be used. There are several areas of testing in which "ignorant" testers can be helpful.

Usability
Testers unfamiliar with the inner workings of a product can often be very helpful in identifying usability problems. They might notice that the program flow is confusing or that terms used are hard to understand. Smart companies regularly put new employees through usability tests. This gives the company the benefit of the new employees' fresh perspectives, gives the employees insight into how the products they are going to help develop will be used, and puts everyone on notice that design is important. Even if you don't have such a program, you can still make sure that new testers are given a chance to express their observations regarding product usability.

Installation and Setup
Software installation and configuration are areas that are often handled late in the development process. As a consequence, early testers often must learn how to manually install and configure the software. Workarounds are common, perhaps requiring the manual copying of certain files or the manual creation of particular accounts or data sets. Testers who come to a project later won't have been trained to avoid these problems and are thus more likely to stumble across installation problems that had been deferred.

Error Handling
New testers are less likely to know how the software is "supposed" to be used and therefore are more likely to stumble across errors. It is important for software to handle errors appropriately (give appropriate notice of invalid input, provide options for recovery, and ensure that no data is lost). Error handling code is always a good place to look for defects. Informed testers will want to plan to exercise all error conditions, but the most important ones to check are the ones that the programmers didn't anticipate. "Uninformed testing" is one good strategy for finding them.
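
As a tiny illustration of a planned error-condition check (as opposed to the unanticipated ones a fresh tester stumbles into), here is a minimal Python sketch of a negative test: feed invalid input, expect a clear error, and confirm no data is lost. The withdraw function is an invented example, not something from this post.

# Hypothetical sketch: exercising an error condition deliberately.
# 'withdraw' is an invented example function used only for illustration.
def withdraw(balance, amount):
    """Return the new balance, rejecting invalid amounts."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

def test_invalid_withdrawal_is_rejected_without_losing_data():
    balance = 100
    try:
        withdraw(balance, 500)           # invalid input: more than the balance
    except ValueError as err:
        assert "invalid" in str(err)     # appropriate notice of invalid input
    assert balance == 100                # the original data is untouched

test_invalid_withdrawal_is_rejected_without_losing_data()
print("error-handling check passed")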

Program Flow
Early defects in software can train testers to avoid certain paths. I remember one product I tested where problems were likely if you closed dialogs with the x-button at the upper-right corner, rather than using the close button at the bottom. This happened often enough that the experienced testers never used the x-button. With time, many of these problems were fixed, but the testers still avoided the x-button as a matter of trained behavior. A new tester, untrained to avoid the x-button, found additional instances of these problems that had been missed.


Documentation
If you already understand the system, it is hard to read documentation as if you don't. This is why all new testers on a project should be expected to review documentation. As a new tester on a project, I once found a bad documentation bug in the product tutorial. The problem was with the instructions for setting up some data in an early chapter. They were incorrect, but the error was of no consequence until several chapters later, when further processing wouldn't give you the documented results. Even though this version of the documentation was in use in the field, everyone internally thought that someone else knew about the problem and was taking care of it.

It is certainly true that "ignorant" testers are more likely to report bugs that won't actually be fixed, either because the error is theirs or because they are simply reporting a known design problem that is not going to be fixed. Nonetheless, the value of the fresh perspective they bring makes it worth having to sort through their bug reports. It also sets up a healthier dynamic than when testers try to anticipate which problems are actually valid before they report them.

This way we get the most out of the staff we have, without feeling that they need to know more than they do in order to make valuable contributions. That said, we can't argue against learning and increasing our knowledge. We can learn a lot about the internal workings of the systems we test, and this "gray box" information is incredibly valuable for developing sound testing strategies.


When we get new testers on staff, let's make sure to benefit from the fresh eyes they bring and recognize the value of what they don't know.


Tuesday, September 20, 2005

Checklist for conducting Unit Tests

- Is the number of input parameters equal to number of arguments?
- Do parameter and argument attributes match?
- Do parameter and argument units system match?
- Is the number of arguments transmitted to called modules equal to number of parameters?
- Are the attributes of arguments transmitted to called modules equal to attributes of parameters?
- Is the units system of arguments transmitted to called modules equal to units system of parameters?
- Are the number of attributes and the order of arguments to built-in functions correct?
- Are any references to parameters not associated with current point of entry?
- Have input-only arguments been altered?
- Are global variable definitions consistent across modules?
- Are constraints passed as arguments?

When a module performs external I/O, additional interface tests must be conducted:
- File attributes correct?
- OPEN/CLOSE statements correct?
- Format specification matches I/O statement?
- Buffer size matches record size?
- Files opened before use?
- End-of-file conditions handled?
- I/O errors handled?
- Any textual errors in output information?

The local data structure for a module is a common source of errors. Test cases should be designed to uncover errors in the following categories:
- improper or inconsistent typing
- erroneous initialization or default values
- incorrect (misspelled or truncated) variable names
- inconsistent data types
- underflow, overflow and addressing exceptions

From a strategic point of view, the following questions should be addressed:
- Has the component interface been fully tested?
- Have local data structures been exercised at their boundaries?
- Has the cyclomatic complexity of the module been determined?
- Have all independent basis paths been tested?
- Have all loops been tested appropriately?
- Have data flow paths been tested? Have all error handling paths been tested?
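
As an illustration of the last set of questions, here is a minimal Python unit test sketch that touches a boundary value and an error-handling path. The average function is an invented example, and cyclomatic complexity (V(G) = E - N + 2P over the control-flow graph) would normally be measured with a separate tool rather than by hand.

# Hypothetical sketch: a unit test touching a boundary value and an error path.
# 'average' is an invented function used only to illustrate the checklist.
import unittest

def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    if not values:
        raise ValueError("cannot average an empty list")
    return sum(values) / len(values)

class AverageTests(unittest.TestCase):
    def test_single_element_boundary(self):
        self.assertEqual(average([5]), 5)        # smallest valid input

    def test_typical_case(self):
        self.assertEqual(average([1, 2, 3]), 2)

    def test_empty_input_error_path(self):
        with self.assertRaises(ValueError):      # error-handling path
            average([])

if __name__ == "__main__":
    unittest.main()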


Monday, September 19, 2005

Spike Testing

Spike testing is a form of load/performance testing. Spike tests use real-world distributions and user communities, but with extremely fast ramp-up and ramp-down times. It is common to execute stress tests that ramp up to 100% or 150% of the expected peak user load in a matter of minutes rather than over about an hour.
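
Here is a minimal Python sketch of the difference between a gradual ramp and a spike ramp, expressed as a virtual-user schedule; the durations, user counts, and step count are arbitrary illustrative values.

# Hypothetical sketch: building a virtual-user ramp schedule for a spike test.
# Peak users, ramp time, and step count are arbitrary illustrative values.
def ramp_schedule(peak_users, ramp_seconds, steps=10):
    """Return (time_offset_seconds, virtual_users) pairs ramping linearly to peak."""
    return [(round(i * ramp_seconds / steps), round(i * peak_users / steps))
            for i in range(1, steps + 1)]

normal_ramp = ramp_schedule(peak_users=1000, ramp_seconds=3600)  # reach peak in ~1 hour
spike_ramp = ramp_schedule(peak_users=1500, ramp_seconds=180)    # 150% of peak in 3 minutes

print("normal:", normal_ramp[:3], "...")
print("spike :", spike_ramp[:3], "...")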


Saturday, September 17, 2005

Need of Test Plan and Test Cases for testing

A test plan identifies the test activities and defines the objective, approach, and schedule of the intended testing activities, while a test case comprises the test procedure, test conditions, and expected result.

Writing a test plan early in the project lifecycle, and having it peer reviewed by the development team, generally helps reduce the workload later in the project lifecycle. This allows testers to quickly and unambiguously complete the majority of the testing required, which provides more time for "ad hoc", "real world", and "user scenario" testing of the product.

A test case is composed of a test condition, an expected result, and a procedure for performing the test. Test cases can be performed either in combination with other test cases or in isolation.

A test case is the difference between saying that something seems to be working okay and proving that a set of specific tasks is known to be working correctly.


The test plan talks about "what has to be tested", while test cases talk about "how to test"; that is why both documents are of equal importance in testing.


Friday, September 16, 2005

Professional Characteristics of a good SQA Engineer

Professional characteristics of a good QA (Quality Assurance) Engineer:

• Understanding of business approach and goals of the organization
• Understanding of entire software development process
• Strong desire for quality
• Establish and enforce SQA methodologies, processes and Testing Strategies
• Judgment skills to assess high-risk areas of application
• Communication with Analysis and Development team
• Report defects with full evidence
• Take preventive actions
• Take actions for Continuous improvement
• Reports to higher management
• Say No when Quality is insufficient
• Work Management
• Meet deadlines



Thursday, September 15, 2005

Test Case Checklist

Here is a checklist for well-documented test cases (a sketch of one such test case, captured as structured data, follows the checklist):

Quality Attributes
- Accurate - tests what the description says it will test
- Economical - has only the steps needed for its purpose
- Repeatable, self-standing - same results no matter who tests it
- Appropriate - for both immediate and future testers
- Traceable - to a requirement
- Self-cleaning - returns the test environment to a clean state

Structure and testability
- Has a name and number
- Has a stated purpose that includes what requirement is being tested
- Has a description of the method of testing
- Specifies setup information - environment, data, prerequisite tests, security access
- Has actions and expected results
- States if any proofs, such as reports or screen grabs, need to be saved
- Leaves the testing environment clean
- Uses active case language
- Does not exceed 15 steps
- Matrix does not take longer than 20 minutes to test
- Automated script is commented with purpose, inputs, expected results
- Setup offers alternative to prerequisite tests, if possible
- Is in correct business scenario order with other tests

Configuration management
- Employs naming and numbering conventions
- Saved in specified formats, file types
- Is versioned to match software under test
- Includes test objects needed by the case, such as databases
- Stored as read-only
- Stored with controlled access
- Stored where network backup operates
- Archived off-site
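
As an illustration of the structure and testability items above, here is a minimal Python sketch of a test case captured as structured data; the field names and values are invented, not a mandated template.

# Hypothetical sketch: one test case as structured data, reflecting the
# checklist above (name/number, purpose, setup, steps, expected results).
test_case = {
    "id": "TC-042",
    "name": "Login rejects an invalid password",
    "requirement": "REQ-AUTH-003",                  # traceable to a requirement
    "purpose": "Verify that the login screen rejects a wrong password",
    "setup": {"environment": "staging", "data": "existing user 'demo'"},
    "steps": [
        {"action": "Open the login page",
         "expected": "Login form is displayed"},
        {"action": "Enter user 'demo' with a wrong password and submit",
         "expected": "An 'invalid credentials' message is shown; no session is created"},
    ],
    "cleanup": "Log out if a session exists",        # leaves the environment clean
    "proofs": ["screenshot of the error message"],
}
print(f"{test_case['id']}: {test_case['name']} ({len(test_case['steps'])} steps)")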


Tuesday, September 13, 2005

Smoke Testing

Smoke testing is a relatively simple check to see whether the product "smokes" when it runs. Smoke testing is also sometimes known as ad hoc testing, i.e. testing without a formal test plan.

With many projects, smoke testing is carried out in addition to formal testing. If smoke testing is carried out by a skilled tester, it can often find problems that are not caught during regular testing. Sometimes, if testing occurs very early or very late in the software development life cycle, this can be the only kind of testing that can be performed.

Smoke testing, by definition, is not exhaustive, but, over time, you can increase your coverage of smoke testing.

A common practice at Microsoft, and some other software companies, is the daily build and smoke test process. This means that every file is compiled, linked, and combined into an executable every single day, and then the software is smoke tested.

Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis, and improves morale. Smoke testing does not have to be exhaustive, but should expose any major problems. Smoke testing should be thorough enough that, if it passes, the tester can assume the product is stable enough to be tested more thoroughly. Without smoke testing, the daily build is just a time wasting exercise. Smoke testing is the sentry that guards against any errors in development and future problems during integration. At first, smoke testing might be the testing of something that is easy to test. Then, as the system grows, smoke testing should expand and grow, from a few seconds to 30 minutes or more.
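
Here is a minimal Python sketch of the idea behind a daily build and smoke test, written as a small driver script; the build command and the smoke checks are placeholders, not any real project's commands.

# Hypothetical sketch: a daily build-and-smoke-test driver.
# The build command and the smoke checks are placeholders.
import subprocess
import sys

SMOKE_CHECKS = [
    ["./app", "--version"],      # does the executable start at all?
    ["./app", "--self-test"],    # hypothetical built-in quick check
]

def daily_build_and_smoke():
    build = subprocess.run(["make", "all"])          # placeholder build step
    if build.returncode != 0:
        sys.exit("build failed - nothing to smoke test")
    for check in SMOKE_CHECKS:
        result = subprocess.run(check)
        if result.returncode != 0:
            sys.exit(f"smoke test failed: {' '.join(check)}")
    print("build is stable enough for deeper testing")

if __name__ == "__main__":
    daily_build_and_smoke()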


Monday, September 12, 2005

Difference between System Testing and Integration Testing

System testing is a higher-level activity, while integration testing is a lower-level one.

Integration testing is completed first, not system testing. In other words, system testing is started upon completion of integration testing, and not vice versa.

For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, on the other hand, the complete system is configured in a controlled environment, and test cases are developed to simulate real life scenarios that occur in a simulated real life test environment.

The purpose of integration testing is to ensure that distinct components of the application work together in accordance with customer requirements. The purpose of system testing, on the other hand, is to validate an application's accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.
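
A minimal Python sketch of the distinction, using two invented components: the integration test exercises the interface between them, while the system-style test runs an end-to-end scenario against the assembled pieces.

# Hypothetical sketch: integration test vs system-level scenario test.
# 'PriceService' and 'Checkout' are invented components for illustration.
class PriceService:
    def price(self, item):
        return {"book": 10.0, "pen": 2.0}[item]

class Checkout:
    def __init__(self, price_service):
        self.price_service = price_service

    def total(self, items):
        return sum(self.price_service.price(i) for i in items)

def test_integration_checkout_uses_price_service():
    # Exercises the interface between the two components.
    assert Checkout(PriceService()).total(["book", "pen"]) == 12.0

def test_system_scenario_customer_buys_three_items():
    # Simulates a real-life scenario against the assembled system.
    checkout = Checkout(PriceService())
    basket = ["book", "book", "pen"]
    assert checkout.total(basket) == 22.0

test_integration_checkout_uses_price_service()
test_system_scenario_customer_buys_three_items()
print("both checks passed")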


Saturday, September 10, 2005

How can software QA processes be implemented without stifling productivity?

Implement QA processes slowly over time.

Use consensus to reach agreement on processes and adjust and experiment as an organization grows and matures. Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease and there will be improved focus and less wasted effort.

At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes with automated tracking and reporting, minimize the time required in meetings, and promote training as part of the QA process. However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario is that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming irate customers.


Friday, September 09, 2005

What makes a good QA/Test Manager?

Good QA/Test Managers are

- familiar with the software development process
- able to maintain the enthusiasm of their team and promote a positive atmosphere
- able to promote teamwork to increase productivity
- able to promote cooperation between Software and Test/QA Engineers
- equipped with the people skills needed to promote improvements in QA processes
- able to withstand pressure and say *no* to other managers when quality is insufficient or QA processes are not being adhered to
- able to communicate with technical and non-technical people
- able to run meetings and keep them focused.


Thursday, September 08, 2005

Do automated testing tools make testing easier?

Yes and no. For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile.

A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret. If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by simply playing back the recorded actions and comparing them with the logged results in order to check the effects of the change.

One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes very time-consuming to continuously update the scripts. Another problem is the interpretation of the results (screens, data, logs, etc.), which can also be time-consuming. You CAN learn to use automated testing tools with little or no outside help.
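
Here is a minimal Python sketch of the record/playback idea: a recorded script is just data describing UI actions, and playback replays it and compares what happens against a previously logged baseline. The action format and the toy apply_action function are invented for illustration.

# Hypothetical sketch of record/playback: a recorded script is replayed and
# the observed results are compared with a previously logged baseline.
# The action format and the toy 'apply_action' function are invented.
recorded_script = [
    ("click", "File>New"),
    ("type", "hello"),
    ("click", "File>Save"),
]
baseline_log = ["new document opened", "text entered: hello", "document saved"]

def apply_action(action, target):
    """Stand-in for driving the real application under test."""
    responses = {
        ("click", "File>New"): "new document opened",
        ("type", "hello"): "text entered: hello",
        ("click", "File>Save"): "document saved",
    }
    return responses[(action, target)]

playback_log = [apply_action(a, t) for a, t in recorded_script]
mismatches = [i for i, (got, want) in enumerate(zip(playback_log, baseline_log))
              if got != want]
print("playback matches baseline" if not mismatches
      else f"differences at steps {mismatches}")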


Monday, September 05, 2005

Installation Testing

Installation testing is often the most under-tested area. This type of testing is performed to ensure that all installed features and options function properly. It is also performed to verify that all necessary components of the application are, indeed, installed.

Installation testing should cover the following points:

1. Check whether the installer verifies the presence of dependent software / patches, say Service Pack 3.
2. The installer should check for an existing version of the product on the target machine; for example, a previous version should not be installed over a newer one.
3. The installer should offer a default installation path, say “C:\programs\”.
4. The installer should allow the user to install at a location other than the default installation path.
5. Check whether the product can be installed “over the network”.
6. Installation should start automatically when the CD is inserted.
7. The installer should offer Remove / Repair options.
8. When uninstalling, check that all registry keys, files, DLLs, shortcuts, and ActiveX components are removed from the system (see the sketch after this list).
9. Try to install the software without administrative privileges (log in as guest).
10. Try installing on different operating systems.
11. Try installing on a system with a non-compliant configuration, such as insufficient memory / RAM / HDD.
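
As a minimal sketch of the leftover-artifact check in point 8: after uninstalling, look for files and registry keys that should have been removed. The install path and registry key below are invented examples, and the registry check assumes a Windows machine.

# Hypothetical sketch: check for leftovers after an uninstall.
# The install path and registry key below are invented examples.
import os

INSTALL_DIR = r"C:\Program Files\SampleApp"        # assumed install location
REGISTRY_KEY = r"SOFTWARE\SampleVendor\SampleApp"  # assumed registry key

leftovers = []

if os.path.isdir(INSTALL_DIR) and os.listdir(INSTALL_DIR):
    leftovers.append(f"files still present in {INSTALL_DIR}")

try:
    import winreg                                   # Windows-only module
    try:
        winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, REGISTRY_KEY)
        leftovers.append(f"registry key still present: {REGISTRY_KEY}")
    except FileNotFoundError:
        pass                                        # key removed, as expected
except ImportError:
    print("registry check skipped (not running on Windows)")

print("uninstall is clean" if not leftovers
      else "leftovers found: " + "; ".join(leftovers))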


Friday, September 02, 2005

Regression Testing

Regression testing, as the name suggests, is used to check the effect of changes made in the code.
Most of the time the testing team is asked to check last-minute changes in the code just before a release to the client; in this situation the team needs to check only the affected areas.
So, in short, for regression testing the testing team should get input from the development team about the nature and extent of the change, so that the team can first check the fix and then the side effects of the fix.
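
Here is a minimal Python sketch of using that input from the development team to select the regression subset; the module-to-test-case mapping is an invented example.

# Hypothetical sketch: pick regression test cases from the areas the
# developers report as changed. The mapping below is invented.
test_cases_by_area = {
    "login":   ["TC-001 valid login", "TC-002 wrong password", "TC-003 lockout"],
    "billing": ["TC-101 invoice totals", "TC-102 tax calculation"],
    "reports": ["TC-201 monthly report"],
}

changed_areas = ["billing"]      # input from the development team about the fix

selected = [tc for area in changed_areas for tc in test_cases_by_area.get(area, [])]
print("regression subset:", selected)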

In fact, regression testing is the area where the most automation can be done, because the same set of test cases will be run on different builds multiple times.
But the extent of automation depends on whether the test cases will remain applicable over time; if the automated test cases do not remain applicable for long, test engineers will end up wasting time on automation without getting enough out of it.

What is Regression testing?
Regression testing is retesting unchanged segments of an application. It involves rerunning tests that have been previously executed to ensure that the same results can be achieved now as were achieved when the segment was last tested. It is the selective retesting of a software system that has been modified, to ensure that any bugs have been fixed, that no other previously working functions have failed as a result of the repairs, and that newly added features have not created problems with previous versions of the software. Also referred to as verification testing, regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.

What do you do during Regression testing?
- Rerunning of previously conducted tests
- Reviewing previously prepared manual procedures
- Comparing the current test results with the previously executed test results

What are the tools available for Regression testing?
Although the process is simple (the test cases that have already been prepared can be reused and the expected results are known), if it is not automated it can be a very time-consuming and tedious operation.

Some of the tools available for regression testing are:
Record and playback tools - previously recorded scripts can be rerun to verify whether the same set of results is obtained, e.g. Rational Robot.

What are the end goals of Regression testing?
- To ensure that the unchanged system segments function properly
- To ensure that the previously prepared manual procedures remain correct after the changes have been made to the application system
- To verify that the data dictionary of data elements that have been changed is correct


Thursday, September 01, 2005

Content Management Testing

‘Content Management’ has gained prominence as web applications have become a major part of our lives. What is content management? As the name denotes, it is managing the content. How does it work? Let us take a common example: you are in China and you want to open the Chinese version of Yahoo!. When you choose the Chinese version on the main Yahoo! page, you see the entire content in Chinese. Yahoo! would strategically maintain various servers for various languages; when you choose a particular version of the page, the request is redirected to the server that manages the Chinese content. Content management systems help in placing content for various purposes and in serving it when a request comes in.

Content management testing involves:
1. Testing the distribution of the content.
2. Request and response times.
3. Content display on various browsers and operating systems.
4. Load distribution on the servers.

In fact, all performance-related testing should be performed for each version of the web application that uses the content management servers.
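
As a minimal sketch of one such check, written in Python: request each localized version and verify that the expected language is served within a response-time budget. The URLs, the Content-Language check, and the time limit are invented illustrative values.

# Hypothetical sketch: check that each localized version serves the expected
# language within a response-time budget. URLs and limits are invented.
import time
import urllib.request

LOCALIZED_PAGES = {
    "zh": "https://example.com/zh/",   # hypothetical Chinese version
    "en": "https://example.com/en/",   # hypothetical English version
}
MAX_SECONDS = 2.0

for lang, url in LOCALIZED_PAGES.items():
    start = time.time()
    with urllib.request.urlopen(url) as response:
        response.read()
        elapsed = time.time() - start
        served_lang = response.headers.get("Content-Language", "unknown")
    ok = served_lang.startswith(lang) and elapsed <= MAX_SECONDS
    print(f"{lang}: served={served_lang}, {elapsed:.2f}s -> {'OK' if ok else 'CHECK'}")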
