Saturday, 9 January 2010
Software Testing life cycle
Test Plan Preparation
The software test plan is the primary means by which software testers communicate to the product development team what they intend to do. The purpose of the software test plan is to prescribe the scope, approach, resources, and schedule of the testing activities; to identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.
The test plan is simply a by-product of the detailed planning process that’s undertaken to create it. It’s the planning that matters, not the resulting documents. The ultimate goal of the test planning process is communicating the software test team’s intent, its expectations, and its understanding of the testing that’s to be performed.
The following are the important topics that help in the preparation of a test plan.
High-Level Expectations
The first topics to address in the planning process are the ones that define the test team's high-level expectations. They are fundamental topics that must be agreed to by everyone on the project team, but they are often overlooked. They might be considered "too obvious" and assumed to be understood by everyone, but a good tester knows never to assume anything.
People, Places and Things
The test plan needs to identify the people working on the project, what they do, and how to contact them. The test team will likely work with all of them, and knowing who they are and how to contact them is very important.
Similarly, where documents are stored, where the software can be downloaded from, where the test tools are located, and so on need to be identified.
Inter-Group Responsibilities
Inter-group responsibilities identify tasks and deliverables that potentially affect the test effort. The test team's work is driven by many other functional groups – programmers, project managers, technical writers, and so on. If these responsibilities aren't planned out, the project, and specifically the testing, can descend into chaos, with important tasks being forgotten.
Test phases
To plan the test phases, the test team will look at the proposed development model and decide whether unique phases, or stages, of testing should be performed over the course of the project. The test planning process should identify each proposed test phase and make each phase known to the project team. This process often helps the entire team form and understand the overall development model.
Test strategy
The test strategy describes the approach that the test team will use to test the software, both overall and in each phase. Deciding on the strategy is a complex task, one that needs to be made by very experienced testers, because it can determine the success or failure of the test effort.
Bug Reporting
Exactly what process will be used to manage the bugs needs to be planned so that each and every bug is tracked, from when it’s found to when it’s fixed – and never, ever forgotten.
Metrics and Statistics
Metrics and statistics are the means by which the progress and success of the project, and of the testing, are tracked. The test planning process should identify exactly what information will be gathered, what decisions will be made based on it, and who will be responsible for collecting it.
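As a concrete illustration, the kind of data a plan might commit to collecting can be as simple as counts of test outcomes and a derived pass rate. This is a minimal sketch with hypothetical result values, not a prescribed set of metrics:

```python
# Tally test outcomes and compute the pass rate over executed tests.
# The result categories ("pass", "fail", "blocked") are illustrative.
from collections import Counter

results = ["pass", "pass", "fail", "pass", "blocked", "fail"]  # sample data

counts = Counter(results)
executed = counts["pass"] + counts["fail"]          # blocked tests not executed
pass_rate = counts["pass"] / executed if executed else 0.0

print(counts["pass"], counts["fail"], round(pass_rate, 2))  # → 3 2 0.6
```

Who collects these numbers, and how often, is exactly what the plan should spell out.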
Risks and Issues
A common and very useful part of test planning is to identify potential problem or risky areas of the project – ones that could have an impact on the test effort.
Test Case Design
The test case design specification refines the test approach and identifies the features to be covered by the design and its associated tests. It also identifies the test cases and test procedures, if any, required to accomplish the testing, and specifies the feature pass or fail criteria. The purpose of the test design specification is to organize and describe the testing that needs to be performed on a specific feature.
The following topics address this purpose and should be part of the test design specification that is created:
Test case ID or identification
A unique identifier that can be used to reference and locate the test design specification. The specification should also reference the overall test plan and contain pointers to any other plans or specifications that it references.
Test Case Description
It is a description of the software feature covered by the test design specification – for example, "the addition function of the calculator," "font size selection and display in WordPad," or "video card configuration testing of QuickTime."
Test case procedure
It is a description of the general approach that will be used to test the features. It should expand on the approach, if any, listed in the test plan, describe the technique to be used, and explain how the results will be verified.
Test case Input or Test Data
It is the input data to be exercised by the test case. The input may take any form. Different inputs can be tried for the same test case to verify whether the data entered is handled correctly.
Expected result
It describes exactly what constitutes a pass and a fail of the tested feature – the result that is expected from the given input.
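The fields above can be sketched as a simple data structure. This is a minimal illustration; the field names and the example values are assumptions, not part of any standard:

```python
# A sketch of the test design fields described above, expressed as a
# small data structure. Field names are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    case_id: str          # unique identifier, references the test plan
    description: str      # feature covered by the test
    procedure: str        # general approach and how results are verified
    test_data: List[str]  # inputs exercised by the test case
    expected: str         # exactly what constitutes a pass

tc = TestCase(
    case_id="CALC-ADD-001",
    description="the addition function of the calculator",
    procedure="enter two operands, press '+', verify the displayed sum",
    test_data=["2", "3"],
    expected="5",
)
print(tc.case_id)  # → CALC-ADD-001
```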
Test Execution and Test Log Preparation
After test case design, each and every test case is executed and the actual result is obtained. The actual result is then compared with the expected result recorded at the design stage: if the actual and expected results match, the test is passed; otherwise it is treated as failed.
Now the test log is prepared, which consists of all the data that were recorded, including whether each test passed or failed. It records each and every test case so that it will be useful at the time of revision.
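The execute-and-log step described above can be sketched as follows. The feature under test, the test cases, and the log format here are all hypothetical stand-ins:

```python
# Run each test case, compare actual to expected, and record every
# outcome so the log can be reused at revision time.
def add(a, b):  # stand-in for the feature under test
    return a + b

test_cases = [
    {"id": "TC-1", "input": (2, 3), "expected": 5},
    {"id": "TC-2", "input": (-1, 1), "expected": 0},
]

test_log = []
for tc in test_cases:
    actual = add(*tc["input"])
    status = "PASS" if actual == tc["expected"] else "FAIL"
    test_log.append({"id": tc["id"], "actual": actual, "status": status})

for entry in test_log:
    print(entry["id"], entry["status"])
```

Note that failed cases are logged alongside passed ones; the log is only useful at revision time if it is complete.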
Defect Tracking
A defect can be defined in one of two ways. From the producer's viewpoint, a defect is a deviation from specifications, whether something is missing, wrong, etc. From the customer's viewpoint, a defect is anything that causes customer dissatisfaction, whether in the requirements or not; this is known as "fitness for use". It is critical that defects identified at each stage of the project life cycle be tracked to resolution.
Defects are recorded for the following major purposes:
To correct the defect
To report status of the application
To gather statistics used to develop defect expectations in future applications
To improve the software development process
Most project teams utilize some type of tool to support the defect tracking process. This tool could be as simple as a white board or a table created and maintained in a word processor, or one of the more robust tools available on the market today, such as Mercury's TestDirector. Tools marketed for this purpose usually come with some number of customizable fields for tracking project-specific data in addition to the basics. They also provide advanced features such as standard and ad-hoc reporting, e-mail notification to developers and/or testers when a problem is assigned to them, and graphing capabilities.
At a minimum, the tool selected should support the recording and communication of significant information about a defect. For example, a defect log could include:
Defect ID number
Descriptive defect name and type
Source of defect – test case or other source
Defect severity
Defect priority
Defect status (e.g. open, fixed, closed, user error, design, and so on) – more robust tools provide a status history for the defect
Date and time tracking for either the most recent status change, or for each change in the status history
Detailed description, including the steps necessary to reproduce the defect
Component or program where defect was found
Screen prints, logs, etc. that will aid the developer in resolution process
Stage of origination
Person assigned to research and/or correct the defect
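A minimal defect log entry carrying the fields listed above might look like the sketch below. The field names, status values, and example data are all illustrative assumptions:

```python
# A sketch of a minimal defect record; a real tracking tool would add
# status history, timestamps, and attachments.
from dataclasses import dataclass

@dataclass
class Defect:
    defect_id: int
    name: str
    source: str        # test case or other source
    severity: int      # 1 = most severe
    priority: int      # 1 = fix first
    status: str        # open / fixed / closed / user error ...
    description: str   # including steps to reproduce
    component: str     # component or program where found
    assigned_to: str

d = Defect(
    defect_id=101,
    name="Crash on save",
    source="TC-17",
    severity=1,
    priority=1,
    status="open",
    description="App crashes when saving an empty file; steps: ...",
    component="editor",
    assigned_to="dev-team",
)
print(d.defect_id, d.status)  # → 101 open
```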
Severity versus Priority
The severity of a defect should be assigned objectively by the test team based on predefined severity descriptions. For example, a "severity one" defect may be defined as one that causes data corruption, a system crash, security violations, etc. In large projects, it may also be necessary to assign a priority to the defect, which determines the order in which defects should be fixed. The priority assigned to a defect is usually more subjective, based upon input from users regarding which defects are most important to them and therefore should be fixed first.
It is recommended that severity levels be defined at the start of the project so that they are consistently assigned and understood by the team. This foresight can help test teams avoid the common disagreements with development teams about the criticality of a defect.
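One way severity and priority work together is in ordering the fix queue. The sketch below assumes a lower-number-first convention, with priority breaking ties within a severity level; the convention and the sample data are assumptions, not a standard:

```python
# Order defects for fixing: most severe first, then highest priority
# within each severity level (1 = most severe / most urgent).
defects = [
    {"id": "D-3", "severity": 2, "priority": 1},
    {"id": "D-1", "severity": 1, "priority": 2},
    {"id": "D-2", "severity": 1, "priority": 1},
]

fix_order = sorted(defects, key=lambda d: (d["severity"], d["priority"]))
print([d["id"] for d in fix_order])  # → ['D-2', 'D-1', 'D-3']
```

Note that D-1 outranks D-3 despite its lower priority number being tied elsewhere: severity, the objective measure, dominates, and priority only orders defects within a severity level.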
Some general principles
The primary goal is to prevent defects. Wherever this is not possible or practical, the goals are to both find the defect as quickly as possible and minimize the impact of the defect.
The defect management process, like the entire software development process, should be risk driven, i.e., strategies, priorities and resources should be based on an assessment of the risk and the degree to which the expected impact of risk can be reduced.
Defect measurement should be integrated into the development process and be used by the project team to improve the development process. In other words, information on defects should be captured at the source as a natural by-product of doing the job. People unrelated to the project or system should not do it.
As much as possible, the capture and analysis of the information should be automated. There should be a document that includes a list of tools that have defect management capabilities and can be used to automate some of the defect management processes.
Defect information should be used to improve the process. This, in fact, is the primary reason for gathering defect information.
Imperfect or flawed processes cause most defects. Thus, to prevent defects, the process must be altered.