Monday, 11 January 2010

Working with Arrays in QTP





What are Arrays?


An array is a contiguous area in memory referred to by a common name. It is a series of variables of the same data type and is used to store related data values. VBScript allows you to store a group of related values together under a single name; each value can then be accessed by its index number.

An array is made up of two parts, the array name and the array subscript. The subscript indicates the highest index value for the elements within the array. Each element of an array has a unique identifying index number by which it can be referenced. VBScript creates zero-based arrays, where the first element of the array has an index value of zero.

Declaring Arrays

An array must be declared before it can be used. Depending on their scope, arrays are of two types:

· Local Arrays: A local array is available only within the function or procedure where it is declared.
· Global Arrays: A global array is an array that can be used by all functions and procedures. It is declared at the beginning of the VBScript code.

The Dim statement is used to declare arrays. The syntax for declaring an array is as follows:

Dim ArrayName(SubscriptValue)

where ArrayName is a unique name for the array and SubscriptValue is a numeric value indicating the highest index of that array dimension. Because indexing starts at zero, an array declared with a subscript of n holds n + 1 elements.
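For example, the following minimal sketch (the array name and values are purely illustrative) declares a three-element array and accesses an element by its index:

Dim arrCities(2)            ' highest index is 2, so the array holds 3 elements (indexes 0, 1, 2)
arrCities(0) = "Delhi"
arrCities(1) = "Mumbai"
arrCities(2) = "Chennai"
MsgBox arrCities(1)         ' displays "Mumbai"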

Static and Dynamic Arrays:
VBScript provides flexibility for declaring arrays as static or dynamic.

A static array has a specific number of elements; its size cannot be altered at run time. A dynamic array can be resized at any time using the ReDim statement, which makes dynamic arrays useful when the size of the array cannot be determined in advance.
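As a rough sketch (the identifier names below are illustrative), a static array is declared with a fixed size, while a dynamic array is declared empty and resized at run time with ReDim; adding the Preserve keyword keeps the existing values:

Dim strMonths(11)               ' static array: always 12 elements, cannot be resized

Dim strTesters()                ' dynamic array: no size fixed at declaration
ReDim strTesters(1)             ' sized to 2 elements at run time
strTesters(0) = "Asha"
strTesters(1) = "Ravi"
ReDim Preserve strTesters(2)    ' resized to 3 elements, existing values are kept
strTesters(2) = "John"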

Ways to work with Arrays

The easiest way to create an array is simply to declare it, as follows:
Dim strCustomers()

Another method is to define a variable and then set it as an array afterwards, using the Array function:

Dim strStaff
strStaff = Array("Alan","Brian","Chris")


Yet another way is to use the Split function to create and populate the array:

Dim strProductArray
strProductArray = "Keyboards,Laptops,Monitors"
strProductArray = Split(strProductArray, ",")


To iterate through the contents of an array by index, you can use a For...Next loop together with the LBound and UBound functions:


Dim strProductArray
strProductArray = "Keyboards,Laptops,Monitors"
strProductArray = Split(strProductArray, ",")
Dim i
For i = LBound(strProductArray) To UBound(strProductArray)
    MsgBox strProductArray(i)
Next

This will iterate through the array backwards:

Dim strProductArray, i
strProductArray = "Keyboards,Laptops,Monitors"
strProductArray = Split(strProductArray, ",")
For i = UBound(strProductArray) To LBound(strProductArray) Step -1
    MsgBox strProductArray(i)
Next

To add extra data to an array, use ReDim Preserve:

Dim strProductArray, i
strProductArray = "Keyboards,Laptops,Monitors"
strProductArray = Split(strProductArray, ",")
For i = UBound(strProductArray) To LBound(strProductArray) Step -1
    MsgBox strProductArray(i)        ' original three elements
Next
ReDim Preserve strProductArray(3)    ' grow the array to four elements, keeping the existing values
strProductArray(3) = "Mice"
For i = UBound(strProductArray) To LBound(strProductArray) Step -1
    MsgBox strProductArray(i)        ' now includes "Mice"
Next

To iterate through the contents of an array without an index counter, you can use a For Each loop:

Dim strProductArray
strProductArray = "Keyboards,Laptops,Monitors"
strProductArray = Split(strProductArray, ",")
Dim strItem
For Each strItem In strProductArray
    MsgBox strItem
Next

To combine the contents of an array into a single string, use the Join function:

Dim strProductArray
strProductArray = "Keyboards,Laptops,Monitors"
strProductArray = Split(strProductArray, ",")
Msgbox Join(strProductArray, ",")

To clear the contents of an array, use the Erase statement:

Dim strProductArray
strProductArray = "Keyboards,Laptops,Monitors"
strProductArray = Split(strProductArray, ",")
MsgBox Join(strProductArray, ",")   ' displays "Keyboards,Laptops,Monitors"
Erase strProductArray               ' the dynamic array is now cleared; re-dimension it with ReDim before using it again

Virtual Objects In QTP

Virtual objects are used when QTP fails to identify an object in the application. The feature lets you map an area of the application that contains a non-standard object to a standard test object class, so that the methods of that class can be applied to it. You may wonder why QTP does not recognise an object in the first place; it mainly happens because the proper add-ins are not installed or the control is non-standard. Using the Virtual Object Wizard we can map such non-standard objects to standard object classes.

When you want QuickTest to recognize virtual objects during recording, ensure that the "Disable recognition of virtual objects while recording" check box in the General tab of the Options dialog box is cleared (Tools > Options > General).

A group of virtual objects stored in the Virtual Object Manager under a descriptive name is known as a virtual object collection.

Virtual object collection definitions are saved with a .vot extension under the QTP\Dat\VoTemplate folder.
If you want to copy a virtual object collection from one computer to another, copy the .vot file and paste it in the same location on the other machine.

Virtual objects can be defined only for objects on which you can click or double-click.

QuickTest identifies a virtual object according to its boundaries. Marking an object’s boundaries specifies its size and position on a Web page or application window. When you assign a test object as the parent of your virtual object, you specify that the coordinates of the virtual object boundaries are relative to that parent object. When you record a test, QuickTest recognizes the virtual object within the parent object and adds it as a test object in the object repository so that QuickTest can identify the object during the run session. QuickTest also recognizes the virtual object as a test object when you add it manually to the object repository.

Disadvantage: When you define a virtual object through the wizard, you specify the coordinates of the virtual object's boundaries, so the virtual object loses its identification when the page or window is resized.

Saturday, 9 January 2010

Software Testing life cycle



Test Plan Preparation
The software test plan is the primary means by which software testers communicate to the product development team what they intend to do. The purpose of the software test plan is to prescribe the scope, approach, resources, and schedule of the testing activities; to identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.

The test plan is simply a by-product of the detailed planning process that’s undertaken to create it. It’s the planning that matters, not the resulting documents. The ultimate goal of the test planning process is communicating the software test team’s intent, its expectations, and its understanding of the testing that’s to be performed.

The following are the important topics that help in the preparation of a test plan.

High-Level Expectations

The first topics to address in the planning process are the ones that define the test team’s high-level expectations. They are fundamental topics that must be agreed to by everyone on the project team, but they are often overlooked. They might be considered “too obvious” and assumed to be understood by everyone, but a good tester knows never to assume anything.

People, Places and Things
The test plan needs to identify the people working on the project, what they do, and how to contact them. The test team will likely work with all of them, and knowing who they are and how to contact them is very important.
Similarly, the plan should identify where documents are stored, where the software can be downloaded from, where the test tools are located, and so on.

Inter-Group Responsibilities
Inter-group responsibilities identify tasks and deliverables that potentially affect the test effort. The test team’s work is driven by many other functional groups – programmers, project managers, technical writers, and so on. If these responsibilities aren’t planned out, the project, and specifically the testing, can descend into confusion, with important tasks being forgotten.
Test phases
To plan the test phases, the test team will look at the proposed development model and decide whether unique phases, or stages, of testing should be performed over the course of the project. The test planning process should identify each proposed test phase and make each phase known to the project team. This process often helps the entire project team form and understand the overall development model.
Test strategy
The test strategy describes the approach that the test team will use to test the software, both overall and in each phase. Deciding on the strategy is a complex task, one that needs to be made by very experienced testers, because it can determine the success or failure of the test effort.
Bug Reporting
Exactly what process will be used to manage the bugs needs to be planned so that each and every bug is tracked, from when it’s found to when it’s fixed – and never, ever forgotten.
Metrics and Statistics
Metrics and statistics are the means by which the progress and the success of the project, and the testing, are tracked. The test planning process should identify exactly what information will be gathered, what decisions will be made with them, and who will be responsible for collecting them.
Risks and Issues
A common and very useful part of test planning is to identify potential problem or risky areas of the project – ones that could have an impact on the test effort.

Test Case Design
The test case design specification refines the test approach and identifies the features to be covered by the design and its associated tests. It also identifies the test cases and test procedures, if any, required to accomplish the testing, and specifies the feature pass or fail criteria. The purpose of the test design specification is to organize and describe the testing that needs to be performed on a specific feature.
The following topics address this purpose and should be part of the test design specification that is created:
Test case ID or identification
A unique identifier that can be used to reference and locate the test design specification. The specification should also reference the overall test plan and contain pointers to any other plans or specifications that it references.
Test Case Description
It is a description of the software feature covered by the test design specification, for example, "the addition function of a calculator," "font size selection and display in WordPad," or "video card configuration testing of QuickTime."
Test case procedure

It is a description of the general approach that will be used to test the features. It should expand on the approach, if any, listed in the test plan, describe the technique to be used, and explain how the results will be verified.
Test case Input or Test Data
It is the input data to be used by the test case. The input may be in any form. Different inputs can be tried for the same test case to check whether the feature handles each of them correctly.
Expected result
It describes exactly what constitutes a pass and a fail of the tested feature, that is, the result that is expected from the given input.

Test Execution and Test Log Preparation

After test case design, each test case is executed and the actual result is obtained. The actual result is then compared with the expected result recorded at the design stage; if the two match, the test is passed, otherwise it is treated as failed.
The test log is then prepared, which records the pass or fail status and the data captured for each and every test case, so that it is available for later review.
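As a minimal illustration in QTP's VBScript (the Add function and the test case name are assumptions made up for this sketch, not part of any standard library), the comparison of actual and expected results for the calculator addition example could be scripted and written to the QTP test results, which serve as the test log, like this:

Function Add(a, b)                ' hypothetical function under test
    Add = a + b
End Function

Dim intExpected, intActual
intExpected = 5                   ' expected result recorded at design time for the input 2 + 3
intActual = Add(2, 3)             ' actual result obtained during execution
If intActual = intExpected Then
    Reporter.ReportEvent micPass, "TC_Calc_Add_01", "Expected " & intExpected & ", got " & intActual
Else
    Reporter.ReportEvent micFail, "TC_Calc_Add_01", "Expected " & intExpected & ", got " & intActual
End If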


Defect Tracking

A defect can be defined in one of two ways. From the producer's viewpoint, a defect is a deviation from specifications, whether something is missing, wrong, etc. From the customer's viewpoint, a defect is anything that causes customer dissatisfaction, whether it is in the requirements or not; this view is known as "fit for use". It is critical that defects identified at each stage of the project life cycle be tracked to resolution.


Defects are recorded for following major purposes:

To correct the defect
To report status of the application
To gather statistics used to develop defect expectations in future applications
To improve the software development process


Most project teams utilize some type of tool to support the defect tracking process. This tool could be as simple as a white board or a table created and maintained in a word processor or one of the more robust tools available today, on the market, such as Mercury's Test Director etc. Tools marketed for this purpose usually come with some number of customizable fields for tracking project specific data in addition to the basics. They also provide advanced features such as standard and ad-hoc reporting, e-mail notification to developers and/or testers when a problem is assigned to them, and graphing capabilities.





At a minimum, the tool selected should support the recording and communication of significant information about a defect. For example, a defect log could include the following (a small sketch of such a record appears after the list):

Defect ID number
Descriptive defect name and type
Source of defect -test case or other source
Defect severity
Defect priority
Defect status (e.g. open, fixed, closed, user error, design, and so on) -more robust tools provide a status history for the defect
Date and time tracking for either the most recent status change, or for each change in the status history
Detailed description, including the steps necessary to reproduce the defect
Component or program where defect was found
Screen prints, logs, etc. that will aid the developer in resolution process
Stage of origination
Person assigned to research and/or correct the defect
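As a rough sketch of such a record (the field names and values below are illustrative assumptions, not a prescribed schema), a single defect entry could be held in a VBScript Scripting.Dictionary before it is entered into whichever tracking tool is in use:

Dim objDefect
Set objDefect = CreateObject("Scripting.Dictionary")
objDefect.Add "ID", "DEF-0042"
objDefect.Add "Name", "Login button unresponsive on second click"
objDefect.Add "Source", "TC_Login_03"                 ' test case that found the defect
objDefect.Add "Severity", "2 - Major"
objDefect.Add "Priority", "1 - Fix immediately"
objDefect.Add "Status", "Open"
objDefect.Add "Component", "Login screen"
objDefect.Add "AssignedTo", "Developer A"
MsgBox objDefect("ID") & " - " & objDefect("Status")  ' displays "DEF-0042 - Open"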

Severity versus Priority

The severity of a defect should be assigned objectively by the test team based on pre-defined severity descriptions. For example, a "severity one" defect may be defined as one that causes data corruption, a system crash, security violations, etc. In a large project, it may also be necessary to assign a priority to the defect, which determines the order in which defects should be fixed. The priority assigned to a defect is usually more subjective, based upon input from users regarding which defects are most important to them and therefore should be fixed first.

It is recommended that severity levels be defined at the start of the project so that they are consistently assigned and understood by the team. This foresight can help test teams avoid the common disagreements with development teams about the criticality of a defect.

Some general principles

The primary goal is to prevent defects. Wherever this is not possible or practical, the goals are to both find the defect as quickly as possible and minimize the impact of the defect.

The defect management process, like the entire software development process, should be risk driven, i.e., strategies, priorities and resources should be based on an assessment of the risk and the degree to which the expected impact of risk can be reduced.

Defect measurement should be integrated into the development process and be used by the project team to improve the development process. In other words, information on defects should be captured at the source as a natural by-product of doing the job. People unrelated to the project or system should not do it.

As much as possible, the capture and analysis of the information should be automated. There should be a document that lists tools which have defect management capabilities and can be used to automate some of the defect management processes.

Defect information should be used to improve the process. This, in fact, is the primary reason for gathering defect information.

Imperfect or flawed processes cause most defects. Thus, to prevent defects, the process must be altered.

Verification and Validation

In simpler terms, verification answers the question "Have we built the product right?", i.e. is it correct and free from errors, and does it meet the specification? Validation asks the question "Is this the right product?", i.e. have the users got what they wanted? To help you remember this, use the following:

Verification
Is it error free, does it do what was specified?
Validation
Is it valid, is this what you really, really want?

Verification requires several types of reviews, including requirements reviews, design reviews, code walkthroughs, code inspections, and test reviews. The system user should be involved in these reviews to find defects before they are built into the system.


Validation is accomplished simply by executing a real-life function (if you wanted to check whether your mechanic had fixed the starter on your car, you’d try to start the car).

What is a Process?

A process can be defined as a set of activities that represent the way work is performed. The outcome from a process is usually a product or service. Both software development and software testing are processes. There are two ways to visually portray a process.
One is the Plan Do Check Act (PDCA) cycle.
The other is a workbench.
The PDCA cycle is a conceptual view of a process, while the workbench is a more practical illustration of a process.

The PDCA View of a Process

P – Devise a Plan


Define your objective and determine the conditions and methods required to achieve your objective. Describe clearly the goals and policies needed to achieve the objective at this stage. Express a specific objective numerically. Determine the procedures and conditions for the means and methods you will use to achieve the objective.


D – Execute (or Do) the Plan

Create the conditions and perform the necessary teaching and training to execute the plan. Make sure everyone thoroughly understands the objectives and the plan. Teach workers the procedures and skills they need to fulfill the plan and thoroughly understand the job. Then perform the work according to these procedures.

C – Check the Results
Check to determine whether work is progressing according to the plan and whether the
expected results are obtained. Check for performance of the set procedures, changes in conditions, or abnormalities that may appear. As often as possible, compare the results of the work with the objectives.

A – Take the Necessary Action
If your checkup reveals that the work is not being performed according to plan or that results are not as anticipated, devise measures for appropriate action.
If a check detects an abnormality – that is, if the actual value differs from the target value – search for the cause of the abnormality and eliminate the cause. This will prevent the recurrence of the defect. Usually you will need to retrain workers and revise procedures to eliminate the cause of a defect.


The Workbench View of a Process


A process can be viewed as one or more workbenches. Each workbench is built on the following two components:

Objective – States why the process exists, or its purpose.
Example: A JAD session is conducted to uncover the majority of customer requirements early and efficiently, and to ensure that all involved parties interpret these requirements consistently.

People Skills – The roles, responsibilities, and associated skill sets needed to execute a process. Major roles include suppliers, owners, and customers.

Each workbench has the following components:

Inputs – The entrance criteria or deliverables needed to perform testing.

Procedures – Describe how work must be done; how methods, tools, techniques,
and people are applied to perform a process. There are Do procedures and Check
procedures. Procedures indicate the “best way” to meet standards.

Deliverables – Any product or service produced by a process. Deliverables can be
interim or external (or major). Interim deliverables are produced within the
workbench, but never passed on to another workbench. External deliverables may
be used by one or more workbenches, and have one or more customers. Deliverables
serve as both inputs to and outputs from a process.
Example: JAD Notes are interim and Requirements Specifications are external.

Standards – Measures used to evaluate products and identify nonconformance.
The basis upon which adherence to policies is measured.

Tools – Aids to performing the process.
Example: CASE tools, checklists, templates, etc.

Why are Developers not Good Testers?

Misunderstandings will not be detected, because the checker will assume that what
the other individual heard from him was correct.

Improper use of the development process may not be detected because the
individual may not understand the process.

The individual may be “blinded” into accepting erroneous system specifications
and coding because he falls into the same trap during testing that led to the
introduction of the defect in the first place.

Information services people are optimistic in their ability to do defect-free work
and thus sometimes underestimate the need for extensive testing.

Without a formal division between development and test, an individual may be
tempted to improve the system structure and documentation, rather than allocate
that time and effort to the test.

What is SDLC?

SDLC stands for Systems/Software Development Life Cycle. The Systems Development Life Cycle (SDLC) is any logical process used by a systems analyst to develop an information system, including requirements, validation, training, and user (stakeholder) ownership. Any SDLC should result in a high quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned Information Technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.

Model of SDLC


Planning -> Analysis -> Design -> Implementation -> Maintenance


Computer systems have become more complex and often (especially with the advent of Service-Oriented Architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of systems development life cycle (SDLC) models have been created. They are:

1) Waterfall Model
2) Spiral Model
3) RAD (Rapid Application Development) Model
4) Incremental Model
5) Prototype Model
6) Agile Model
7) Big Bang Model
8) V Model

Strengths and Weaknesses of SDLC Models



Waterfall Model

A project using waterfall model moves down a series of steps starting from an initial idea to a final product. At the end of each step, the project team holds a review to determine if they’re ready to move to the next step. If the project isn’t ready to progress, it stays at that level until it’s ready. Each phase requires well-defined information, utilizes well-defined process, and results in well-defined outputs. Resources are required to complete the process in each phase and each phase is accomplished through the application of explicit methods, tools and techniques.

The Waterfall model is also called the Phased model because of the sequential move from one phase to another, the implication being that systems cascade from one level to the next in smooth progression. It has seven phases of development, as represented in the Waterfall Model figure.



Notice three important points about this model.

There’s a large emphasis on specifying what the product will be.
The steps are discrete; there’s no overlap.
There’s no way to back up. As soon as you’re on a step, you need to complete the tasks for that step and then move on.

Spiral Model

The traditional software process models don't deal with the risks that may be faced during project development. One of the major causes of project failure in the past has been negligence of project risks; because of this, nobody was prepared when something unforeseen happened. Barry Boehm recognized this and tried to incorporate project risk as a factor in a life cycle model. The result is the Spiral model, which was first presented in 1986. The new model aims at incorporating the strengths and avoiding the difficulties of the other models by shifting the management emphasis to risk evaluation and resolution.


Each phase in the spiral model is split into four sectors of major activities.

These activities are as follows:

Objective setting:

This activity involves specifying the project and process objectives in terms of their functionality and performance.

Risk analysis:

It involves identifying and analyzing alternative solutions. It also involves identifying the risks that may be faced during project development.

Engineering:

This activity involves the actual construction of the system.

Customer evaluation:

During this phase, the customer evaluates the product for any errors and modifications.


RAD Model

RAD (Rapid Application Development), also sometimes referred to as Rapid Prototyping, is a method of decreasing the time taken to design software systems. It uses incremental development and the construction of prototypes, and encourages constant feedback from users/customers by keeping lines of communication clear, with the end goal of expediting the development cycle. The idea is that products can be developed faster and with higher quality through:

* Gathering requirements using workshops or focus groups
* Prototyping and early, reiterative user testing of designs
* The re-use of software components
* A rigidly paced schedule that defers design improvements to the next product
version
* Less formality in reviews and other team communication

Rapid Application Development encouraged the creation of quick-and-dirty prototype-style software which fulfilled most of the user’s requirements but not necessarily all. Development would take place in a series of short cycles, called time boxes, each of which would deepen the functionality of the application a little more. Features to be implemented in each time box were agreed in advance and this plan was rigidly adhered to. The strong emphasis on this point came from unhappy experience with other development practices, in which new requirements tended to be added as the project evolved, causing massive chaos and disrupting the carefully prepared plans and development schedules. The Rapid Application Development methodology advocated that development be undertaken by small, experienced teams using CASE (Computer Aided Software Engineering) tools to enhance their productivity.



Prototype model


The Prototyping model, also known as the Evolutionary model, came into the SDLC because of certain failures in the first version of application software. A failure in the first version of an application inevitably leads to the need for redoing it. To avoid such failures, the concept of Prototyping is used. The basic idea of Prototyping is that instead of fixing requirements before design and coding can begin, a prototype is built to understand the requirements. The prototype is built using known requirements. By viewing or using the prototype, the user can actually get a feel for how the system will work.

The prototyping model has been defined as:

“A model whose stages consist of expanding increments of an operational software with the direction of evolution being determined by operational experience.”

Prototyping Process


The following activities are carried out in the prototyping process:

The developer and the user work together to define the specifications of the critical parts of the system.
The developer constructs a working model of the system.
The resulting prototype is a partial representation of the system.
The prototype is demonstrated to the user.
The user identifies problems and redefines the requirements.
The designer uses the validated requirements as a basis for designing the actual or production software

Prototyping is used in the following situations:

When an earlier version of the system does not exist.
When the user's needs are not clearly definable/identifiable.
When the user is unable to state his/her requirements.
When user interfaces are an important part of the system being developed.


Big Bang Model


The Big Bang Model is the one in which a huge amount of matter (people and money) is put together, a lot of energy is expended, often violently, and out comes the perfect software product... or it doesn't.

The beauty of this model is that it’s simple. There is little planning, scheduling, or formal development process. All the effort is spent developing the software and writing the code. It’s an ideal process if the product requirements aren’t well understood and the final release date is flexible. It’s also important to have flexible customers, because they won’t know what they’re getting until the very end.


Incremental Model

The Incremental model is an evolution of the waterfall model. The product is designed, implemented, integrated and tested as a series of incremental builds. It is a popular model of software evolution used by many commercial software companies and system vendors.
The incremental software development model may be applicable to projects where:
- Software requirements are well defined, but realization may be delayed.
- The basic software functionality is required early.

Advantages are

- Generates working software quickly and early during the software life cycle.
- More flexible - less costly to change scope and requirements.
- Easier to test and debug during a smaller iteration.
- Easier to manage risk because risky pieces are identified and handled during
its iteration.

Disadvantages are

- Each phase of an iteration is rigid and does not overlap with the others.
- Problems may arise pertaining to system architecture because not all
requirements are gathered up front for the entire software life cycle.


V model

The V-Model, also called the Vee-Model, is a product-development process originally developed in Germany for government defense projects. It has become a common standard in software development. The V-Model gets its name from the fact that the process is often mapped out as a flowchart that takes the form of the letter V.

The development process proceeds from the upper left point of the V toward the right, ending at the upper right point. In the left-hand, downward-sloping branch of the V, development personnel define business requirements, application design parameters and design processes. At the base point of the V, the code is written. In the right-hand, upward-sloping branch of the V, testing and debugging is done. The unit testing is carried out first, followed by bottom-up integration testing. The extreme upper right point of the V represents product release and ongoing support.

The V-Model has gained acceptance because of its simplicity and straightforwardness. However, some developers believe it is too rigid for the evolving nature of IT (information technology) business environments.

The V-model consists of a number of phases. The Verification Phases are on the left-hand side of the V, the Coding Phase is at the bottom of the V, and the Validation Phases are on the right-hand side of the V.

Verification Phases

Requirements analysis

In the Requirements analysis phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform. However, it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated.

The user requirements document will typically describe the system’s functional, physical, interface, performance, data, security requirements, etc., as expected by the user. It is the document which the business analysts use to communicate their understanding of the system back to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase.

System Design

Systems design is the phase where system engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. If any of the requirements are not feasible, the user is informed of the issue. A resolution is found and the user requirement document is edited accordingly.

The software specification document which serves as a blueprint for the development phase is generated. This document contains the general system organization, menu structures, data structures etc. It may also hold example business scenarios, sample windows, reports for the better understanding. Other technical documentation like entity diagrams, data dictionary will also be produced in this phase. The documents for system testing are prepared in this phase.

Architecture Design

The phase of the design of computer architecture and software architecture can also be referred to as high-level design. The baseline in selecting the architecture is that it should realize all of the requirements. The high-level design typically consists of the list of modules, a brief description of the functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration testing design is carried out in this phase.

Module Design

The module design phase can also be referred to as low-level design. The designed system is broken up into smaller units or modules and each of them is explained so that the programmer can start coding directly. The low level design document or program specifications will contain a detailed functional logic of the module, in pseudocode:

* database tables, with all elements, including their type and size
* all interface details with complete API references
* all dependency issues
* error message listings
* complete input and outputs for a module.

The unit test design is developed in this stage.

Validation Phases

Unit Testing

In the V-model of software development, unit testing is the first stage of the dynamic testing process. According to software development expert Barry Boehm, a fault discovered and corrected in the unit testing phase is more than a hundred times cheaper than if it is corrected after delivery to the customer.

It involves analysis of the written code with the intention of eliminating errors. It also verifies that the code is efficient and adheres to the adopted coding standards. Testing is usually white box. It is done using the unit test design prepared during the module design phase. This may be carried out by software developers.

Integration Testing

In integration testing the separate modules will be tested together to expose faults in the interfaces and in the interaction between integrated components. Testing is usually black box as the code is not directly checked for errors.

System Testing

System testing will compare the system specifications against the actual system. The system test design is derived from the system design documents and is used in this phase. Sometimes system testing is automated using testing tools. Once all the modules are integrated several errors may arise. Testing done at this stage is called system testing.

User Acceptance Testing

Acceptance testing is the phase of testing used to determine whether a system satisfies the requirements specified in the requirements analysis phase. The acceptance test design is derived from the requirements document. The acceptance test phase is the phase used by the customer to determine whether or not to accept the system.

Wednesday, 6 January 2010

Test Maturity Model Integration(TMMi)


Introduction to Test Maturity Model Integration(TMMi)


It is a detailed model for test process improvement and is positioned as being complementary to the CMMi.

It provides a structured presentation of maturity levels, allowing for standard TMMi
assessments and certification, enabling a consistent deployment of the standards and the collection of industry metrics.

TMMi has a rapidly growing uptake across Europe, Asia and the USA, and owes its popularity to being the only independent test process measurement method.


Why we need TMMi ?

Despite encouraging results with various quality improvement approaches, the software industry is still far from zero defects.

Limited attention is given to Testing in the various software Process improvement models such as CMM or CMMi.

TMMi is a detailed model for test process improvement; it can complement any process improvement model or be used as a stand-alone model.

The TMMi has been developed to support organizations at evaluating and improving their test process.

TMMi maturity criteria will improve the test process and have a positive impact on product quality, test engineering productivity, and cycle-time effort.


Background and History of TMMi


The TMM framework was developed by the Illinois Institute of Technology, whereas the TMMi framework has been developed by the TMMi Foundation as a guideline and reference framework for test process improvement.

TMM also uses the concept of maturity levels for process evaluation and improvement. In addition, process areas, maturity goals and key practices are identified.

Sources

The development of the TMMi has used the TMM Framework developed by Illinois Institute of Technology.

It was also guided by the work done on the CMMi, a process improvement model that has widespread support in the IT industry.

TMMi has been developed as a staged model (a staged model uses predefined sets of process areas to define an improvement path for an organization), but it can also be used as a continuous model.

Other sources for the TMMi development include Gelperin and Hetzel’s Evolution of Testing model, which describes the evolution of the test process over a 40-year period.

Scope

Software and System Engineering


TMMi is intended to support testing activities and test process improvement in both the systems engineering and software engineering disciplines.

Test Levels

Some models for test process improvement focus mainly on high-level testing. The TMMi addresses all test levels and aspects of structured testing. With respect to dynamic testing, both low-level testing and high-level testing are within the scope of the TMMi.

Levels of TMMi

TMMi consists of 5 maturity levels

Level 1: Initial
Level 2: Defined
Level 3: Integrated
Level 4: Management and Measurement
Level 5: Optimized


Levels and the Process Areas



Types of TMMi Representations

Staged Representation

Within the staged representation the architecture prescribes the stages that an
organization must proceed through in an orderly fashion to improve its testing
process.

Continuous Representation

Within the continuous representation there is no fixed set of levels or stages to proceed through. An organization applying the continuous representation can select areas for improvement from many different categories.

Level 1 : Initial

Objectives

- The objective of testing is to show that the software works correctly.
- This level lacks trained staff, resources and tools.
- Software gets delivered without quality assurance.

Main Goal
Software should run without major failure.


Level 2: Defined


Objectives

- At this level, testing is separated from debugging; they are considered distinct activities.
- Testing Phase comes after coding.
- Primary goal of testing is to show software meets specifications.
- Basic testing techniques and methods are in place.

Main Goals

- Develop Testing and Debugging goals and policies.
- Initiate a Test Planning Process
- Institutionalize basic Testing techniques and methods

Level 3: Integrated

Objectives

- Testing gets integrated into entire life cycle.
- Test Objectives are based on requirements.
- Test Organization Exists
- Testing recognized as a professional activity.

Main Goals
- Establish a test organization
- Establish a technical training Program.
- Integrate testing into the software life cycle
- Control and monitor the Testing Process.

Level 4 : Management and Measurement

Objectives

- Testing is a measured and quantified process.
- Reviews at all development phases are now recognized as tests.
- Products are tested for quality attributes such as reliability, usability and maintainability.
- Test cases are collected and recorded in a test database for reuse and regression testing.
- Defects are logged and given severity levels.

Main Goals
- Establish an organization-wide review program.
- Establish a test measurement program.
- Software quality evaluation.

Level 5 : Optimized

Objectives

- Testing is defined and managed.
- Testing costs and effectiveness can be monitored.
- Testing can be fine-tuned and continuously improved.
- Defect prevention and quality control are practiced.
- Automated tools are a primary part of the testing process.
- Tools provide support for test case design and defect collection and analysis.
- Test-related metrics also have tool support.
- Process reuse is practiced.

Main Goals

- Defect Prevention
- Quality Control
- Test Process Optimization

Structure of TMMi



Components of TMMi

Maturity Level:
A maturity level within the TMMi can be regarded as a degree of
organizational test process quality. It is defined as an evolutionary plateau of
test process improvement.

Process Areas:
Process areas identify the issues that must be addressed to achieve a maturity level. Each process area identifies a cluster of test-related activities. When the practices are all performed, a significant improvement in activities related to that area will be made.

Specific Goals
A specific goal describes a unique characteristic that must be present to satisfy the process area. A specific goal is a required model component and is used in assessments to help determine whether a process area is satisfied.

Generic Goals
Generic goals appear near the end of a process area and are called ‘generic’ because the same goal statement appears in multiple process areas.

Specific Practices
A specific practice is the description of an activity that is considered important in achieving the associated specific goal.

Generic Practices
Generic practices appear near the end of a process area and called ‘generic’ because the same practice appears in multiple process areas. A generic practice is the description of an activity that is considered important in achieving the associated generic goal.

Can we assess our maturity on our own?

The answer is “YES”. An assessment makes a clear distinction between components that are required (goals) and those that are recommended (specific practices, typical work products, etc.). To carry out a self-assessment you need:

- The organization must feel ownership of the assessment.
- Support of senior management.
- A TMMi framework to refer to.
- A technically competent team.

Assessment Components

Required Components
Required components describe what an organization must achieve to satisfy a process area.
Expected Components
Expected components describe what an organization will typically implement to achieve a required component.
Informative Components
Informative components provide details that help organizations get started in
thinking about how to approach the required and expected components.


What should be the approach?


- Assess your current testing process.
- Determine your current maturity level.
- Develop and implement an improvement plan.
- Repeat the assessment to demonstrate that improvements have been made.

What is the goal of a software tester?

The goal of a Software Tester is to find bugs, and find them as early as possible and make sure they get fixed.

Eight Basic Principles of Testing


- Define the expected output or result.
- Don't test your own programs.
- Inspect the results of each test completely.
- Include test cases for invalid or unexpected conditions.
- Test the program to see if it does what it is not supposed to do as well as what it
is supposed to do.
- Avoid disposable test cases unless the program itself is disposable.
- Do not plan tests assuming that no errors will be found.
- The probability of locating more errors in any one module is directly proportional to the number of errors already found in that module.

Best Testing Practices to be followed during testing

- Testing and evaluation responsibility is given to every member, so as to generate
team responsibility among all.
- Develop Master Test Plan so that resource and responsibilities are understood and
assigned as early in the project as possible.
- Systematic evaluation and preliminary test design are established as a part of all
system engineering and specification work.
- Testing is used to verify that all project deliverables and components are
complete, and to demonstrate and track true project progress.
- Risk prioritized list of test requirements and objectives (such as requirements-
based, design-based, etc) are developed and maintained.
- Conduct Reviews as early and as often as possible to provide developer feedback
and get problems found and fixed as they occur.

What is a Defect?

A Defect is an undesirable state. In order to know what a defect is we must first define a desirable state. For example, if we believe a desirable state for a corporation is that a phone call is answered by a person, then if it is not answered by a person that would be considered an undesirable state.

What is Error,Fault and Failure?

An Error is a human action that produces an incorrect result.

A Fault is a manifestation of an error in software. Faults are also known as defects or bugs.

A fault, if encountered, may cause a Failure, which is a deviation of the software from its expected delivery or service.

What is a Quality Software Product?

There are two viewpoints of quality software:

IT's view of quality software means meeting requirements: the person building the software builds it in accordance with the requirements.
The user's view of quality software means fitness for use: the software produced by IT meets the user’s needs, regardless of the documented requirements.

These two viewpoints cause two gaps.



The first gap is the IT gap. It is the gap between what is specified to be delivered, meaning the documented requirements and internal IT standards, and what is actually built.

The second gap is between what IT actually delivers compared to what the user wants.

The role of software testing helps to close the two gaps.

The closer these gaps are to being closed, the higher the quality of the product.


How can the user's gap be reduced? This can be done by the following:

- Customer surveys
- JAD (joint application development) sessions, where IT and the users come together to negotiate and agree upon requirements
- More user involvement while building information products

Why Testing is needed?

Testing is necessary because the existence of faults in any product is inevitable.
Testing improves the reliability of the product. Imagine you are driving a car whose brakes have never been tested. Would you feel safe driving that car?

Testing helps to deliver quality products that satisfy the user’s requirements, needs and expectations. If testing is done poorly, defects are found during operation; this results in high maintenance cost and user dissatisfaction, and may also cause failures.

Raising the reliability of the product means finding and removing errors. Hence one should not test a product to show that it works; rather, one should start with the assumption that the program contains errors and then test the program to find as many of the errors as possible.

What is Testing?

Testing is the process of measuring the quality of a product. The product could be anything from paper, a pen, or a car to a piece of software. Through testing we measure how closely we have achieved quality by examining the relevant factors such as correctness, reliability, usability, maintainability, reusability, testability, etc.