ISTQB Syllabus

Testing Throughout the Software Life Cycle (K2)

2.1   Software Development Models (K2)

Terms

Commercial Off-The-Shelf (COTS), iterative-incremental development model, validation, verification, V-model

Background

Testing does not exist in isolation; test activities are related to software development activities. Different development life cycle models need different approaches to testing.

2.1.1 V-model (Sequential Development Model) (K2)

Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.

The four levels used in this syllabus are:

o Component (unit) testing

o Integration testing

o System testing

o Acceptance testing

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.

Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) or ‘Software life cycle processes’ (IEEE/IEC 12207). Verification and validation (and early test design) can be carried out during the development of the software work products.

2.1.2 Iterative-incremental Development Models (K2)

Iterative-incremental development is the process of establishing requirements, designing, building and testing a system in a series of short development cycles. Examples are: prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and agile development models. A system that is produced using these models may be tested at several test levels during each iteration. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.

2.1.3 Testing within a Life Cycle Model (K2)

In any life cycle model, there are several characteristics of good testing:

o For every development activity there is a corresponding testing activity

o Each test level has test objectives specific to that level

o The analysis and design of tests for a given test level should begin during the corresponding development activity

o Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle

Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a Commercial Off-The-Shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g., integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).

 

2.2   Test Levels (K2)

Terms

Alpha testing, beta testing, component testing, driver, field testing, functional requirement, integration, integration testing, non-functional requirement, robustness testing, stub, system testing, test environment, test level, test-driven development, user acceptance testing

Background

For each of the test levels, the following can be identified: the generic objectives, the work product(s) being referenced for deriving test cases (i.e., the test basis), the test object (i.e., what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.

Testing a system’s configuration data shall be considered during test planning.

2.2.1 Component Testing (K2)

Test basis:

o Component requirements

o Detailed design

o Code

Typical test objects:

o Components

o Programs

o Data conversion/migration programs

o Database modules

Component testing (also known as unit, module or program testing) searches for defects in, and verifies the functioning of, software modules, programs, objects, classes, etc., that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.
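
As an illustration of how a stub can stand in for a collaborator during component testing, the following minimal Python sketch uses hypothetical names (an `order_total` component and a tax-service stub); the test runner plays the role of the driver. It is only an example, not a prescribed technique of the syllabus.

```python
import unittest
from unittest.mock import Mock

# Hypothetical component under test: computes an order total using a tax service.
def order_total(items, tax_service):
    subtotal = sum(price * qty for price, qty in items)
    return subtotal + tax_service.tax_for(subtotal)

class OrderTotalTest(unittest.TestCase):
    def test_total_includes_tax_from_service(self):
        # Stub: stands in for the real tax service, which may not exist yet.
        tax_stub = Mock()
        tax_stub.tax_for.return_value = 2.0

        total = order_total([(10.0, 1), (5.0, 2)], tax_service=tax_stub)

        self.assertEqual(total, 22.0)
        tax_stub.tax_for.assert_called_once_with(20.0)

if __name__ == "__main__":
    unittest.main()  # the test runner acts as the driver for the component
```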

Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g., searching for memory leaks) or robustness testing, as well as structural testing (e.g., decision coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool. In practice, component testing usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally managing these defects.

One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests, correcting any issues and iterating until they pass.
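
A minimal sketch of that test-first cycle, using Python's unittest and an invented `is_leap_year` example: the tests are written first (and initially fail), then just enough code is added to make them pass before the cycle repeats.

```python
import unittest

# Step 1 (red): the tests are written before the code they exercise.
class LeapYearTest(unittest.TestCase):
    def test_century_years_are_leap_only_if_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))
        self.assertFalse(is_leap_year(1900))

    def test_ordinary_years_divisible_by_4_are_leap(self):
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))

# Step 2 (green): just enough code is added to make the tests pass,
# then the cycle repeats with the next test case.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```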

2.2.2   Integration Testing (K2)

Test basis:

o Software and system design

o Architecture

o Workflows

o Use cases

Typical test objects:

o Subsystems

o Database implementation

o Infrastructure

o Interfaces

o System configuration and configuration data

Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system and hardware, and interfaces between systems.

There may be more than one level of integration testing and it may be carried out on test objects of varying size as follows:

  1. Component integration testing tests the interactions between software components and is done after component testing
  2. System integration testing tests the interactions between different systems or between hardware and software and may be done after system testing. In this case, the developing organization may control only one side of the interface. This might be considered as a risk.

Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.

The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting.

Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to ease fault isolation and detect defects early, integration should normally be incremental rather than “big bang”.

Testing of specific non-functional characteristics (e.g., performance) may be included in integration testing as well as functional testing.

At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of the individual module, as that was done during component testing. Both functional and structural approaches may be used.
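
To make the distinction concrete, the sketch below uses hypothetical modules (a `ReportService` and a `UserRepository` backed by an in-memory SQLite database) and focuses the integration test on the communication between two already component-tested modules rather than on their internal logic. All names and the schema are illustrative assumptions.

```python
import sqlite3
import unittest

# Hypothetical module B: data access through a repository.
class UserRepository:
    def __init__(self, connection):
        self._conn = connection

    def count_active(self):
        row = self._conn.execute(
            "SELECT COUNT(*) FROM users WHERE active = 1").fetchone()
        return row[0]

# Hypothetical module A: a service that depends on the repository.
class ReportService:
    def __init__(self, repository):
        self._repository = repository

    def summary(self):
        return f"active users: {self._repository.count_active()}"

class ReportServiceIntegrationTest(unittest.TestCase):
    """Exercises the real repository against a real (in-memory) database,
    checking the interaction between the two modules rather than
    re-testing their internal logic."""

    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
        self.conn.executemany("INSERT INTO users VALUES (?, ?)",
                              [("ada", 1), ("bob", 0), ("eve", 1)])

    def test_summary_reflects_data_seen_through_the_repository(self):
        service = ReportService(UserRepository(self.conn))
        self.assertEqual(service.summary(), "active users: 2")

if __name__ == "__main__":
    unittest.main()
```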

Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, those components can be built in the order required for most efficient testing.

2.2.3   System Testing (K2)

Test basis:

o System and software requirement specification

o Use cases

o Functional specification

o Risk analysis reports

Typical test objects:

o System, user and operation manuals

o System configuration and configuration data

System testing is concerned with the behavior of a whole system/product. The testing scope shall be clearly addressed in the Master and/or Level Test Plan for that test level.

In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.

System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level text descriptions or models of system behavior, interactions with the operating system, and system resources.

System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation (see Chapter 4).
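
As a small illustration of turning a decision table into test cases, the sketch below assumes an invented free-shipping business rule; each row of the table becomes one executed test case. The rule, the table and the names are assumptions for the example only.

```python
import unittest

# Hypothetical business rule: free shipping if the customer is a member
# OR the order value is at least 50, unless the item is oversized.
def free_shipping(is_member, order_value, oversized):
    return (is_member or order_value >= 50) and not oversized

# Decision table rows: (is_member, order_value, oversized) -> expected outcome
DECISION_TABLE = [
    (True,  10, False, True),
    (False, 60, False, True),
    (False, 10, False, False),
    (True,  60, True,  False),
]

class FreeShippingDecisionTableTest(unittest.TestCase):
    def test_all_rows(self):
        # Each row of the decision table is executed as its own sub-test.
        for is_member, value, oversized, expected in DECISION_TABLE:
            with self.subTest(row=(is_member, value, oversized)):
                self.assertEqual(
                    free_shipping(is_member, value, oversized), expected)

if __name__ == "__main__":
    unittest.main()
```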

An independent test team often carries out system testing.

2.2.4 Acceptance Testing (K2)

Test basis:

o User requirements

o System requirements

o Use cases

o Business processes

o Risk analysis reports

Typical test objects:

o Business processes on fully integrated system

o Operational and maintenance processes

o User procedures

o Forms

o Reports

o Configuration data

Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.

The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system’s readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.

Acceptance testing may occur at various times in the life cycle, for example:

o A COTS software product may be acceptance tested when it is installed or integrated

o Acceptance testing of the usability of a component may be done during component testing

o Acceptance testing of a new functional enhancement may come before system testing

Typical forms of acceptance testing include the following:

User acceptance testing

Typically verifies the fitness for use of the system by business users.

Operational (acceptance) testing

The acceptance of the system by the system administrators, including:

o Testing of backup/restore

o Disaster recovery

o User management

o Maintenance tasks

o Data load and migration tasks

o Periodic checks of security vulnerabilities

Contract and regulation acceptance testing

Contract acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Regulation acceptance testing is performed against any regulations that must be adhered to, such as government, legal or safety regulations.

Alpha and beta (or field) testing

Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization’s site but not by the developing team. Beta testing, or field-testing, is performed by customers or potential customers at their own locations.

Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer’s site.

 

2.3   Test Types (K2)

Terms

Black-box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, stress testing, structural testing, usability testing, white-box testing

Background

A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing.

A test type is focused on a particular test objective, which could be any of the following:

o A function to be performed by the software

o A non-functional quality characteristic, such as reliability or usability

o The structure or architecture of the software or system

o Change related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing)

A model of the software may be developed and/or used in structural testing (e.g., a control flow model or menu structure model), non-functional testing (e.g., performance model, usability model, security threat modeling), and functional testing (e.g., a process flow model, a state transition model or a plain language specification).

2.3.1 Testing of Function (Functional Testing) (K2)

The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are “what” the system does.

Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g., tests for components may be based on a component specification).

Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system (see Chapter 4). Functional testing considers the external behavior of the software (black-box testing).

A type of functional testing, security testing, investigates the functions (e.g., a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.

2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)

Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of “how” the system works.

Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing. These tests can be referenced to a quality model such as the one defined in ‘Software Engineering – Software Product Quality’ (ISO 9126). Non-functional testing considers the external behavior of the software and in most cases uses black-box test design techniques to accomplish that.
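
A minimal sketch of quantifying one such characteristic, response time, against an assumed 200 ms requirement; the operation, its implementation and the threshold are illustrative assumptions, not figures from the syllabus.

```python
import time
import unittest

# Hypothetical operation whose response time matters.
def search_catalog(term):
    time.sleep(0.01)  # stand-in for the real work
    return [term]

class ResponseTimeTest(unittest.TestCase):
    def test_search_responds_within_200_ms(self):
        # The measured response time is compared against a quantified
        # requirement (200 ms here, an assumed figure).
        start = time.perf_counter()
        search_catalog("isbn 978-0")
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, 0.2)

if __name__ == "__main__":
    unittest.main()
```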

2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)

Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.

Coverage is the extent that a structure has been exercised by a test suite, expressed as a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to test those items that were missed to increase coverage. Coverage techniques are covered in Chapter 4.

At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements, such as statements or decisions. Structural testing may be based on the architecture of the system, such as a calling hierarchy.
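
For instance, with a hypothetical `apply_discount` function containing a single decision, two tests exercising both outcomes achieve full decision (branch) coverage for that function; a coverage tool such as coverage.py can then report which statements or branches the suite missed. The function and figures are invented for illustration.

```python
import unittest

# Hypothetical function with one decision (two outcomes).
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9
    return price

class ApplyDiscountTest(unittest.TestCase):
    # Together these two tests exercise both outcomes of the decision,
    # giving 100% decision (branch) coverage of apply_discount.
    def test_member_gets_discount(self):
        self.assertAlmostEqual(apply_discount(100, True), 90)

    def test_non_member_pays_full_price(self):
        self.assertEqual(apply_discount(100, False), 100)

if __name__ == "__main__":
    unittest.main()

# A coverage tool can then measure the suite, for example with coverage.py:
#   coverage run --branch -m unittest && coverage report -m
```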

Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g., to business models or menu structures).

2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)

After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation. Debugging (locating and fixing a defect) is a development activity, not a testing activity.

Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is based on the risk of not finding defects in software that was working previously.

Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.

Regression testing may be performed at all test levels, and includes functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
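
A minimal sketch of an automatable regression test (names such as `LoginRegressionTest` and `normalize_username` are invented): each test pins down a previously fixed defect so the whole suite can be re-run unattended after every change, for example from a scheduled CI job.

```python
import unittest

# Hypothetical regression test kept alongside the feature it guards;
# the suite grows slowly and is re-run after every modification.
class LoginRegressionTest(unittest.TestCase):
    def test_defect_1234_login_with_trailing_space_in_username(self):
        # Re-creates the conditions of a previously fixed defect so any
        # reintroduction is caught automatically on the next run.
        self.assertEqual(normalize_username("alice "), "alice")

def normalize_username(name):
    return name.strip().lower()

if __name__ == "__main__":
    # Running the whole file unattended is what makes the regression
    # suite cheap to repeat many times.
    unittest.main()
```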

 

2.4   Maintenance Testing (K2) 

Terms

Impact analysis, maintenance testing

Background

Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment are often corrected, changed or extended. The planning of releases in advance is crucial for successful maintenance testing. A distinction has to be made between planned releases and hot fixes. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.

Modifications include planned enhancement changes (e.g., release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, planned upgrade of Commercial-Off-The-Shelf software, or patches to correct newly exposed or discovered vulnerabilities of the operating system.

Maintenance testing for migration (e.g., from one platform to another) should include operational tests of the new environment as well as of the changed software. Migration testing (conversion testing) is also needed when data from another application will be migrated into the system being maintained.

Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.

In addition to testing what has been changed, maintenance testing includes regression testing to parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and to the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types. Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do. The impact analysis may be used to determine the regression test suite.

Maintenance testing can be difficult if specifications are out of date or missing, or testers with domain knowledge are not available.