ISTQB Syllabus


5. Test Management (K3)

5.1   Test Organization (K2)

Terms

Tester, test leader, test manager

5.1.1           Test Organization and Independence (K2)

The effectiveness of finding defects by testing and reviews can be improved by using independent testers. Options for independence include the following:

o No independent testers; developers test their own code

o Independent testers within the development teams

o Independent test team or group within the organization, reporting to project management or executive management

o Independent testers from the business organization or user community

o Independent test specialists for specific test types, such as usability testers, security testers or certification testers (who certify a software product against standards and regulations)

o Independent testers outsourced or external to the organization

For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so.

The benefits of independence include:

o Independent testers see other and different defects, and are unbiased

o An independent tester can verify assumptions people made during specification and implementation of the system

Drawbacks include:

o Isolation from the development team (if treated as totally independent)

o Developers may lose a sense of responsibility for quality

o Independent testers may be seen as a bottleneck or blamed for delays in release

Testing tasks may be done by people in a specific testing role, or may be done by someone in another role, such as a project manager, quality manager, developer, business and domain expert, infrastructure or IT operations.

5.1.2           Tasks of the Test Leader and Tester (K1)

In this syllabus two test positions are covered, test leader and tester. The activities and tasks performed by people in these two roles depend on the project and product context, the people in the roles, and the organization.

Sometimes the test leader is called a test manager or test coordinator. The role of the test leader may be performed by a project manager, a development manager, a quality assurance manager or the manager of a test group. In larger projects two positions may exist: test leader and test manager. Typically the test leader plans, monitors and controls the testing activities and tasks as defined in Section 1.4.

Typical test leader tasks may include:

o Contribute the testing perspective to other project activities, such as integration planning

o Plan the tests – considering the context and understanding the test objectives and risks – including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels, cycles, and planning incident management

o Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria

o Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems

o Set up adequate configuration management of testware for traceability

o Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product

o Decide what should be automated, to what degree, and how

o Select tools to support testing and organize any training in tool use for testers

o Decide about the implementation of the test environment

o Write test summary reports based on the information gathered during testing

Typical tester tasks may include:

o Review and contribute to test plans

o Analyze, review and assess user requirements, specifications and models for testability

o Create test specifications

o Set up the test environment (often coordinating with system administration and network management)

o Prepare and acquire test data

o Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results

o Use test administration or management tools and test monitoring tools as required

o Automate tests (may be supported by a developer or a test automation expert)

o Measure performance of components and systems (if applicable)

o Review tests developed by others

People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically testers at the component and integration level would be developers, testers at the acceptance test level would be business experts and users, and testers for operational acceptance testing would be operators.

5.2   Test Planning and Estimation (K3)

Terms

Test approach, test strategy

5.2.1           Test Planning (K2)

This section covers the purpose of test planning within development and implementation projects, and for maintenance activities. Planning may be documented in a master test plan and in separate test plans for test levels such as system testing and acceptance testing. The outline of a test-planning document is covered by the ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).

Planning is influenced by the test policy of the organization, the scope of testing, objectives, risks, constraints, criticality, testability and the availability of resources. As the project and test planning progress, more information becomes available and more detail can be included in the plan.

Test planning is a continuous activity and is performed in all life cycle processes and activities. Feedback from test activities is used to recognize changing risks so that planning can be adjusted.

5.2.2           Test Planning Activities (K3)

Test planning activities for an entire system or part of a system may include:

o Determining the scope and risks and identifying the objectives of testing

o Defining the overall approach of testing, including the definition of the test levels and entry and exit criteria

o Integrating and coordinating the testing activities into the software life cycle activities (acquisition, supply, development, operation and maintenance)

o Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated

o Scheduling test analysis and design activities

o Scheduling test implementation, execution and evaluation

o Assigning resources for the different activities defined

o Defining the amount, level of detail, structure and templates for the test documentation

o Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues

o Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution

5.2.3           Entry Criteria (K2)

Entry criteria define when to start testing, such as at the beginning of a test level or when a set of tests is ready for execution.

Typically entry criteria may cover the following:

o Test environment availability and readiness

o Test tool readiness in the test environment

o Testable code availability

o Test data availability

5.2.4           Exit Criteria (K2)

Exit criteria define when to stop testing, such as at the end of a test level or when a set of tests has achieved a specific goal.

Typically exit criteria may cover the following:

o Thoroughness measures, such as coverage of code, functionality or risk

o Estimates of defect density or reliability measures

o Cost

o Residual risks, such as defects not fixed or lack of test coverage in certain areas

o Schedules such as those based on time to market
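
As an illustration only (not part of the syllabus), the short Python sketch below shows how such exit criteria might be checked automatically at the end of a test level. The criteria chosen and the threshold values are invented for the example.

    # Illustrative sketch: checking example exit criteria at the end of a test level.
    # The criteria and thresholds are invented, not prescribed by the syllabus.

    def exit_criteria_met(code_coverage: float, defect_density: float,
                          open_critical_defects: int) -> bool:
        checks = [
            code_coverage >= 0.80,       # thoroughness: at least 80% code coverage
            defect_density <= 0.5,       # estimate: at most 0.5 defects per KLOC
            open_critical_defects == 0,  # residual risk: no open critical defects
        ]
        return all(checks)

    print(exit_criteria_met(0.85, 0.3, 1))  # False: one critical defect is still open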

 

5.2.5           Test Estimation (K2)

Two approaches for the estimation of test effort are:

o The metrics-based approach: estimating the testing effort based on metrics of former or similar projects or based on typical values

o The expert-based approach: estimating the tasks based on estimates made by the owner of the tasks or by experts

Once the test effort is estimated, resources can be identified and a schedule can be drawn up.
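
As an illustration only (not part of the syllabus), the sketch below applies the metrics-based approach, reusing the effort per test case observed on a former, similar project. All figures are invented.

    # Illustrative sketch: metrics-based test effort estimation.
    # All figures are invented for the example.

    previous_test_cases = 400        # test cases executed on a former, similar project
    previous_effort_hours = 1200.0   # total testing effort spent on that project

    hours_per_test_case = previous_effort_hours / previous_test_cases  # 3.0 hours

    planned_test_cases = 250         # scope of the new project
    estimated_effort = planned_test_cases * hours_per_test_case

    print(f"Estimated testing effort: {estimated_effort:.0f} hours")  # 750 hours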

The testing effort may depend on a number of factors, including:

o Characteristics of the product: the quality of the specification and other information used for test models (i.e., the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation

o Characteristics of the development process: the stability of the organization, tools used, test process, skills of the people involved, and time pressure

o The outcome of testing: the number of defects and the amount of rework required

5.2.6           Test Strategy, Test Approach (K2)

The test approach is the implementation of the test strategy for a specific project. The test approach is defined and refined in the test plans and test designs. It typically includes the decisions made based on the (test) project’s goal and risk assessment. It is the starting point for planning the test process, for selecting the test design techniques and test types to be applied, and for defining the entry and exit criteria.

The selected approach depends on the context and may consider risks, hazards and safety, available resources and skills, the technology, the nature of the system (e.g., custom built vs. COTS), test objectives, and regulations.

Typical approaches include:

o Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk

o Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles)

o Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality characteristic-based

o Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies

o Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks

o Consultative approaches, such as those in which test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team

o Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites

Different approaches may be combined, for example, a risk-based dynamic approach.

5.3   Test Progress Monitoring and Control (K2)

Terms

Defect density, failure rate, test control, test monitoring, test summary report

5.3.1           Test Progress Monitoring (K1)

The purpose of test monitoring is to provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget. Common test metrics include:

o Percentage of work done in test case preparation (or percentage of planned test cases prepared)

o Percentage of work done in test environment preparation

o Test case execution (e.g., number of test cases run/not run, and test cases passed/failed)

o Defect information (e.g., defect density, defects found and fixed, failure rate, and re-test results)

o Test coverage of requirements, risks or code

o Subjective confidence of testers in the product

o Dates of test milestones

o Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test
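
As an illustration only (not part of the syllabus), the sketch below computes a few of the metrics listed above from invented raw figures.

    # Illustrative sketch: computing some common test monitoring metrics.
    # The raw figures are invented for the example.

    planned_cases, prepared_cases = 200, 150
    run_cases, passed_cases = 120, 100
    defects_found, size_kloc = 45, 30.0

    preparation_pct = 100.0 * prepared_cases / planned_cases  # 75.0% prepared
    execution_pct = 100.0 * run_cases / planned_cases         # 60.0% run
    pass_rate_pct = 100.0 * passed_cases / run_cases          # ~83.3% passed
    defect_density = defects_found / size_kloc                # 1.5 defects per KLOC

    print(f"Prepared {preparation_pct:.1f}%, run {execution_pct:.1f}%, "
          f"passed {pass_rate_pct:.1f}%, density {defect_density:.2f}/KLOC")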

5.3.2           Test Reporting (K2)

Test reporting is concerned with summarizing information about the testing endeavor, including:

o What happened during a period of testing, such as dates when exit criteria were met

o Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software

The outline of a test summary report is given in ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).

Metrics should be collected during and at the end of a test level in order to assess:

o The adequacy of the test objectives for that test level

o The adequacy of the test approaches taken

o The effectiveness of the testing with respect to the objectives

5.3.3           Test Control (K2)

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.

Examples of test control actions include:

o Making decisions based on information from test monitoring

o Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)

o Changing the test schedule due to availability or unavailability of a test environment

o Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build

 

5.4   Configuration Management (K2)

Terms

Configuration management, version control

Background

The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.

For testing, configuration management may involve ensuring the following:

o All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability can be maintained throughout the test process

o All identified documents and software items are referenced unambiguously in test documentation

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness(es).

During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.
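
As an illustration only (not part of the syllabus), the sketch below shows one possible record linking versioned testware to the version of the test object it was run against, so that a tested item can be uniquely identified and reproduced. All identifiers are invented; a real project would keep such records in a configuration management tool.

    # Illustrative sketch: a traceability record for one test run, tying every
    # versioned item of testware to the version of the test object.
    # All identifiers below are invented for the example.

    test_run_record = {
        "test_object": {"name": "billing-service", "version": "2.4.1"},
        "testware": {
            "test_plan": "TP-BILL-007 v1.2",
            "test_cases": "TC-BILL suite v1.5",
            "test_data": "billing-testdata v3",
            "test_harness": "harness v0.9",
        },
        "environment": "ENV-ACC-02",
    }

    # Because every item is identified and version controlled, the exact
    # configuration of this run can be re-created when a defect must be reproduced.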

 

5.5   Risk and Testing (K2)

Terms

Product risk, project risk, risk, risk-based testing

Background

Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or a potential problem. The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).
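
As an illustration only (not part of the syllabus), the sketch below quantifies this definition on a simple ordinal scale, taking the level of risk as likelihood multiplied by impact, each scored from 1 (low) to 5 (high). The scale is invented; any consistent scheme could be used.

    # Illustrative sketch: level of risk = likelihood of the adverse event x impact.
    # The 1-5 scoring scale is invented for the example.

    def risk_level(likelihood: int, impact: int) -> int:
        return likelihood * impact

    # A fairly likely event (4) causing severe harm (5) scores 20 out of a possible 25.
    print(risk_level(4, 5))  # 20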

5.5.1           Project Risks (K2)

Project risks are the risks that surround the project’s capability to deliver its objectives, such as:

o Organizational factors:

  • Skill, training and staff shortages
  • Personnel issues
  • Political issues, such as:

    - Problems with testers communicating their needs and test results

    - Failure by the team to follow up on information found in testing and reviews (e.g., not improving development and testing practices)

  • Improper attitude toward or expectations of testing (e.g., not appreciating the value of finding defects during testing)

o Technical issues:

  • Problems in defining the right requirements

  • The extent to which requirements cannot be met given existing constraints

  • Test environment not ready on time
  • Late data conversion, migration planning and development and testing of data conversion/migration tools

  • Low quality of the design, code, configuration data, test data and tests

o Supplier issues:

  • Failure of a third party
  • Contractual issues

When analyzing, managing and mitigating these risks, the test manager is following well-established project management principles. The ‘Standard for Software Test Documentation’ (IEEE Std 829-1998) outline for test plans requires risks and contingencies to be stated.

5.5.2           Product Risks (K2)

Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product. These include:

o Failure-prone software delivered

o The potential that the software/hardware could cause harm to an individual or company

o Poor software characteristics (e.g., functionality, reliability, usability and performance)

o Poor data integrity and quality (e.g., data migration issues, data conversion problems, data transport problems, violation of data standards)

o Software that does not perform its intended functions

Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.

Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:

o Determine the test techniques to be employed

o Determine the extent of testing to be carried out

o Prioritize testing in an attempt to find the critical defects as early as possible (see the sketch after this list)

o Determine whether any non-testing activities could be employed to reduce risk (e.g., providing training to inexperienced designers)
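
As an illustration of the prioritization point above (not part of the syllabus), the sketch below orders test cases by a likelihood-times-impact risk score so that the highest-risk areas are tested first. The names and scores are invented.

    # Illustrative sketch: risk-based prioritization of test cases.
    # Names and risk scores (likelihood x impact, each 1-5) are invented.

    tests = [
        {"name": "login",          "likelihood": 2, "impact": 5},
        {"name": "payment",        "likelihood": 4, "impact": 5},
        {"name": "report_layout",  "likelihood": 3, "impact": 1},
        {"name": "data_migration", "likelihood": 4, "impact": 4},
    ]

    # Highest risk exposure first, so critical defects surface as early as possible.
    for test in sorted(tests, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
        print(test["name"], test["likelihood"] * test["impact"])
    # payment 20, data_migration 16, login 10, report_layout 3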

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.

To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:

o Assess (and reassess on a regular basis) what can go wrong (risks)

o Determine what risks are important to deal with

o Implement actions to deal with those risks

In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.

5.6   Incident Management (K3)

Terms

Incident logging, incident management, incident report

Background

Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. An incident must be investigated and may turn out to be a defect. Appropriate actions to dispose of incidents and defects should be defined. Incidents and defects should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish an incident management process and rules for classification.

Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as “Help” or installation guides.

Incident reports have the following objectives:

o Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary

o Provide test leaders a means of tracking the quality of the system under test and the progress of the testing

o Provide ideas for test process improvement

Details of the incident report may include:

o Date of issue, issuing organization, and author

o Expected and actual results

o Identification of the test item (configuration item) and environment

o Software or system life cycle process in which the incident was observed

o Description of the incident to enable reproduction and resolution, including logs, database dumps or screen shots

o Scope or degree of impact on stakeholder(s) interests

o Severity of the impact on the system

o Urgency/priority to fix

o Status of the incident (e.g., open, deferred, duplicate, waiting to be fixed, fixed awaiting re-test, closed)

o Conclusions, recommendations and approvals

o Global issues, such as other areas that may be affected by a change resulting from the incident

o Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed

o References, including the identity of the test case specification that revealed the problem
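
As an illustration only (not part of the syllabus), the sketch below models a subset of these details as a record type. The field names are invented; IEEE Std 829-1998 remains the authoritative outline for incident reports.

    # Illustrative sketch: a record type for some incident report details.
    # Field names and values are invented; IEEE Std 829-1998 defines the outline.

    from dataclasses import dataclass, field

    @dataclass
    class IncidentReport:
        date_of_issue: str
        author: str
        test_item: str                # configuration item and environment
        expected_result: str
        actual_result: str
        severity: str                 # degree of impact on the system
        priority: str                 # urgency to fix
        status: str = "open"          # e.g., open, deferred, fixed awaiting re-test
        references: list[str] = field(default_factory=list)  # e.g., test case IDs

    report = IncidentReport(
        date_of_issue="2011-03-15", author="tester-a",
        test_item="billing-service 2.4.1 on ENV-ACC-02",
        expected_result="invoice total 100.00", actual_result="invoice total 0.00",
        severity="major", priority="high", references=["TC-BILL-017"],
    )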

The structure of an incident report is also covered in the ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).