ISTQB Syllabus


Static Techniques (K2)

3.1   Static Techniques and the Test Process (K2)


Dynamic testing, static testing


Unlike dynamic testing, which requires the execution of software, static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation without the execution of the code.

Reviews are a way of testing software work products (including code) and can be performed well before dynamic test execution. Defects detected during reviews early in the life cycle (e.g., defects found in requirements) are often much cheaper to remove than those detected by running tests on the executing code.

A review could be done entirely as a manual activity, but there is also tool support. The main manual activity is to examine a work product and make comments about it. Any software work product can be reviewed, including requirements specifications, design specifications, code, test plans, test specifications, test cases, test scripts, user guides or web pages.

Benefits of reviews include early defect detection and correction, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions, for example, in requirements, which are unlikely to be found in dynamic testing.

Reviews, static analysis and dynamic testing have the same objective – identifying defects. They are complementary; the different techniques can find different types of defects effectively and efficiently. Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.

Typical defects that are easier to find in reviews than in dynamic testing include: deviations from standards, requirement defects, design defects, insufficient maintainability and incorrect interface specifications.


3.2   Review Process (K2)


Entry criteria, formal review, informal review, inspection, metric, moderator, peer review, reviewer, scribe, technical review, walkthrough


The different types of reviews vary from informal, characterized by no written instructions for reviewers, to systematic, characterized by team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.

The way a review is carried out depends on the agreed objectives of the review (e.g., find defects, gain understanding, educate testers and new team members, or discussion and decision by consensus).

3.2.1           Activities of a Formal Review (K1)

A typical formal review has the following main activities:

  1. Planning
  • Defining the review criteria
  • Selecting the personnel
  • Allocating roles
  • Defining the entry and exit criteria for more formal review types (e.g., inspections)
  • Selecting which parts of documents to review
  • Checking entry criteria (for more formal review types)
  2. Kick-off
  • Distributing documents
  • Explaining the objectives, process and documents to the participants
  3. Individual preparation
  • Preparing for the review meeting by reviewing the document(s)
  • Noting potential defects, questions and comments
  4. Examination/evaluation/recording of results (review meeting)
  • Discussing or logging, with documented results or minutes (for more formal review types)
  • Noting defects, making recommendations regarding handling the defects, making decisions about the defects
  • Examining/evaluating and recording issues during any physical meetings or tracking any group electronic communications
  5. Rework
  • Fixing defects found (typically done by the author)
  • Recording updated status of defects (in formal reviews)
  6. Follow-up
  • Checking that defects have been addressed
  • Gathering metrics
  • Checking on exit criteria (for more formal review types)
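As a rough illustration (not part of the syllabus itself), the six activities can be modeled as an ordered sequence: each stage must complete before the next begins, and follow-up closes the review. The stage names follow the list above; the data structure is purely hypothetical.

```python
# Illustrative sketch only: the six formal-review activities as an
# ordered sequence. Names follow the syllabus; the structure is hypothetical.
FORMAL_REVIEW_STAGES = [
    "Planning",
    "Kick-off",
    "Individual preparation",
    "Review meeting",
    "Rework",
    "Follow-up",
]

def next_stage(current: str):
    """Return the activity that follows `current`, or None after Follow-up."""
    i = FORMAL_REVIEW_STAGES.index(current)
    if i + 1 < len(FORMAL_REVIEW_STAGES):
        return FORMAL_REVIEW_STAGES[i + 1]
    return None

print(next_stage("Rework"))  # Follow-up
```

In practice the entry and exit criteria of the more formal review types act as the gates between these stages.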

3.2.2           Roles and Responsibilities (K1)

A typical formal review will include the roles below:

o Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met.

o Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and following up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.

o Author: the writer or person with chief responsibility for the document(s) to be reviewed.

o Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g., defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.

o Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting.

Looking at software products or related work products from different perspectives and using checklists can make reviews more effective and efficient. For example, a checklist based on various perspectives such as user, maintainer, tester or operations, or a checklist of typical requirements problems may help to uncover previously undetected issues.

3.2.3           Types of Reviews (K2)

A single software product or related work product may be the subject of more than one review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirements specification before a walkthrough with customers. The main characteristics, options and purposes of common review types are:

Informal Review

o No formal process

o May take the form of pair programming or a technical lead reviewing designs and code

o Results may be documented

o Varies in usefulness depending on the reviewers

o Main purpose: inexpensive way to get some benefit


Walkthrough

o Meeting led by author

o May take the form of scenarios, dry runs, peer group participation

o Open-ended sessions

• Optional pre-meeting preparation of reviewers

• Optional preparation of a review report including list of findings

o Optional scribe (who is not the author)

o May vary in practice from quite informal to very formal

o Main purposes: learning, gaining understanding, finding defects

Technical Review

o Documented, defined defect-detection process that includes peers and technical experts with optional management participation

o May be performed as a peer review without management participation

o Ideally led by trained moderator (not the author)

o Pre-meeting preparation by reviewers

o Optional use of checklists

o Preparation of a review report which includes the list of findings, the verdict on whether the software product meets its requirements and, where appropriate, recommendations related to findings

o May vary in practice from quite informal to very formal

o Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems and checking conformance to specifications, plans, regulations, and standards


Inspection

o Led by trained moderator (not the author)

o Usually conducted as a peer examination

o Defined roles

o Includes metrics gathering

o Formal process based on rules and checklists

o Specified entry and exit criteria for acceptance of the software product

o Pre-meeting preparation

o Inspection report including list of findings

o Formal follow-up process (with optional process improvement components)

o Optional reader

o Main purpose: finding defects

Walkthroughs, technical reviews and inspections can be performed within a peer group, i.e., colleagues at the same organizational level. This type of review is called a “peer review”.

3.2.4           Success Factors for Reviews (K2)

Success factors for reviews include:

o Each review has clear predefined objectives

o The right people for the review objectives are involved

o Testers are valued reviewers who contribute to the review and also learn about the product, which enables them to prepare tests earlier

o Defects found are welcomed and expressed objectively

o People issues and psychological aspects are dealt with (e.g., making it a positive experience for the author)

o The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants

o Review techniques are applied that are suitable to achieve the objectives and to the type and level of software work products and reviewers

o Checklists or roles are used if appropriate to increase effectiveness of defect identification

o Training is given in review techniques, especially the more formal techniques such as inspection

o Management supports a good review process (e.g., by incorporating adequate time for review activities in project schedules)

o There is an emphasis on learning and process improvement


3.3   Static Analysis by Tools (K2)


Compiler, complexity, control flow, data flow, static analysis


The objective of static analysis is to find defects in software source code and software models. Static analysis is performed without actually executing the software being examined by the tool; dynamic testing does execute the software code. Static analysis can locate defects that are hard to find in dynamic testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools analyze program code (e.g., control flow and data flow), as well as generated output such as HTML and XML.

The value of static analysis is:

o Early detection of defects prior to test execution

o Early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure

o Identification of defects not easily found by dynamic testing

o Detecting dependencies and inconsistencies in software models such as links

o Improved maintainability of code and design

o Prevention of defects, if lessons are learned in development
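The "high complexity measure" mentioned above can be computed directly from the structure of the code. The following is a toy Python sketch (not a real static analysis tool) that approximates cyclomatic complexity by counting decision points in the abstract syntax tree, without ever running the program under analysis:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 plus the number of decision points
    (branches, loops, exception handlers) found in the parsed source."""
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

sample = """
def grade(score):
    if score >= 90:
        return 'A'
    elif score >= 70:
        return 'B'
    return 'C'
"""
print(cyclomatic_complexity(sample))  # 3 (two branches, plus one)
```

A real tool would apply such a metric per function and warn when an agreed threshold is exceeded, directing reviewers to the code most likely to harbor defects.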

Typical defects discovered by static analysis tools include:

o Referencing a variable with an undefined value

o Inconsistent interfaces between modules and components

o Variables that are not used or are improperly declared

o Unreachable (dead) code

o Missing and erroneous logic (potentially infinite loops)

o Overly complicated constructs

o Programming standards violations

o Security vulnerabilities

o Syntax violation of code and software models
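Several of these defects can be found by purely structural checks on the source. As an illustration (a toy sketch, not a production analyzer), the following Python fragment uses the standard ast module to flag unreachable (dead) code that appears after a return statement:

```python
import ast

def find_unreachable(source: str):
    """Report (function name, line number) for statements that follow a
    return in the same block and can therefore never execute."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            seen_return = False
            for stmt in node.body:
                if seen_return:
                    findings.append((node.name, stmt.lineno))
                if isinstance(stmt, ast.Return):
                    seen_return = True
    return findings

# Deliberately defective input: the print can never run.
buggy = """
def double(x):
    return x * 2
    print("never executed")
"""
print(find_unreachable(buggy))  # [('double', 4)]
```

Note that the defective code is never executed: the analysis inspects only its parsed structure, which is exactly what distinguishes static analysis from dynamic testing.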

Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing or when checking in code to configuration management tools, and by designers during software modeling. Static analysis tools may produce a large number of warning messages, which need to be well-managed to allow the most effective use of the tool.

Compilers may offer some support for static analysis, including the calculation of metrics.