ISTQB Syllabus

1. Fundamentals of Testing (K2)

1.1   Why is Testing Necessary (K2)

Terms

Bug, defect, error, failure, fault, mistake, quality, risk

1.1.1 Software Systems Context (K1)

Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.

1.1.2 Causes of Software Defects (K2)

A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn’t), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.
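
For example, a minimal Python sketch (the function and values are hypothetical, not part of the syllabus) shows how an error leaves a defect in the code that only causes a failure when the defective statement is executed with certain inputs:

    def average(values):
        # Defect: the programmer's error (mistake) left a hard-coded divisor;
        # it should be len(values).
        return sum(values) / 2

    print(average([4, 6]))     # 5.0 -- defect present, but coincidentally correct: no failure
    print(average([4, 6, 8]))  # 9.0 instead of 6.0 -- executing the defect causes a failure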

Defects occur because human beings are fallible and because there is time pressure, complex code, complexity of infrastructure, changing technologies, and/or many system interactions.

Failures can be caused by environmental conditions as well. For example, radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing the hardware conditions.

1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)

Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if the defects found are corrected before the system is released for operational use.

Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.

1.1.4 Testing and Quality (K2)

With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g., reliability, usability, efficiency, maintainability and portability). For more information on non-functional testing see Chapter 2; for more information on software characteristics see ‘Software Engineering – Software Product Quality’ (ISO 9126).

Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.

Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.

Testing should be integrated as one of the quality assurance activities (i.e., alongside development standards, training and defect analysis).

1.1.5 How Much Testing is Enough? (K2)

Deciding how much testing is enough should take account of the level of risk, including technical, safety, and business risks, and project constraints such as time and budget. Risk is discussed further in Chapter 5.

Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.

 

1.2   What is Testing? (K2)

Terms

Debugging, requirement, review, test case, testing, test objective

Background

A common perception of testing is that it only consists of running tests, i.e., executing the software. This is part of testing, but not all of the testing activities.

Test activities exist before and after test execution. These activities include planning and control, choosing test conditions, designing and executing test cases, checking results, evaluating exit criteria, reporting on the testing process and system under test, and finalizing or completing closure activities after a test phase has been completed. Testing also includes reviewing documents (including source code) and conducting static analysis.

Both dynamic testing and static testing can be used as a means for achieving similar objectives, and will provide information that can be used to improve both the system being tested and the development and testing processes.

Testing can have the following objectives:

o Finding defects

o Gaining confidence about the level of quality

o Providing information for decision-making

o Preventing defects

The thought process and activities involved in designing tests early in the life cycle (verifying the test basis via test design) can help to prevent defects from being introduced into code. Reviews of documents (e.g., requirements) and the identification and resolution of issues also help to prevent defects appearing in the code.

Different viewpoints in testing take different objectives into account. For example, in development testing (e.g., component, integration and system testing), the main objective may be to cause as many failures as possible so that defects in the software are identified and can be fixed. In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements. In some cases the main objective of testing may be to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders of the risk of releasing the system at a given time. Maintenance testing often includes testing that no new defects have been introduced during development of the changes. During operational testing, the main objective may be to assess system characteristics such as reliability or availability.

Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyzes and removes the cause of the failure. Subsequent re-testing by a tester ensures that the fix does indeed resolve the failure. The responsibility for these activities is usually: testers test and developers debug.
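
As an illustration (a hypothetical sketch; the function and values are invented), the same small piece of code can be seen through both activities, with testing observing the failure, debugging removing its cause, and re-testing confirming the fix:

    def discount_buggy(price, percent):
        return price - price * percent / 10    # defect: divides by 10 instead of 100

    def discount_fixed(price, percent):
        return price - price * percent / 100   # cause removed during debugging

    # Testing (tester) shows the failure: 200 with a 10% discount yields 0.0, not 180.0.
    assert discount_buggy(200, 10) != 180
    # Debugging (developer) traced the failure to the wrong divisor and removed it.
    # Re-testing (tester) confirms that the fix resolves the failure.
    assert discount_fixed(200, 10) == 180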

The process of testing and the testing activities are explained in Section 1.4.

 

1.3   Seven Testing Principles (K2)

Terms

Exhaustive testing

Principles

A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing.

Principle 1 – Testing shows presence of defects

Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
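
A back-of-the-envelope calculation (the numbers are invented for illustration) shows why exhaustive testing is impossible even for a tiny interface:

    combinations = (2 ** 32) ** 3           # three independent 32-bit integer inputs
    tests_per_second = 1_000_000            # an optimistic automated execution rate
    seconds_per_year = 60 * 60 * 24 * 365

    years = combinations / (tests_per_second * seconds_per_year)
    print(f"{combinations:.2e} combinations would take about {years:.1e} years")

Roughly 7.9 x 10^28 input triples would take on the order of 10^15 years to execute, which is why risk and priorities must focus the testing effort instead.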

Principle 3 – Early testing

To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.

Principle 4 – Defect clustering

Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.

Principle 5 – Pesticide paradox

If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this “pesticide paradox”, test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.

Principle 6 – Testing is context dependent

Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy

Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.

 

1.4   Fundamental Test Process (K1)

Terms

Confirmation testing, re-testing, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test suite, test summary report, testware

Background

The most visible part of testing is test execution. But to be effective and efficient, test plans should also include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating results.

The fundamental test process consists of the following main activities:

o Test planning and control

o Test analysis and design

o Test implementation and execution

o Evaluating exit criteria and reporting

o Test closure activities

Although logically sequential, the activities in the process may overlap or take place concurrently. Tailoring these main activities within the context of the system and the project is usually required.

1.4.1 Test Planning and Control (K1)

Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.

Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, the testing activities should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.

Test planning and control tasks are defined in Chapter 5 of this syllabus.

1.4.2 Test Analysis and Design (K1)

Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases.

The test analysis and design activity has the following major tasks:

o Reviewing the test basis (such as requirements, software integrity level1 (risk level), risk analysis reports, architecture, design, interface specifications)

o Evaluating testability of the test basis and test objects

o Identifying and prioritizing test conditions based on analysis of test items, the specification, behavior and structure of the software

o Designing and prioritizing high level test cases

o Identifying necessary test data to support the test conditions and test cases

o Designing the test environment setup and identifying any required infrastructure and tools

o Creating bi-directional traceability between test basis and test cases (a minimal sketch follows the footnote below)

1 The degree to which software complies or must comply with a set of stakeholder-selected software and/or software-based system characteristics (e.g., software complexity, risk assessment, safety level, security level, desired performance, reliability, or cost) which are defined to reflect the importance of the software to its stakeholders.
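
The traceability task above can be pictured as a simple two-way mapping. The following Python sketch uses invented requirement and test case identifiers:

    # Forward trace: each item of the test basis maps to the test cases covering it.
    forward = {
        "REQ-001": ["TC-001", "TC-002"],
        "REQ-002": ["TC-003"],
    }

    # Backward trace, derived automatically, so a failing test case can be
    # traced back to the requirement it covers.
    backward = {tc: req for req, tcs in forward.items() for tc in tcs}

    print(backward["TC-003"])                                 # -> REQ-002
    print([req for req, tcs in forward.items() if not tcs])   # uncovered requirements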

1.4.3 Test Implementation and Execution (K1)

Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.

Test implementation and execution has the following major tasks:

o Finalizing, implementing and prioritizing test cases (including the identification of test data)

o Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts

o Creating test suites from the test procedures for efficient test execution

o Verifying that the test environment has been set up correctly

o Verifying and updating bi-directional traceability between the test basis and test cases

o Executing test procedures either manually or by using test execution tools, according to the planned sequence

o Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware

o Comparing actual results with expected results (a minimal sketch follows this list)

o Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)

o Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
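
As a small illustration of comparing actual with expected results during execution, the following sketch uses Python's standard unittest module; the function under test and its values are hypothetical. The runner logs each outcome, and a mismatch would be reported as an incident:

    import unittest

    def apply_vat(net):                          # hypothetical unit under test
        return round(net * 1.20, 2)

    class VatTests(unittest.TestCase):
        def test_standard_rate(self):
            # The actual result is compared with the expected result; a
            # mismatch is logged by the runner as a failure.
            self.assertEqual(apply_vat(10.00), 12.00)

        def test_zero_amount(self):
            self.assertEqual(apply_vat(0.00), 0.00)

    if __name__ == "__main__":
        unittest.main()                          # executes the suite and logs outcomes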

1.4.4 Evaluating Exit Criteria and Reporting (K1)

Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level (see Section 2.2).

Evaluating exit criteria has the following major tasks:

o Checking test logs against the exit criteria specified in test planning (a minimal sketch follows this list)

o Assessing if more tests are needed or if the exit criteria specified should be changed

o Writing a test summary report for stakeholders
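
A minimal sketch of such a check, with invented threshold values, compares measured results against the criteria specified in test planning:

    criteria = {"min_statement_coverage": 0.80, "max_open_critical_defects": 0}
    results = {"statement_coverage": 0.83, "open_critical_defects": 1}

    met = (results["statement_coverage"] >= criteria["min_statement_coverage"]
           and results["open_critical_defects"] <= criteria["max_open_critical_defects"])

    # Here coverage passes but one critical defect remains open, so either more
    # testing is needed or the criteria must be formally changed.
    print("exit criteria met" if met else "exit criteria not met")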

1.4.5 Test Closure Activities (K1)

Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.

Test closure activities include the following major tasks:

o Checking which planned deliverables have been delivered

o Closing incident reports or raising change records for any that remain open

o Documenting the acceptance of the system

o Finalizing and archiving testware, the test environment and the test infrastructure for later reuse

o Handing over the testware to the maintenance organization

o Analyzing lessons learned to determine changes needed for future releases and projects

o Using the information gathered to improve test maturity

 

1.5   The Psychology of Testing (K2)

Terms

Error guessing, independence

Background

The mindset to be used while testing and reviewing is different from that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.

A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. Several levels of independence can be defined as shown here from low to high:

o Tests designed by the person(s) who wrote the software under test (low level of independence)

o Tests designed by another person(s) (e.g., from the development team)

o Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)

o Tests designed by a person(s) from a different organization or company (i.e., outsourcing or certification by an external body)

People and projects are driven by objectives. People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software meets its objectives. Therefore, it is important to clearly state the objectives of testing.

Identifying failures during testing may be perceived as criticism against the product and against the author. As a result, testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.

If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to defects found during reviews as well as in testing.

 

The tester and test leader need good interpersonal skills to communicate factual information about defects, progress and risks in a constructive way. For the author of the software or document, defect information can help them improve their skills. Defects found and fixed during testing will save time and money later, and reduce risks.

Communication problems may occur, particularly if testers are seen only as messengers of unwanted news about defects. However, there are several ways to improve communication and relationships between testers and others:

o Start with collaboration rather than battles – remind everyone of the common goal of better quality systems

o Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it, for example, write objective and factual incident reports and review findings

o Try to understand how the other person feels and why they react as they do

o Confirm that the other person has understood what you have said and vice versa

 

1.6   Code of Ethics

Involvement in software testing enables individuals to learn confidential and privileged information. A code of ethics is necessary, among other reasons to ensure that the information is not put to inappropriate use. Recognizing the ACM and IEEE code of ethics for engineers, the ISTQB states the following code of ethics:

PUBLIC - Certified software testers shall act consistently with the public interest

CLIENT AND EMPLOYER - Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest

PRODUCT - Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible

JUDGMENT - Certified software testers shall maintain integrity and independence in their professional judgment

MANAGEMENT - Certified software test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing

PROFESSION - Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest

COLLEAGUES - Certified software testers shall be fair to and supportive of their colleagues, and promote cooperation with software developers

SELF - Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.