
Code Coverage

Contents

Articles
Code coverage
Intelligent verification
Functional verification

Code coverage
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program
has been tested. Because it inspects the code directly, it is a form of white box testing.[1] Over time, the use of code
coverage has been extended to the field of digital hardware, whose contemporary design methodology relies on
hardware description languages (HDLs).
Code coverage was among the first methods invented for systematic software testing. The first published reference
was by Miller and Maloney in Communications of the ACM in 1963.
Code coverage is one consideration in the safety certification of avionics equipment. The standard by which the
Federal Aviation Administration (FAA) certifies avionics gear is documented in DO-178B.[2]

Coverage criteria
To measure how well the program is exercised by a test suite, one or more coverage criteria are used.

Basic coverage criteria


There are a number of coverage criteria, the main ones being:[3]
• Function coverage - Has each function (or subroutine) in the program been called?
• Statement coverage - Has each statement (each node in the control-flow graph) been executed?
• Decision coverage (or branch coverage) - Has every edge in the control-flow graph been executed? For instance, has
each branch of each control structure (such as if and case statements) been taken both when its condition is satisfied
and when it is not?
• Condition coverage (or predicate coverage) - Has each Boolean sub-expression evaluated both to true and to false?
This does not necessarily imply decision coverage.
• Condition/decision coverage - Both decision and condition coverage are satisfied.
For example, consider the following C++ function:

int foo(int x, int y)
{
    int z = 0;
    if ((x > 0) && (y > 0)) {
        z = x;
    }
    return z;
}

Assume this function is part of a larger program and that the program was run with some test suite.
• If function 'foo' was called at least once during this execution, then function coverage for this function is satisfied.
• Statement coverage is satisfied if the function was called e.g. as foo(1,1), since in this case every line of the
function is executed, including z = x;.
• Tests calling foo(1,1) and foo(1,0) satisfy decision coverage: in the first case the if condition is true and z = x; is
executed, and in the second it is false and z = x; is skipped.
• Condition coverage can be satisfied with tests that call foo(1,1), foo(1,0) and foo(0,0). These are necessary because
in the first two cases (x>0) evaluates to true, while in the third it evaluates to false; at the same time, the first case
makes (y>0) true while the second and third make it false. A small test driver exercising these calls is sketched below.
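A minimal test driver for these calls might look like the following sketch (the assertions and the repeated definition
of foo are for illustration only; no particular test framework is assumed):

    #include <cassert>

    // The function under test, repeated here so the sketch is self-contained.
    int foo(int x, int y) { int z = 0; if ((x > 0) && (y > 0)) { z = x; } return z; }

    int main() {
        // foo(1, 1): satisfies function coverage (foo is called) and statement
        // coverage (every statement runs, including z = x;).
        assert(foo(1, 1) == 1);

        // foo(1, 0): the if condition is false, so together with foo(1, 1)
        // both outcomes of the decision are exercised (decision coverage).
        assert(foo(1, 0) == 0);

        // foo(0, 0): (x > 0) is false; with the calls above, each sub-expression
        // has taken both truth values, as discussed in the text (condition coverage).
        assert(foo(0, 0) == 0);

        return 0;
    }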
In languages like Pascal, where standard Boolean operations are not short-circuited, condition coverage does not
necessarily imply decision coverage. For example, consider the following fragment of code:

if a and b then

Condition coverage can be satisfied by two tests:


• a=true, b=false
• a=false, b=true
However, this set of tests does not satisfy decision coverage, since in neither case is the if condition met.
Fault injection may be necessary to ensure that all conditions and branches of exception handling code have adequate
coverage during testing.

Modified condition/decision coverage


For safety-critical applications (e.g. avionics software) it is often required that modified condition/decision
coverage (MC/DC) is satisfied. This criterion extends condition/decision coverage with the requirement that each
condition must independently affect the decision outcome. For example, consider the following code:

if (a or b) and c then

The condition/decision criterion will be satisfied by the following set of tests:


• a=true, b=true, c=true
• a=false, b=false, c=false
However, the above test set will not satisfy modified condition/decision coverage, since in the first test the value of
'b', and in the second test the value of 'c', would not influence the outcome. So the following test set is needed to
satisfy MC/DC:
• a=false, b=true, c=true
• a=true, b=false, c=true
• a=true, b=true, c=false
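Written out in code, the decision and the test vectors listed above look like this (a minimal C++ sketch; the function
name 'decision' is an illustrative choice, not from the text):

    #include <cassert>

    // The decision from the text: (a or b) and c.
    bool decision(bool a, bool b, bool c) {
        return (a || b) && c;
    }

    int main() {
        // The three MC/DC test vectors listed above, with their outcomes:
        assert(decision(false, true,  true)  == true);
        assert(decision(true,  false, true)  == true);
        assert(decision(true,  true,  false) == false);
        return 0;
    }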

Multiple condition coverage


This criterion requires that all combinations of conditions inside each decision are tested. For example, the code
fragment from the previous section will require the following eight tests (a sketch generating them mechanically
follows the list):
• a=false, b=false, c=false
• a=false, b=false, c=true
• a=false, b=true, c=false
• a=false, b=true, c=true
• a=true, b=false, c=false
• a=true, b=false, c=true
• a=true, b=true, c=false
• a=true, b=true, c=true
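Since the combinations are exhaustive, they can be generated mechanically; the following C++ sketch (illustrative
only, reusing the same hypothetical decision function as above) drives the decision through all eight cases:

    #include <cstdio>

    // The decision from the previous section: (a or b) and c.
    bool decision(bool a, bool b, bool c) {
        return (a || b) && c;
    }

    int main() {
        // Multiple condition coverage: all 2^3 = 8 combinations of the conditions.
        for (int i = 0; i < 8; ++i) {
            bool a = (i & 4) != 0;
            bool b = (i & 2) != 0;
            bool c = (i & 1) != 0;
            std::printf("a=%d b=%d c=%d -> %d\n", a, b, c, decision(a, b, c));
        }
        return 0;
    }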

Other coverage criteria


There are further coverage criteria, which are used less often:
• Linear Code Sequence and Jump (LCSAJ) coverage - Has every LCSAJ been executed?
• JJ-Path coverage - Have all jump-to-jump paths [4] (also known as LCSAJs) been executed?
• Path coverage - Has every possible route through a given part of the code been executed?
• Entry/exit coverage - Has every possible call and return of the function been executed?
• Loop coverage - Has every possible loop been executed zero times, once, and more than once? (A test set
illustrating this is sketched below.)
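For loop coverage in particular, a test set is chosen so that each loop body runs zero times, exactly once, and more
than once. A minimal C++ sketch, where the function and its inputs are purely illustrative:

    #include <cassert>

    // Hypothetical function containing a single loop: sums the integers 1..n.
    int sum_to(int n) {
        int total = 0;
        for (int i = 1; i <= n; ++i) {
            total += i;
        }
        return total;
    }

    int main() {
        assert(sum_to(0) == 0);   // loop body executed zero times
        assert(sum_to(1) == 1);   // loop body executed exactly once
        assert(sum_to(4) == 10);  // loop body executed more than once
        return 0;
    }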
Safety-critical applications are often required to demonstrate that testing achieves 100% of some form of code
coverage.
Some of the coverage criteria above are connected. For instance, path coverage implies decision, statement and
entry/exit coverage. Decision coverage implies statement coverage, because every statement is part of a branch.
Full path coverage, of the type described above, is usually impractical or impossible. Any module with a succession
of n decisions in it can have up to 2^n paths within it; loop constructs can result in an infinite number of paths.
Many paths may also be infeasible, in that there is no input to the program under test that can cause that particular
path to be executed. However, a general-purpose algorithm for identifying infeasible paths has been proven to be
impossible (such an algorithm could be used to solve the halting problem).[5] Methods for practical path coverage
testing instead attempt to identify classes of code paths that differ only in the number of loop executions, and to
achieve "basis path" coverage the tester must cover all the path classes.

In practice
The target software is built with special options or libraries and/or run under a special environment such that every
function that is exercised (executed) in the program(s) is mapped back to the function points in the source code. This
process allows developers and quality assurance personnel to look for parts of a system that are rarely or never
accessed under normal conditions (error handling and the like) and helps reassure test engineers that the most
important conditions (function points) have been tested. The resulting output is then analyzed to see what areas of
code have not been exercised and the tests are updated to include these areas as necessary. Combined with other code
coverage methods, the aim is to develop a rigorous, yet manageable, set of regression tests.
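As a concrete example of such a build, the GNU toolchain can instrument a program for coverage with GCC's
--coverage option and report per-line execution counts with gcov; the commands below are a rough sketch with
placeholder file names, and other toolchains use different options:

    # Compile and link with coverage instrumentation (placeholder file names)
    gcc --coverage -O0 -o my_program my_program.c

    # Run the instrumented program under the test suite; this writes .gcda data files
    ./my_program < test_input.txt

    # Produce an annotated listing (my_program.c.gcov) showing executed lines
    gcov my_program.c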
In implementing code coverage policies within a software development environment one must consider the
following:
• Are there coverage requirements for end-product certification and, if so, what level of code coverage is
required? The typical progression of rigor is: statement, branch/decision, modified condition/decision coverage
(MC/DC), and LCSAJ (Linear Code Sequence and Jump).
• Will code coverage be measured against tests that verify requirements levied on the system under test
(DO-178B)?
• Is the object code generated directly traceable to source code statements? Certain certifications (e.g. DO-178B
Level A) require coverage at the assembly level if this is not the case: "Then, additional verification should be
performed on the object code to establish the correctness of such generated code sequences" (DO-178B
para. 6.4.4.2).[6]
Test engineers can look at code coverage test results to help them devise test cases and input or configuration sets
that will increase the code coverage over vital functions. Two common forms of code coverage used by testers are
statement (or line) coverage and path (or edge) coverage. Line coverage reports on the execution footprint of testing
in terms of which lines of code were executed to complete the test. Edge coverage reports which branches or code
decision points were executed to complete the test. They both report a coverage metric, measured as a percentage.
The meaning of this depends on what form(s) of code coverage have been used, as 67% path coverage is more
comprehensive than 67% statement coverage.

Generally, code coverage tools and libraries exact a performance, memory, or other resource cost that is
unacceptable for normal operation of the software. Thus, they are only used in the lab. As one might expect, there
are classes of software that cannot feasibly be subjected to these coverage tests, though a degree of coverage
mapping can be approximated through analysis rather than direct testing.
There are also some sorts of defects which are affected by such tools. In particular, some race conditions or similarly
real-time-sensitive operations can be masked when run under code coverage environments; conversely, some of
these defects may become easier to find as a result of the additional overhead of the testing code.

Software tools
For C/C++:
• Insure++
• Testwell CTC++
• Trucov
For C# .NET:
• NCover
• Pex and Moles
• PartCover
• DotCover
For Java:
• Clover
• Cobertura
• EMMA
• Jtest
• Serenity
For PHP:
• PHPUnit (requires Xdebug to generate coverage reports)
For Python:
• coverage [7]

Hardware tools
• Aldec
• Cadence Design Systems
• JEDA Technologies
• Mentor Graphics
• Nusym Technology
• Simucad Design Automation
• Synopsys

See also
• Mutation testing
• Software metric
• Regression testing
• Static code analysis
• White box testing
• Modified Condition/Decision Coverage
• Linear Code Sequence and Jump
• Intelligent verification
• Cyclomatic complexity

Notes
[1] Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management (http:/ / www. wiley. com/
WileyCDA/ WileyTitle/ productCd-0470042125. html). Wiley-IEEE Computer Society Press. p. 254. ISBN 0470042125. .
[2] RTCA/DO-178(b), Software Considerations in Airborne Systems and Equipment Certification, Radio Technical Commission for Aeronautics,
December 1, 1992.
[3] Glenford J. Myers (2004). The Art of Software Testing, 2nd edition. Wiley. ISBN 0471469122.
[4] M. R. Woodward, M. A. Hennell, "On the relationship between two control-flow coverage criteria: all JJ-paths and MCDC", Information and
Software Technology 48 (2006) pp. 433-440
[5] Dorf, Richard C.: Computers, Software Engineering, and Digital Devices, Chapter 12, pg. 15. CRC Press, 2006. ISBN 0849373409,
9780849373404; via Google Book Search (http:/ / books. google. com/ books?id=jykvlTCoksMC& pg=PT386& lpg=PT386&
dq="infeasible+ path"+ "halting+ problem"& source=web& ots=WUWz3qMPRv& sig=dSAjrLHBSZJcKWZfGa_IxYlfSNA& hl=en&
sa=X& oi=book_result& resnum=1& ct=result)
[6] Software Considerations in Airborne System and Equipment Certification-RTCA/DO-178B, RTCA Inc., Washington D.C., December 1992
[7] http:/ / nedbatchelder. com/ code/ coverage/

External links
• Code Coverage Analysis (http://www.bullseye.com/coverage.html) by Steve Cornett
• Introduction to Code Coverage (http://www.javaranch.com/newsletter/200401/IntroToCodeCoverage.html)
• Systematic mistake analysis of digital computer programs (http://doi.acm.org/10.1145/366246.366248)
• Comprehensive paper on Code Coverage & tools selection (http://qualinfra.blogspot.com/2010/02/code-coverage.html)
by Vijayan Reddy, Nithya Jayachandran
• Branch Coverage for Arbitrary Languages Made Easy (http://www.semdesigns.com/Company/Publications/TestCoverage.pdf)

Intelligent verification
Intelligent Verification, also referred to as intelligent testbench, is a form of functional verification used to verify
that an electronic hardware design conforms to specification before device fabrication. Intelligent verification uses
information derived from the design and the existing test description to automatically update the test description so
that it targets design functionality not yet verified, or “covered”, by the existing tests.
Intelligent verification software has this key property: given the same test environment, the software will
automatically change the tests to improve functional design coverage in response to changes in the design. Other
properties of intelligent verification may include:
• Providing direction as to why certain coverage points were not detected.
• Automatically tracking paths through design structure to coverage points, to create new tests.
• Ensuring that different aspects of the design are verified only once within the same test set.
“Intelligent Verification” uses existing logic simulation testbenches, and automatically targets and maximizes the
following types of design coverage:
• Code coverage
• Branch coverage
• Expression coverage
• Functional coverage
• Assertion coverage

History
Achieving confidence that a design is functionally correct continues to become more difficult. To counter this
problem, fast logic simulators and specialized hardware description languages such as Verilog and VHDL became
popular in the late 1980s. In the 1990s, constrained random simulation methodologies emerged, using hardware
verification languages such as Vera [1] and ‘e’, as well as SystemVerilog (in 2002), to further improve verification
quality and time.
Intelligent verification approaches supplement constrained random simulation methodologies, which base test
generation on external input rather than on design structure.[2] Intelligent verification is intended to automatically
utilize design knowledge during simulation, which has become increasingly important over the last decade due to
increased design size and complexity, and a growing separation between the engineering team that creates a design
and the team verifying its correct operation.[1]
There has been substantial research into the intelligent verification area, and commercial tools that leverage this
technique are just beginning to emerge.

See also
• Formal verification

Vendors offering Intelligent Verification


• Certess
• Mentor Graphics
• Nusym Technology

Footnotes
[1] "Leveraging Design Insight for Intelligent Verification Methodologies" (http:/ / www. embedded. com/ columns/ technicalinsights/
208401632?_requestid=155930), Embedded, June 2008.
[2] "Constrained random test struggles to live up to promises" (http:/ / www. scdsource. com/ article. php?id=137) SCDSource, March 2008.

References
• "Mentor offers 'intelligent' testbench generation tool" (http://www.scdsource.com/article.php?id=115)
SDCSource, Feb 18, 2008.
• "Nusym focuses on intelligent verification" (http://www.eetimes.com/news/design/showArticle.
jhtml?articleID=207800083) EETimes, May 2008.
• "Lifting the Fog on Intelligent Verification" (http://www.scdsource.com/article.php?id=198) SCDSource, May
2008.

Functional verification
Functional verification, in electronic design automation, is the task of verifying that the logic design conforms to
specification. In everyday terms, functional verification attempts to answer the question "Does this proposed design
do what is intended?" This is a complex task, and takes the majority of time and effort in most large electronic
system design projects.
Functional verification is very difficult - it is equivalent to program verification, and is NP-hard or even worse - and
no solution has been found that works well in all cases. However, it can be attacked by many methods. None of them
are perfect, but each can be helpful in certain circumstances:
• Logic simulation simulates the logic before it is built.
• Simulation acceleration applies special purpose hardware to the logic simulation problem.
• Emulation builds a version of the system using programmable logic. This is expensive, and still much slower than the
real hardware, but orders of magnitude faster than simulation. It can be used, for example, to boot the operating
system on a processor.
• Formal verification attempts to prove mathematically that certain requirements (also expressed formally) are met,
or that certain undesired behaviors (such as deadlock) cannot occur.
• Intelligent verification uses automation to adapt the testbench to changes in the register transfer level code.
• HDL-specific versions of lint, and other heuristics, are used to find common problems.
Simulation-based verification (also called 'dynamic verification') is widely used to "simulate" the design, since this
method scales up very easily. Stimulus is provided to exercise each line in the HDL code. A testbench is built to
functionally verify the design by providing meaningful scenarios to check that, given certain input, the design
performs to specification.
A simulation environment is typically composed of several types of components:

• The generator (or irritator) generates input vectors. Modern generators generate random, biased, and valid stimuli.
The randomness is important to achieve a high distribution over the huge space of the available input stimuli. To
this end, users of these generators intentionally under-specify the requirements for the generated tests. It is the
role of the generator to randomly fill this gap. This mechanism allows the generator to create inputs that reveal
bugs not being searched for directly by the user. Generators also bias the stimuli toward design corner cases to
further stress the logic. Biasing and randomness serve different goals and there are tradeoffs between them, hence
different generators have a different mix of these characteristics. Since the input for the design must be valid
(legal) and many targets (such as biasing) should be maintained, many generators use the Constraint satisfaction
problem (CSP) technique to solve the complex testing requirements. The legality of the design inputs and the
biasing arsenal are modeled. The model-based generators use this model to produce the correct stimuli for the
target design.
• The drivers translate the stimuli produced by the generator into the actual inputs for the design under verification.
Generators create inputs at a high level of abstraction, namely, as transactions or assembly language. The drivers
convert this input into actual design inputs as defined in the specification of the design's interface.
• The simulator produces the outputs of the design, based on the design’s current state (the state of the flip-flops)
and the injected inputs. The simulator has a description of the design net-list. This description is created by
synthesizing the HDL to a low gate level net-list.
• The monitor converts the state of the design and its outputs to the transaction abstraction level so that they can be
stored in a 'scoreboard' database and checked later on.
• The checker validates that the contents of the 'scoreboard' are legal. There are cases where the generator creates
expected results in addition to the inputs; in these cases, the checker must validate that the actual results match
the expected ones.
• The arbitration manager manages all of the above components together. (A code sketch illustrating how these
roles fit together follows the list.)
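The division of labour among these components can be sketched in ordinary code. The C++ outline below is purely
illustrative: the class and method names are assumptions rather than any methodology's actual API, and the "design"
is reduced to a pass-through function so that the flow from generator to checker is visible:

    #include <cstdlib>
    #include <queue>
    #include <random>

    struct Transaction { int payload; };          // a high-level stimulus item

    struct Generator {                            // produces random (but valid) transactions
        std::mt19937 rng{42};
        Transaction next() { return Transaction{ static_cast<int>(rng() % 256) }; }
    };

    struct Driver {                               // translates a transaction into design inputs
        int to_signals(const Transaction& t) { return t.payload; }
    };

    struct Monitor {                              // recovers a transaction from design outputs
        Transaction observe(int design_output) { return Transaction{ design_output }; }
    };

    struct Checker {                              // scoreboard: compares observed vs expected
        std::queue<Transaction> scoreboard;
        void expect(const Transaction& t) { scoreboard.push(t); }
        bool check(const Transaction& seen) {
            if (scoreboard.empty()) return false;
            bool ok = (scoreboard.front().payload == seen.payload);
            scoreboard.pop();
            return ok;
        }
    };

    // Stand-in for the simulated design: here it simply passes data through.
    int simulate_design(int inputs) { return inputs; }

    int main() {
        Generator gen; Driver drv; Monitor mon; Checker chk;
        for (int i = 0; i < 10; ++i) {            // the "arbitration manager" role
            Transaction t = gen.next();
            chk.expect(t);                        // predicted result goes to the scoreboard
            int out = simulate_design(drv.to_signals(t));
            if (!chk.check(mon.observe(out))) return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }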
Different coverage metrics are defined to assess that the design has been adequately exercised. These include
functional coverage (has every functionality of the design been exercised?), statement coverage (has each line of
HDL been exercised?), and branch coverage (has each direction of every branch been exercised?).

Functional Verification Tools


• Avery Design Systems: SimCluster (for parallel logic simulation) and Insight (for formal verification)
• Cadence Design Systems
• EVE/ZeBu
• Mentor Graphics
• Nusym Technology
• Obsidian Software
• Synopsys

See also
• Analog Verification
• Cleanroom Software Engineering

External links
Related articles can be found at:
• http://www.thinkverification.com/
• CFS Vision Project: http://www.cfs-vision.com/
• An IDE for e and SystemVerilog: http://www.dvteclipse.com/