February 2006
Introduction
RTL verification dominates most chip development schedules, with electronics firms pouring up to 70% of
their engineering resources into the task. Unfortunately, this high level of investment hasn't always paid off
in terms of first-silicon success. According to a recent survey1, 71% of all IC designs fail on first silicon
and require at least one re-spin. 60% of these faulty designs have functional errors that could have been
detected with more thorough RTL verification.
Why aren't more designs successful on first silicon given the high level of investment? The answer lies in
the ever-increasing growth of design complexity and the corresponding exponential increase in verification
complexity. The experience of microprocessor development at Sun Microsystems2 illustrates this trend.
During a four-year period, design complexity (measured by number of lines of RTL code) for its SPARC
microprocessors increased by 6x, roughly in line with Moore's Law. However, verification complexity
(measured by number of logic simulation cycles) grew by 100x during the same period.
Keeping up with this staggering growth in verification complexity requires new approaches. This paper
describes how the VCS RTL verification solution from Synopsys runs up to 5x faster than traditional
approaches, enabling fundamental improvements in verification efficiency and thoroughness even for the
most complex system-on-chip (SoC) projects. The result is more predictable verification progress,
converging more quickly toward coverage goals and taping out with a much greater chance of first-silicon
success.
Compiler Optimizations
The biggest disadvantage of having multiple verification tools bolted together is that each tool compiles
or interprets its own language independently. For example, an independent code coverage tool and a
simulator analyze the RTL design separately, reading the code and building their own independent models.
This is inefficient during the analysis/compilation phase and can lead to differences of interpretation for
certain RTL constructs in the design.
The problem is more serious when different pieces of code are analyzed with different tools. For example,
a testbench automation tool analyzes and compiles the testbench code while the simulator compiles the
design itself. This independent analysis means that a single compiler cannot perform optimizations across
the design and testbench. Many decades of research have yielded highly sophisticated forms of compiler
optimization, but these require visibility into the complete body of code for maximum effect.
Synopsys has leveraged this vast compiler experience to apply a wide range of optimizations to RTL code
compiled by VCS. These optimizations speed up RTL simulation and save memory for the design model.
However, these optimizations cannot be extended to testbench, assertion or coverage code as long as
these portions of the verification environment are analyzed by separate tools.
The inclusion of built-in testbench, assertion and coverage capabilities within VCS allows its single NTB
compiler to apply code optimizations to both the design and verification components of the simulation
environment. The same level of performance cannot be achieved by separate compilations, for example,
with a bolt-on verification tool that compiles testbench or assertion code but not the design.
Finally, the NTB integration of testbench, assertion and coverage capabilities within VCS eliminates the
overhead of communication between the simulator and bolt-on tools. This inter-tool communication
traditionally has been accomplished by the Programming Language Interface (PLI) in Verilog designs and
by similar vendor-proprietary approaches for VHDL. Since VCS includes all components of the verification
environment, they share the same model and so data value changes are simply written and read from the
common model. Elimination of PLI helps speed up simulation, although experience has shown that VCS'
optimizations are a much larger factor.
Synopsys measurements on a wide range of customer chips have shown that the combined effect of
optimizations and communication within VCS can typically improve simulation speed up to 5x over bolt-on
tools. As shown in Figure 1, the amount of speedup depends upon the percentage of simulation time
spent in the design versus the testbench. Because of the VCS optimizations that eliminate unnecessary
signals, the design and built-in testbench together usually run faster than the design alone does when
hobbled by a bolt-on testbench.
[Figure 1 chart: stacked bars showing the share of simulation time spent in the testbench versus the design, with overall speedups of 1.4x, 2.7x, 4.4x and 6.6x depending on that split.]
Figure 1: Built-in testbench support can speed up the entire simulation, including the design.
[Figure: a traditional flow, in which the simulator exchanges data values with bolt-on code coverage and testbench tools (drivers and monitors) through the PLI, contrasted with VCS, in which the RTL design, testbench, drivers, monitors, code coverage and assertions all read and write a single shared model.]
VCS is very efficient in its storage allocation for the design and verification data, another benefit of its
sophisticated compiler technology. Synopsys measurements and customer experiences have shown that
VCS can reduce total memory consumption by up to 3x over the combination of a traditional simulator and
bolt-on verification tools.
VCS also has significant ease-of-use advantages. Learning how to use multiple verification tools is always
a challenge, made especially difficult because each tool has its own compile-time switches, run-time
options, command set and user interface. Even after the chip verification team does all the work to
integrate multiple tools into a single flow, engineers are still faced with learning each tool and its user
interface.
In addition to the performance gain and memory reduction, building verification functionality into a
simulator also provides much greater ease of use. In the case of VCS, options to control the design,
testbench, assertions and coverage can all be specified at the same time in a common format. Simulation
results can be viewed using a common interface, the Discovery Visualization Environment (DVE),
minimizing the learning process for design and verification engineers, and allowing them to become productive in
much less time.
Naturally, a set of verification capabilities developed by a single vendor will be better integrated, better
documented, and architected under the guidance of a shared vision and a unified, coherent methodology.
In addition, dealing with a single verification vendor makes it easier to evolve the verification environment,
report and track any problems encountered, and obtain support when needed.
[Figure 4 chart: percent of functionality tested and design quality plotted against time; the constrained-random approach reaches 100% well before the directed approach, yielding the indicated time savings.]
Figure 4: Constrained-random verification is much more efficient than writing directed tests.
Verification engineers strive to complete testing of all functionality in a chip design before tapeout. What
actually happens on many projects is that a tapeout deadline is enforced even when verification is not
complete, leading to a low-quality design with many problems discovered in silicon. The greater rate of
verification convergence provided by the constrained-random approach makes verification completion
much more likely prior to tapeout and yields a much higher-quality result whenever tapeout occurs.
Assertions should at least print out an error message if they are violated during simulation, but they can do
much more. For example, a testbench can report a test failure whenever an assertion fails, even if its own
results checking passes. In other cases, the testbench might want to detect and react to assertion failures
in specific ways, such as dumping out contents of registers that can help with debugging the assertion
violations.
These techniques require close interaction between the testbench and the assertions, which adds
considerable overhead if the design, testbench and assertions are running in different bolt-on tools. Since
all three components can run within VCS, all interaction remains within the simulation kernel and so there's
no performance impact. Thus, VCS supports fast, real-time reactivity to assertion violations.
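As a sketch of how an assertion reports a violation that a testbench can react to, consider the following SystemVerilog assertion with an action block. The protocol, module and signal names (req, ack, and the 3-cycle window) are hypothetical, chosen only to illustrate the mechanism:

```systemverilog
// Hypothetical handshake rule: every request must be acknowledged
// within 1 to 3 clock cycles. All names here are illustrative.
module req_ack_checker (input logic clk, rst_n, req, ack);

  property p_req_gets_ack;
    @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:3] ack;
  endproperty

  // The else clause is the action block: on a violation it reports an
  // error, and a testbench can react in the same way, e.g. by bumping
  // an error count or dumping register state for debug.
  a_req_gets_ack: assert property (p_req_gets_ack)
    else $error("ack did not follow req within 3 cycles at time %0t", $time);

endmodule
```

Because the assertion and the testbench run in the same simulation kernel, the action block fires in the same time step as the violation, with no cross-tool communication.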
Finally, VCS provides several features to help debug assertions. With the visualization capabilities of DVE,
engineers can easily see why assertion violations occur. This makes it possible to distinguish between
design bugs and incorrectly specified assertions, change the appropriate code, and verify the fix when
simulation is rerun.
Coverage metrics serve several purposes:
- They identify holes in the process by highlighting areas of the design that have not yet been sufficiently verified.
- They help to direct the verification effort by indicating what to do next, such as which directed test to write or how to expand constrained-random testing.
- They provide a quantitative measure of verification progress that helps gauge when verification is thorough enough to tape out.
Code coverage
- Statement coverage
- Block coverage
- Line coverage
- Branch coverage
- Path coverage
- Toggle coverage
- Condition coverage
- Finite state machine (FSM) coverage
  - State coverage
  - Transition coverage
  - Sequence coverage

Functional coverage
- Cover properties
- Cover groups
- Assertion coverage
The most widely used form of coverage is code coverage, an automated process that reports whether all
of the code in an RTL design description was exercised during a particular simulation test or set of tests.
Although some tools may equate code coverage with line coverage, line coverage by itself is of limited
value. There may be many ways to reach a particular line of code, and VCS' more sophisticated coverage
metrics allow every possibility to be tracked.
Code coverage was the first verification capability outside of the RTL code itself that VCS supported
natively. Unlike bolt-on code coverage tools, VCS does not require any special pragmas to be added to the
RTL code. In addition, the overhead of running code coverage in VCS is low enough that SoC verification
teams can turn on the metrics for every simulation test, even during regression runs.
While code coverage has significant value, it should be supplemented by functional coverage to provide
metrics that have specific meaning to the verification team. Designers and verification engineers can use
SystemVerilog or OpenVera to explicitly specify functional coverage points for their design that are then
tracked by the simulator along with the other forms of coverage. This allows the team to check whether
tests are exercising the right areas of the design as per the test plan and to measure whether all possible
combinations of stimulus were tried.
SystemVerilog and OpenVera allow design or verification engineers to write cover properties, which look
very much like assertions but represent legal corner-case behavior that the engineers want to track rather
than illegal behavior that should generate an error if it occurs. VCS automatically tracks and reports
whether the specified conditions were exercised along with other forms of coverage.
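A cover property looks syntactically like an assertion but records that a legal corner case occurred rather than flagging an error. The sketch below assumes a hypothetical FIFO with signals fifo_full and rd_en; the names are illustrative, not from any particular design:

```systemverilog
// Hypothetical corner case worth tracking: the FIFO becomes full and is
// read on the very next cycle. This is legal behavior we want to see
// exercised, not an error condition.
module fifo_cover (input logic clk, fifo_full, rd_en);

  // The simulator reports whether this sequence was ever observed,
  // alongside the other coverage metrics.
  c_full_then_read: cover property (
    @(posedge clk) fifo_full ##1 rd_en
  );

endmodule
```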
Cover groups support higher-level functional coverage, for example which values of a range are exercised.
Cover groups are most commonly specified in the testbench rather than the design, and are often used to
judge whether constrained-random stimulus is reaching all desired categories of behavior. Examples
include tracking opcodes in an instruction stream and monitoring transaction types on a complex bus.
VCS supports the cross-coverage constructs of SystemVerilog and OpenVera, for example, to track all
combinations of opcodes and operand types or transaction types and packet sizes.
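The opcode/operand-type example above might be expressed with a covergroup and a cross along the following lines. The enums, class and method names are hypothetical, sketched only to show the SystemVerilog constructs involved:

```systemverilog
// Hypothetical instruction-stream coverage: which opcodes and operand
// types appear, and (via cross coverage) every combination of the two.
typedef enum logic [2:0] {ADD, SUB, LOAD, STORE, BRANCH} opcode_e;
typedef enum logic [1:0] {REG_REG, REG_IMM, MEM}         operand_e;

class instr_coverage;
  opcode_e  opcode;
  operand_e operands;

  covergroup instr_cg;
    cp_opcode   : coverpoint opcode;
    cp_operands : coverpoint operands;
    // Cross coverage: one bin per opcode/operand-type pair, so missing
    // combinations show up as coverage holes.
    x_op_operands : cross cp_opcode, cp_operands;
  endgroup

  function new();
    instr_cg = new();
  endfunction

  // Called by a testbench monitor for each observed instruction.
  function void sample(opcode_e op, operand_e opnd);
    opcode   = op;
    operands = opnd;
    instr_cg.sample();
  endfunction
endclass
```

A monitor in the testbench would call sample() on each instruction, and the simulator then reports which combinations were never exercised.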
VCS also provides assertion coverage, including tracking which assertions passed and which failed during
each simulation test. VCS has an optional mode in which coverage points within assertions can be
automatically tracked. This is very useful to detect situations in which assertions passed simulation only
because the logic related to the assertions (sometimes called assertion preconditions) was not exercised.
The unified coverage features of VCS allow all of the coverage metrics to be generated for a single test,
combining code, functional and assertion coverage together using a user-specified weighting for each
metric. Further, the results for multiple tests can be merged together to yield composite metrics across all
the tests in a regression suite. This provides a comprehensive, quantitative measure of verification
completeness that helps the development teams make the difficult decision of when to tape out.
When the complete chip is assembled, constrained-random simulation with assertions and a wide range
of coverage metrics is essential to verify the design thoroughly and to track verification progress.
Whenever coverage is not sufficient, the verification team modifies constraints or perhaps writes a few
directed tests to fill in the coverage holes. This approach allows the team to reach their coverage goals
more quickly and more predictably.
The chip-level testbench may contain SystemC models for some verification components. In other cases,
for example, when doing performance measurements, some portions of the RTL design might be replaced
by SystemC models. VCS also supports this approach, since it natively compiles SystemC as well as RTL
and verification code.
The complete development process happens entirely within VCS, so the development team can take
advantage of advanced verification techniques without paying for them in performance, memory
utilization or setup complexity.
Using the advanced capabilities of VCS requires a comprehensive, unified methodology to make the most
effective use of each verification technique. The Synopsys Reference Verification Methodology (RVM)
provides just such guidance, covering how to use assertions and coverage, build a sophisticated
constrained-random testbench and develop reusable verification components. RVM is fully compliant
with the industry-standard methodology documented in the book Verification Methodology Manual for
SystemVerilog4, co-authored by experts from ARM and Synopsys.
Conclusion
VCS is leading the industry from the era of inefficient bolt-on verification tools to built-in testbench,
assertion and coverage capabilities. With these technologies encapsulated, VCS is much more than a
simulator; it truly is the verification environment. VCS' native support for verification can yield up to a 5x
performance speedup and a 3x reduction in memory usage over bolt-on tools. VCS enables the thorough
verification of complex SoC designs, in much less time than with other tools and methods. The result is a
predictable verification process with a far higher likelihood of first-silicon success, the goal of every project
manager, designer and verification engineer.
700 East Middlefield Road, Mountain View, CA 94043 T 650 584 5000 www.synopsys.com
Synopsys, VCS and Vera are registered trademarks of Synopsys, Inc. Magellan and OpenVera are trademarks of Synopsys, Inc.
All other trademarks or registered trademarks mentioned in this paper are the intellectual property
of their respective owners and should be treated as such. All rights reserved. Printed in the U.S.A.
2006 Synopsys, Inc. 02/06.CC.06-14014