
Verify More in Less Time with VCS

Using VCS native testbench, assertion and coverage capabilities to find complex design bugs quickly

February 2006

© 2006 Synopsys, Inc.

Introduction
RTL verification dominates most chip development schedules, with electronics firms pouring up to 70% of
their engineering resources into the task. Unfortunately, this high level of investment hasn't always paid off
in terms of first-silicon success. According to a recent survey [1], 71% of all IC designs fail on first silicon
and require at least one re-spin. 60% of these faulty designs have functional errors that could have been
detected with more thorough RTL verification.
Why aren't more designs successful on first silicon given the high level of investment? The answer lies in
the ever-increasing growth of design complexity and the corresponding exponential increase in verification
complexity. The experience of microprocessor development at Sun Microsystems [2] illustrates this trend.
During a four-year period, design complexity (measured by number of lines of RTL code) for its SPARC
microprocessors increased by 6x, roughly in line with Moore's Law. However, verification complexity
(measured by number of logic simulation cycles) grew by 100x during the same period.
Keeping up with this staggering growth in verification complexity requires new approaches. This paper
describes how the VCS RTL verification solution from Synopsys runs up to 5x faster than traditional
approaches, enabling fundamental improvements in verification efficiency and thoroughness even for the
most complex system-on-chip (SoC) projects. The result is more predictable verification progress: teams
converge more quickly toward their coverage goals and tape out with a much greater chance of first-silicon
success.

From Bolt-On to Built-In Verification


The current era of functional verification began with the advent of HDL simulation in the late 1980s. HDL
simulators became the primary tool in the verification engineer's toolkit by the early 1990s. In subsequent
years HDL simulator performance and capacity advanced significantly, but verification engineers found
that additional capabilities were needed. This resulted in the emergence of numerous bolt-on tools that
worked with simulation, for example, coverage analysis, assertion checking, and testbench automation.
These tools enabled new methodologies that helped address the growth in verification complexity.
However, these bolt-on tools limited the performance of simulation and created an inefficient, fragmented
solution requiring multiple methodologies and vendors. Project engineers were responsible for integrating
the tools together and trying to develop a coherent verification methodology, a huge investment of time
and effort. As large SoC designs emerged in the late 1990s and early 2000s, the weaknesses of this
fragmented verification approach began to draw the attention of the chip development community. It
became clear that a unified, comprehensive solution was needed.
Synopsys has led the industry away from fragmented verification tools and methods and into the next era
of functional verification. The built-in capabilities of Synopsys' VCS enable a
unified verification approach in which the best of the advanced technologies, including testbench
automation, assertion checking and coverage analysis, are integrated with high-performance simulation.
The key is Synopsys' Native Testbench (NTB) technology, which compiles design, testbench, assertions and
coverage together. NTB unites disparate tools and methodologies in a single, high-performance solution.
Every VCS user is able to deploy the most advanced verification methodologies available, including
constrained-random test generation, powerful assertion checking, and comprehensive coverage analysis.
The benefits of these advanced verification methodologies are clear: engineers can detect more bugs in
less time, and achieve a higher rate of first-pass silicon success. Synopsys users are experiencing these
benefits today: a recent survey of Synopsys verification customers [3] showed that 90% of their designs
were functionally correct on first silicon.
Unifying key verification technologies into a single tool has many advantages in terms of performance and
ease of use. The remainder of this paper explores the features and benefits of VCS' built-in testbench,
assertion and coverage technology, and how they can help verify complex designs more thoroughly in less
time.


Compiler Optimizations
The biggest disadvantage of having multiple verification tools bolted together is that each tool compiles
or interprets its own language independently. For example, an independent code coverage tool and a
simulator analyze the RTL design separately, reading the code and building their own independent models.
This is inefficient during the analysis/compilation phase and can lead to differences of interpretation for
certain RTL constructs in the design.
The problem is more serious when different pieces of code are analyzed with different tools. For example,
a testbench automation tool analyzes and compiles the testbench code while the simulator compiles the
design itself. This independent analysis means that a single compiler cannot perform optimizations across
the design and testbench. Many decades of research have yielded highly sophisticated forms of compiler
optimization, but these require visibility into the complete body of code for maximum effect.
Synopsys has leveraged this vast compiler experience to apply a wide range of optimizations to RTL code
compiled by VCS. These optimizations speed up RTL simulation and save memory for the design model.
However, these optimizations cannot be extended to testbench, assertion or coverage code as long as
these portions of the verification environment are analyzed by separate tools.
The inclusion of built-in testbench, assertion and coverage capabilities within VCS allows its single NTB
compiler to apply code optimizations to both the design and verification components of the simulation
environment. The same level of performance cannot be achieved by separate compilations, for example,
with a bolt-on verification tool that compiles testbench or assertion code but not the design.

Elimination of Unnecessary Signals


One of the most valuable optimizations performed by the VCS compiler is the elimination of design signals
that are not needed in the verification environment. In a typical RTL design, there is a certain amount of
signal redundancy as well as many intermediate signals that help with code readability. A smart compiler
can often optimize away many such signals. The less that a simulator has to track, the faster it will run, so
this form of optimization can lead to significant performance improvement.
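As a simple illustration (a hypothetical code fragment, not drawn from the paper or any customer design), consider the intermediate signals below. If no testbench routine, assertion, coverage point or dump request ever observes valid_d or data_masked, a compiler with full visibility can fold them into the logic for result and avoid scheduling events for them:

    module filter (
      input  logic       clk,
      input  logic       valid,
      input  logic [7:0] data,
      input  logic [7:0] mask,
      output logic [7:0] result
    );
      // Intermediate signals written purely for readability
      logic       valid_d;      // registered copy of valid
      logic [7:0] data_masked;  // data qualified by the mask

      always_ff @(posedge clk) valid_d <= valid;
      assign data_masked = data & mask;

      // If nothing outside this module reads valid_d or data_masked,
      // the compiler can compute result directly from valid, data and mask.
      always_ff @(posedge clk)
        if (valid_d) result <= data_masked;
    endmodule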
The problem arises when simulation optimization algorithms can't determine whether signals are really
needed. This is the case whenever bolt-on tools access the RTL design. Since these interfaces may
require access to any signal in the design, it is not safe to eliminate signals during the optimization
process. The testbench, in particular, might need to read or write any RTL signal at any time. As with other
forms of optimization, the compiler needs to see the full scope of the design and verification environment
for best results.
By directly compiling the testbench, assertions and coverage code, as well as built-in functions such as the
Verilog Change Dump (VCD) facility, VCS can determine which design signals need to be retained.
Eliminating both unnecessary signals and the structures required to read and write these signals from
bolt-on tools gains simulation speed and reduces the memory needed for the simulation model. This also
helps VCS apply optimizations uniformly across the design, testbench, assertions and coverage code.
VCS eliminates unnecessary signals as an intrinsic part of its single compilation pass; no extra steps are
required to learn which signals are needed and then try to optimize them away. Moreover, many signals
that must remain accessible are still optimized by the VCS compiler; bolt-on tools have no opportunity to
perform such optimizations.

Other Performance Benefits


Another performance advantage of VCS arises from faster access to design data in cache memory during
verification. The ability for all components of the verification environment to share a common model means
that most accesses happen in a smaller region of memory than for the independent models and
representations produced by bolt-on verification tools. Thus, with VCS it is more likely that data will be
available in cache rather than main memory.


Finally, the NTB integration of testbench, assertion and coverage capabilities within VCS eliminates the
overhead of communication between the simulator and bolt-on tools. This inter-tool communication
traditionally has been accomplished by the Programming Language Interface (PLI) in Verilog designs and
by similar vendor-proprietary approaches for VHDL. Since VCS includes all components of the verification
environment, they share the same model and so data value changes are simply written and read from the
common model. Elimination of PLI helps speed up simulation, although experience has shown that VCS'
optimizations are a much larger factor.
Synopsys measurements on a wide range of customer chips have shown that the combined effect of
optimizations and communication within VCS can typically improve simulation speed up to 5x over bolt-on
tools. As shown in Figure 1, the amount of speedup depends upon the percentage of simulation time
spent in the design versus the testbench. Because of the VCS optimizations that eliminate unnecessary
signals, usually the design and built-in testbench together run faster than the design itself does when
hobbled by a bolt-on testbench.

[Figure 1: Built-in testbench support can speed up the entire simulation, including the design. Bar chart of normalized simulation time, split between testbench and design, for several designs, with measured speedups of 1.4x, 2.7x, 4.4x and 6.6x from the unified testbench and design.]

Savings in Memory Utilization


The common design and verification models enabled by VCS also lead to significant reductions in the
amount of memory needed for verification. An example is shown in Figure 2, in which a testbench
automation tool must maintain a model (data structure) for the signals in the design that it reads or writes.
Of course, the simulator has its own model for the design, and so this leads to duplication of design data
and a resulting overhead in memory usage. Figure 3 shows the memory efficiency gained by VCS.


[Figure 2: Simulator bolt-on tools create data redundancy. The diagram shows a simulator connected through the PLI to separate code coverage, testbench automation and assertion tools, each holding its own copy of the RTL design and testbench data (drivers, monitors, data values).]

[Figure 3: VCS built-in verification technology eliminates data redundancy. The RTL design, testbench, drivers, monitors, code coverage and assertions all operate on a single shared data model inside VCS.]
VCS is very efficient in its storage allocation for the design and verification data, another benefit of its
sophisticated compiler technology. Synopsys measurements and customer experiences have shown that
VCS can reduce total memory consumption by up to 3x over the combination of a traditional simulator and
bolt-on verification tools.

Additional VCS Benefits


The combination of higher performance and lower memory usage means that VCS can run chip verification
on older CPUs with less memory than competitive solutions, thus preserving the huge investment that
project teams make in their server farms. Likewise, the same advantages allow VCS to handle larger
projects on a given compute platform than other tools can, giving it the capacity for very large designs
with complex verification environments.


VCS also has significant ease-of-use advantages. Learning how to use multiple verification tools is always
a challenge, made especially difficult because each tool has its own compile-time switches, run-time
options, command set and user interface. Even after the chip verification team does all the work to
integrate the tools, engineers are still faced with learning each tool and its user interface.
In addition to the performance gain and memory reduction, building verification functionality into a
simulator also provides much greater ease of use. In the case of VCS, options to control the design,
testbench, assertions and coverage can all be specified at the same time in a common format. Simulation
results can be viewed using a common interface, the Discovery Visualization Environment (DVE), minimizing
the learning process for design and verification engineers, and allowing them to become productive in
much less time.
Naturally, a set of verification capabilities developed by a single vendor will be better integrated, better
documented, and architected under the guidance of a shared vision and a unified, coherent methodology.
In addition, dealing with a single verification vendor makes it easier to evolve the verification environment,
report and track any problems encountered, and obtain support when needed.

VCS' Testbench Capabilities


VCS provides built-in support for testbenches written in SystemVerilog, the industry's first hardware design
and verification language, the OpenVera hardware verification language, or a combination of the two.
Native simulator support for testbenches is critical to achieve the benefits outlined thus far, while language
support for testbench constructs is essential for coding efficiency and the use of advanced verification
techniques.
Hardware verification languages, including OpenVera, exist precisely because RTL design languages are
not expressive enough for modern verification techniques such as constrained-random stimulus
generation. Verilog was enhanced to produce SystemVerilog partly to add these same verification
capabilities. Accordingly, expanding a simulator such as VCS into a full-fledged RTL verification solution
requires constrained-random stimulus-generation support.
Verification engineers have known for many years that they can't possibly think of all possible design
scenarios and manually write directed test cases to verify them. The solution has been to introduce
randomness into stimulus generation, which both automates the process and exercises corner cases that
might not be hit by hand-written directed tests. RTL languages are good enough for coding basic directed
tests, but not powerful enough to use random testing as the foundation for verification.
However, purely random functions can only be used to generate random values unrelated to previous or
subsequent values. Therefore, they cannot be used to generate meaningful stimulus values, such as the
destination address in a network packet or the opcode in a processor instruction. These require
constrained-random stimulus generation, which uses constraints to guide the generation of the values.
Constraints can be efficiently described in any order, then easily added, removed or modified to obtain
different data streams, all without requiring any modification to the stimulus-generation code.
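As a minimal sketch of what this looks like in SystemVerilog (the class, field and constraint names below are hypothetical, chosen only for illustration), a transaction class declares random fields and the constraints that shape them; the generation loop itself never changes when constraints are added, removed or modified:

    class packet;
      rand bit [7:0]  dest_addr;   // destination address of the packet
      rand bit [10:0] length;      // payload length in bytes
      rand bit        is_mcast;    // multicast flag

      // Legal address ranges; multicast packets use the upper block
      constraint c_addr { is_mcast  -> dest_addr inside {[8'hF0:8'hFF]};
                          !is_mcast -> dest_addr inside {[8'h00:8'h7F]}; }

      // Keep lengths legal and bias toward the short-packet corner case
      constraint c_len  { length inside {[1:1500]};
                          length dist { [1:64] := 3, [65:1500] := 1 }; }
    endclass

    // Generation loop (inside a testbench program or module)
    initial begin
      packet p = new();
      repeat (100) begin
        if (!p.randomize()) $error("constraint conflict");
        // drive p.dest_addr, p.length and p.is_mcast onto the DUT interface
      end
    end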
As shown in Figure 4, the constrained-random approach converges to complete verification more quickly
and predictably than hand-written directed tests. An initial effort is required to set up the constraints,
typically followed by some refinement, and then the automatically-generated tests rapidly converge on the
coverage goals.


[Figure 4: Constrained-random verification is much more efficient than writing directed tests. The graph plots percent of functionality tested and design quality versus time; the constrained-random approach reaches 100% well ahead of the directed approach, with the gap labeled as the time savings.]

Verification engineers strive to complete testing of all functionality in a chip design before tapeout. What
actually happens on many projects is that a tapeout deadline is enforced even when verification is not
complete, leading to a low-quality design with many problems discovered in silicon. The greater rate of
verification convergence provided by the constrained-random approach makes verification completion
much more likely prior to tapeout and yields a much higher-quality result whenever tapeout occurs.

Synopsys' Constrained-Random Technology


Constrained-random capability requires the ability to specify constraints efficiently plus constraint-solver
engines to generate high-quality stimulus within the bounds of these constraints. Synopsys provides
industry-leading capabilities in both of these areas. OpenVera is a powerful hardware verification language
that provides all the constructs necessary to describe constraints in an efficient and natural manner.
OpenVera strongly influenced the verification capabilities of SystemVerilog, including constraint specification.
Synopsys' Vera testbench automation tool pioneered a very powerful constraint solver that generates
comprehensive, high-coverage stimulus even in the presence of complex constraints. More recently,
Synopsys' Pioneer-NTB leveraged the Vera technology to add support for SystemVerilog as well as
OpenVera.
The same production-proven constraint solver used in Vera and Pioneer-NTB is included in VCS, so that
testbenches running entirely within VCS have the same capabilities. The constraint solver actually uses
multiple engines and analyzes constraints in parallel. This allows solutions to combinations of constraints
across verification objects. Many other constraint solvers handle only one constraint at a time, serially,
which severely limits their ability to find solutions to complex, interacting sets of constraints and often
generates false constraint conflicts that are time-consuming to diagnose and work around.
The VCS solver never reports false constraint contradictions; it is guaranteed to find a constraint solution if
one exists and if it can be found in the time allocated. If constraint conflicts prevent a solution, VCS makes
it easy to diagnose and fix the error by calculating and displaying the minimal conflicting subset of constraints.
Given the complex nature of SoC protocols, many constraint-driven stimulus sequences simply cannot be
set up, debugged, or generated by any non-Synopsys tool or method.


VCS' Assertion Capabilities


One key aspect of effective verification is getting the designers involved in the design-for-verification
(DFV) process early. Effective DFV entails a number of changes in the way that designers do their job and
interact with the verification team. These may include following specific design rules to make verification
easier or employing advanced RTL coding methods, such as using the SystemVerilog design constructs
supported by VCS, that reduce the likelihood of errors in the code.
One essential part of DFV is a way for designers to express their intent concisely and naturally as part of
the design process, so that it can be objectively verified. They express this intent in the form of assertions,
ideally added by the designers in the process of writing their RTL code. Verification engineers often add
additional assertions during the verification process, especially on interfaces for which specifications are
available.
Assertions make block-level verification with simulation or formal analysis more effective, another important
part of good DFV practice. During chip-level simulations, assertions pinpoint design errors at their source,
reducing the time it takes to diagnose and fix bugs. Assertions also have value just by their very existence:
since they document intent while it is still fresh in the designer's mind, they are very valuable if the design
is reused or even if the original designer revisits the code after an extended period of time.
Various forms of assertions and assertion-checker libraries have been used by leading-edge chip
development teams for quite a few years. Simple assertions can be written directly in RTL code, although
capabilities are limited. More sophisticated forms of assertions have usually been implemented as plug-in
verification components. These suffer from the same performance degradation and memory overhead
problems as other bolt-on tools and models.
VCS completely avoids these issues by natively supporting two powerful formats: SystemVerilog assertions
and OpenVera assertions. These assertions are read by the same compiler that handles the RTL and
testbench code, so that the full range of compiler optimizations can include assertions as well as the design
and testbench. In addition, both of these assertion formats can be specified in the testbench, in a separate
file, or in-line within the RTL code itself, taking advantage of the context of the surrounding design.
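For example (a hypothetical, designer-written check, not taken from the paper), a SystemVerilog assertion placed in-line in the RTL can state that every request is granted within four cycles, using the clock and reset already in scope:

    // Embedded in the RTL module, next to the logic it documents
    property p_req_gets_gnt;
      @(posedge clk) disable iff (!rst_n)
        req |-> ##[1:4] gnt;
    endproperty

    assert property (p_req_gets_gnt)
      else $error("request not granted within 4 cycles");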
These same assertions can be used as properties with formal analysis tools such as Synopsys' Magellan
to enable even more extensive verification. This avoids having to re-write assertions in some specialized
property language to take advantage of formal tools. The ability to leverage a single description for
multiple verification approaches saves time and effort.
Synopsys includes with VCS a library of assertion checkers for common types of design structures and
interface rules. These checkers provide a very valuable way to capture complex assertions without having
to write a lot of code. Since engineers can make mistakes when writing assertions just as they do when
writing design or testbench code, a pre-verified library reduces the number of incorrectly-specified
assertions. Available in both SystemVerilog and OpenVera, the assertion checkers are optimized for
efficient simulation with VCS and formal analysis with Magellan.
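In practice such checkers are typically attached without editing the design source, for example with the SystemVerilog bind construct; the checker and port names below are illustrative placeholders rather than the actual library naming:

    // Attach a hypothetical FIFO-overflow checker to every instance of fifo_ctrl
    bind fifo_ctrl fifo_overflow_checker #(.DEPTH(16)) u_fifo_chk (
      .clk   (clk),
      .rst_n (rst_n),
      .push  (wr_en),
      .pop   (rd_en)
    );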
Synopsys also provides a library of assertion IP for standard protocols such as PCI, PCI-X, AMBA and
USB. Capturing the complete set of protocol rules with assertions enables both simulation and formal
analysis for verification of designs using these standard interfaces. Both the VCS assertion checkers and
assertion IP contain functional coverage points that supplement those specified by design and verification
engineers.


Assertions should at least print out an error message if they are violated during simulation, but they can do
much more. For example, a testbench can report a test failure whenever an assertion fails, even if the
results-checking passes. In other cases, the testbench might want to detect and react to assertion failures
in specific ways, such as dumping out contents of registers that can help with debugging the assertion
violations.
These techniques require close interaction between the testbench and the assertions, which adds
considerable overhead if the design, testbench and assertions are running in different bolt-on tools. Since
all three components can run within VCS, all interaction remains within the simulation kernel and so there's
no performance impact. Thus, VCS supports fast, real-time reactivity to assertion violations.
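A sketch of this reactivity (using hypothetical signal and task names) is an assertion whose action block calls a testbench routine, so a failure can immediately mark the test as failed or trigger extra debug output:

    module tb_monitor (input logic clk, rst_n, req, grant);
      bit test_failed;

      // Testbench routine invoked when an assertion fires
      task automatic on_assert_fail(string msg);
        test_failed = 1;   // force the overall test result to FAIL
        $display("ASSERTION FAILED at %0t: %s", $time, msg);
        // could also dump register contents or stop the simulation here
      endtask

      // The assertion's action block calls straight into the testbench task
      assert property (@(posedge clk) disable iff (!rst_n) grant |-> req)
        else on_assert_fail("grant asserted without a request");
    endmodule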
Finally, VCS provides several features to help debug assertions. With the visualization capabilities of DVE,
engineers can easily see why assertion violations occur. This makes it possible to distinguish between
design bugs and incorrectly specified assertions, change the appropriate code, and verify the fix when
simulation is rerun.

VCS' Coverage Capabilities


Coverage metrics serve three critical roles in all verification environments:

• They identify holes in the process by identifying areas of the design that have not yet been sufficiently verified.
• They help to direct the verification effort by indicating what to do next, such as which directed test to write or how to expand constrained-random testing.
• They provide a quantitative measure of verification progress that helps gauge when verification is thorough enough to tape out.

VCS provides a comprehensive set of unified coverage metrics:

• Code coverage
  - Statement coverage
  - Block coverage
  - Line coverage
  - Finite state machine (FSM) coverage: state, transition and sequence coverage
  - Branch coverage
  - Path coverage
  - Toggle coverage
  - Condition coverage
• Functional coverage
  - Cover properties
  - Cover groups
  - Assertion coverage


The most widely used form of coverage is code coverage, an automated process that reports whether all
of the code in an RTL design description was exercised during a particular simulation test or set of tests.
Although some tools may equate code coverage with line coverage, line coverage by itself is of limited
value. There may be many ways to reach a particular line of code, and VCS' more sophisticated coverage
metrics allow every possibility to be tracked.
Code coverage was the first verification capability outside of the RTL code itself that VCS supported
natively. Unlike bolt-on code coverage tools, VCS does not require any special pragmas to be added to the
RTL code. In addition, the overhead of running code coverage in VCS is low enough that SoC verification
teams can turn on the metrics for every simulation test, even during regression runs.
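As a rough sketch of the flow (the exact switches and defaults should be checked against the VCS documentation for the release in use), coverage is requested with the -cm option at both compile time and run time:

    # Compile the design and testbench with coverage metrics enabled
    vcs -cm line+cond+fsm+tgl design.v testbench.sv

    # Run a test, recording its coverage data under a test-specific name
    ./simv -cm line+cond+fsm+tgl -cm_name test_001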
While code coverage has significant value, it should be supplemented by functional coverage to provide
metrics that have specific meaning to the verification team. Designers and verification engineers can use
SystemVerilog or OpenVera to explicitly specify functional coverage points for their design that are then
tracked by the simulator along with the other forms of coverage. This allows the team to check whether
tests are exercising the right areas of the design as per the test plan and to measure whether all possible
combinations of stimulus were tried.
SystemVerilog and OpenVera allow design or verification engineers to write cover properties, which look
very much like assertions but represent legal corner-case behavior that the engineers want to track rather
than illegal behavior that should generate an error if it occurs. VCS automatically tracks and reports
whether the specified conditions were exercised along with other forms of coverage.
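A small sketch (with hypothetical signals) of a cover property records how often a legal but rare scenario occurs, such as back-to-back retries immediately after a grant:

    // Legal corner case to be counted, not an error condition
    cover property (@(posedge clk) disable iff (!rst_n)
                    gnt ##1 retry ##1 retry)
      $display("corner case hit at %0t: back-to-back retry after grant", $time);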
Cover groups support higher-level functional coverage, for example which values of a range are exercised.
Cover groups are most commonly specified in the testbench rather than the design, and are often used to
judge whether constrained-random stimulus is reaching all desired categories of behavior. Examples
include tracking opcodes in an instruction stream and monitoring transaction types on a complex bus.
VCS supports the cross-coverage constructs of SystemVerilog and OpenVera, for example, to track all
combinations of opcodes and operand types or transaction types and packet sizes.
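A sketch of such a cover group (with hypothetical transaction fields) might bin the opcode and packet size of each transaction and cross the two, so the report shows which combinations the constrained-random stimulus has reached:

    class bus_txn;
      rand bit [3:0]  opcode;
      rand bit [10:0] size;

      covergroup cg_txn;
        cp_op   : coverpoint opcode {
                    bins read  = {4'h0};
                    bins write = {4'h1};
                  }
        cp_size : coverpoint size {
                    bins small  = {[1:64]};
                    bins medium = {[65:512]};
                    bins large  = {[513:1500]};
                  }
        op_x_size : cross cp_op, cp_size;  // all opcode/size combinations
      endgroup

      function new();
        cg_txn = new();   // embedded covergroup is constructed in the class constructor
      endfunction
    endclass

    // Usage: after each randomized transaction t, call t.cg_txn.sample();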
VCS also provides assertion coverage, including tracking which assertions passed and which failed during
each simulation test. VCS has an optional mode in which coverage points within assertions can be
automatically tracked. This is very useful to detect situations in which assertions passed simulation only
because the logic related to the assertions (sometimes called assertion preconditions) was not exercised.
The unified coverage features of VCS allow all of the coverage metrics to be generated for a single test,
combining code, functional and assertion coverage together using a user-specified weighting for each
metric. Further, the results for multiple tests can be merged together to yield composite metrics across all
the tests in a regression suite. This provides a comprehensive, quantitative measure of verification
completeness that helps the development teams make the difficult decision of when to tape out.

Putting the Capabilities Together


The combined capabilities of VCS enable a complete RTL verification environment with a single tool.
Designers capture their designs in RTL form, adding assertions at the same time to capture their design
intent. Verification engineers typically get involved once a group of blocks is assembled into a major
sub-unit of the chip, often called a cluster.
At this stage, the verification engineers will establish a test plan, set up a testbench, define appropriate
constraints, and run constrained-random stimulus generation to verify proper operation of the cluster. They
may specify additional assertions for the design and establish links with the testbench to detect assertion
violations and react as desired. The verification team may also run code coverage and other coverage
metrics, although some teams wait until the full-chip level.

When the complete chip is assembled, constrained-random simulation running with assertions and a wide
range of coverage metrics are essential to verify the design thoroughly and to track verification progress.
Whenever coverage is not sufficient, the verification team modifies constraints or perhaps writes a few
directed tests to fill in the coverage holes. This approach allows the team to reach their coverage goals
more quickly and more predictably.
The chip-level testbench may contain SystemC models for some verification components. In other cases,
for example, when doing performance measurements, some portions of the RTL design might be replaced
by SystemC models. VCS also supports this approach, since it natively compiles SystemC as well as RTL
and verification code.
The complete development process happens entirely within VCS, so that the development team can take
advantage of advanced verification techniques without paying for them in terms of performance, memory
utilization or setup complexity.
Using the advanced capabilities of VCS requires a comprehensive, unified methodology to make the most
effective use of each verification technique. The Synopsys Reference Verification Methodology (RVM)
provides just such guidance, covering how to use assertions and coverage, build a sophisticated
constrained-random testbench and develop reusable verification components. RVM is fully compliant
with the industry-standard methodology documented in the book Verification Methodology Manual for
SystemVerilog [4], co-authored by experts from ARM and Synopsys.

Conclusion
VCS is leading the industry from the era of inefficient bolt-on verification tools to built-in testbench,
assertion and coverage capabilities. With these technologies built in, VCS is much more than a
simulator; it truly is the verification environment. VCS' native support for verification can yield up to a 5x
performance speedup and a 3x reduction in memory usage over bolt-on tools. VCS enables thorough
verification of complex SoC designs in much less time than with other tools and methods. The result is a
predictable verification process with a far higher likelihood of first-silicon success, the goal of every project
manager, designer and verification engineer.

1. Collett International Research, Inc., 2005 IC/ASIC Design Closure Study.
2. Case study presented at the April 11, 2000 EDAC meeting by Anant Agrawal of Sun Microsystems.
3. 2004 survey of Vera users by Synopsys, Inc.
4. Verification Methodology Manual for SystemVerilog, published in 2004 by Springer Science+Business Media, Inc.

700 East Middlefield Road, Mountain View, CA 94043 T 650 584 5000 www.synopsys.com

Synopsys, VCS and Vera are registered trademarks of Synopsys, Inc. Magellan and OpenVera are trademarks of Synopsys, Inc.
All other trademarks or registered trademarks mentioned in this paper are the intellectual property
of their respective owners and should be treated as such. All rights reserved. Printed in the U.S.A.
© 2006 Synopsys, Inc. 02/06.CC.06-14014
