Verification Architecture for Pre-Silicon Validation
Introduction
The dramatic increase in the number of gates available in modern integrated circuits, coupled with
the use of IP cores and advances in design re-use methodologies, is contributing to larger, more complex
and highly integrated designs. This increase in complexity results in designs that take increasingly more
effort and time to verify. As explained in earlier chapters, verification commonly accounts for 50% to
80% of a chip's development schedule.
RTL (Register Transfer Level) coding in Verilog and/or VHDL is the implementation methodology of choice for
most chips, and RTL simulation is the most common verification methodology for these chips. This chapter
outlines how dynamic simulation-based pre-silicon validation can be performed with a well thought out,
proven verification architecture.
First, let us consider the following facts:
• It takes an inordinate amount of time to build the test benches (a.k.a. "test jigs") needed to test every
level of integration, starting at the module level.
• Even if the modules are thoroughly tested in their own right, integration verification is mandatory and
is a daunting task.
• A chip may have standard bus or interconnect interfaces, and interface verification is a difficult task.
• A chip may have to work with other components in the system, and unexpected system component
interaction can result in system failure.
As a result we observe that, in real practice, most chip development groups perform some module-level
verification, preferring to spend their time in full-chip or system-level verification. While 70% of the chip
development time is spent in verification, 65% of that verification time is spent in the following three
activities:
• Test creation
• Test execution
• Results checking
Test creation and results checking take up the majority of time spent on verification. Test execution time
varies greatly depending on the type of design and on the simulator or hardware emulator used. In the case of
hardware emulators, the overhead of compiling and maintaining the hardware must be traded off against the
raw performance advantage, in simulated clocks, over software simulators.
Raw performance, in terms of the number of clocks simulated per second, does not necessarily translate
into a high-quality verification environment. Designers need to focus on the quality rather than the quantity
of simulation. This means considering how high-quality tests can be created, manually or automatically,
and what trade-offs can be made.
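The automatic side of test creation is commonly done with constrained-random generation: a test becomes a seed plus a transaction count, and quality comes from the constraints rather than hand-written vectors. The following Python sketch illustrates the idea; the transaction fields and constraints are illustrative assumptions, not details from this chapter.

```python
import random

def gen_transaction(rng):
    """Generate one bus transaction under simple constraints."""
    kind = rng.choice(["read", "write"])
    return {
        "kind": kind,
        # Constrain addresses to a word-aligned 1 KiB window.
        "addr": rng.randrange(0, 1024, 4),
        # Only writes carry data; reads return data to be checked.
        "data": rng.randrange(0, 2**32) if kind == "write" else None,
    }

def gen_test(seed, n):
    """A reproducible test is just a seed plus a transaction count."""
    rng = random.Random(seed)
    return [gen_transaction(rng) for _ in range(n)]
```

Because the generator is seeded, any failing test can be reproduced exactly for debugging, which is part of the quality-over-quantity trade-off discussed above.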
The key objective of the verification effort is to ensure that the fabricated chip works in the real environment.
Quite often development groups wait until prototypes are built in order to find out if this objective was met in
a phase called "prototype validation". The cost of not having met this objective can be very severe.
This chapter outlines a strategy for pre-silicon validation that has been used by many blue-chip development
groups to successfully deliver first-pass shippable silicon.
While first-pass shippable silicon also requires that circuit, timing, package and other such aspects of
implementation are correct, this chapter concentrates on functional verification and validation. Functional
verification is one of the primary tasks and normally consumes the majority of the time and effort of the
development project group.
Pre-silicon Validation
Pre-silicon validation is generally performed at a chip, multi-chip or system level. The objective of pre-silicon
validation is to verify the correctness and sufficiency of the design. This approach typically requires modelling
the complete system, where the model of the design under test may be RTL, and other components of the
system may be behavioral or bus functional models. The goal is to subject the DUT (design under test) to
real-world-like input stimuli.
The characteristics of pre-silicon validation are as follows:
Concurrency
Most complex chips have multiple ports or interfaces and there is concurrent, asynchronous and independent
activity at these ports in a real system. A system-level verification environment should be able to create and
handle such real-world concurrency to qualify as a pre-silicon validation environment. Concurrency needs to
be handled in both the test controller and the bus/interface models used.
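Concurrent, independent activity at multiple ports can be sketched with a cooperative scheduler standing in for simulator time. The port names and delays below are illustrative assumptions; the point is that each port's driver runs independently and the transaction streams interleave, as they would in a real system.

```python
import asyncio

async def drive_port(name, txns, delay, log):
    """Drive one port's transaction stream independently of the others."""
    for t in txns:
        await asyncio.sleep(delay)   # stand-in for bus cycle timing
        log.append((name, t))        # transaction reaches the DUT

async def run_system():
    """Run all port drivers concurrently, as a test controller would."""
    log = []
    await asyncio.gather(
        drive_port("cpu", ["rd0", "rd1"], 0.001, log),
        drive_port("dma", ["wr0", "wr1"], 0.0015, log),
    )
    return log
```

In a real environment the event loop is the simulator kernel and the drivers are bus/interface models, but the structure, one independent stimulus process per port coordinated by a test controller, is the same.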
Some models will return data when a transaction completes, so the test controller or environment can do
data checking. Other models require the expected data to be provided up front so the model can do data
checking when the transaction completes. Such behavior impacts the test checking methodology and the
amount of concurrency that can be generated.
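The two checking styles described above can be contrasted in a short sketch. The class names and memory interface are invented for illustration; the distinction that matters is where the expected data lives and when it must be known.

```python
class ReturningModel:
    """Model returns read data; the test environment checks it later."""
    def __init__(self, mem):
        self.mem = mem
    def read(self, addr):
        return self.mem[addr]          # environment compares afterwards

class SelfCheckingModel:
    """Expected data must be supplied up front; the model checks itself."""
    def __init__(self, mem):
        self.mem = mem
        self.expected = {}
    def expect(self, addr, data):
        self.expected[addr] = data     # must be known before the read
    def read(self, addr):
        actual = self.mem[addr]
        assert actual == self.expected[addr], f"mismatch at {addr:#x}"
        return actual
```

With the self-checking style, the environment must predict results before issuing a transaction, which restricts how much concurrent, order-dependent traffic it can generate.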
Results Checking
While it may be relatively easy to generate activity on the different ports or interfaces of a chip, the
difficult part is to implement an automated results or data checking strategy. A system-level pre-silicon
validation environment should relieve designers of maintaining and tracking expected data in test code,
which simplifies the task considerably for a multi-ported system.
• Some processor BFMs simply drive transactions on the bus, but do not automatically handle
deferred transactions and retries, and have no concept of cache coherency. The test
generator, or some layer of intelligence added by the user, has to handle all of the bus-
specific details that could affect the order of transaction completion and, thus, the final
"correct" data values.
• Some models can generate multiple transactions concurrently; some perform only one
transaction at a time. Models that generate multiple transactions simultaneously may
complete them in an arbitrary order depending on bus timing and other concurrent traffic.
If the completion order is non-deterministic, then the test generator has to gain visibility
into the system to determine the final state.
• Some models represent "real" devices and always generate or respond to signals on the
bus with the same timing. To fully validate adherence to a bus protocol, the system must be
tested with all the variations in cycle timing that are allowed by the device specification.
This means that the test generator should be able to change the timing of the models, and
to randomly vary delays and cycle relationships, such as data wait states and snoop stalls.
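The non-deterministic completion order mentioned above is usually handled with a scoreboard that matches completions by transaction identifier rather than by arrival order. The interface below is an illustrative assumption, not the chapter's architecture:

```python
class Scoreboard:
    """Match transaction completions by id, in any order."""
    def __init__(self):
        self.pending = {}   # txn id -> expected data

    def issue(self, txn_id, expected):
        """Record the expected result when a transaction is issued."""
        self.pending[txn_id] = expected

    def complete(self, txn_id, actual):
        """Check a completion; arrival order does not matter."""
        expected = self.pending.pop(txn_id)
        if actual != expected:
            raise AssertionError(f"txn {txn_id}: {actual} != {expected}")

    def drained(self):
        """True once every issued transaction has completed."""
        return not self.pending
```

A drain check at end of test also catches transactions that never completed, which simple cycle-by-cycle comparison would miss.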
Ease of Use
With the considerable pressure already applied to shorten the timescales of most projects, design teams
need to focus on completing designs and delivering shippable chips. It therefore does not make much sense
to send design engineers to a training class to learn a new language or methodology in order to implement a
verification strategy. A better alternative is a verification environment that is easy to use and intuitive, as
this has a dramatic effect on a designer's productivity.
Debugging
A problem typically manifests itself later in simulation time than its root cause, because of the time it
takes the flaw to propagate to an observable point. Ideally, though it is not always practical, a problem
should be caught close to its root cause. A verification environment should therefore facilitate analysis
and make it easy to trace back to the root cause of a problem.
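One common way to catch a problem near its root cause is a protocol monitor that checks an invariant every cycle and fails on the cycle the invariant is violated, rather than waiting for corrupted data to reach a checker. The one-hot grant invariant below is an illustrative example, not one from this chapter:

```python
class OneHotGrantMonitor:
    """Fail on the exact cycle an arbiter asserts multiple grants."""
    def __init__(self):
        self.cycle = 0

    def sample(self, grants):
        """Check the at-most-one-grant invariant for this cycle."""
        self.cycle += 1
        if sum(grants) > 1:
            raise AssertionError(
                f"cycle {self.cycle}: multiple grants asserted {grants}")
```

Because the failure message names the offending cycle, debugging starts at the root cause instead of thousands of cycles downstream where the symptom finally surfaces.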
Configurability
A system-level pre-silicon validation environment should be easily configurable, since multiple different
configurations will need to be tested for a thorough system-level verification.
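Configurability is easiest when the system topology is expressed as data, so a new configuration is a new parameter set rather than a rewritten test bench. The field names below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class SystemConfig:
    """One tested configuration of the simulated system."""
    cpu_count: int = 1
    dram_mb: int = 256
    pcie_lanes: int = 4

def build_env(cfg):
    """Instantiate the environment from a configuration.

    Real environments would construct BFMs and memory models here;
    strings stand in for those model instances in this sketch.
    """
    return {
        "cpus": [f"cpu{i}" for i in range(cfg.cpu_count)],
        "dram_mb": cfg.dram_mb,
        "pcie_lanes": cfg.pcie_lanes,
    }
```

The same tests can then be run across a sweep of configurations by iterating over a list of `SystemConfig` instances.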
Figure 10-1: Verification components

The major components of this verification approach are shown in Figure 10-1.
Conclusion
System-level verification requires a well thought out, proven strategy to achieve shippable first-pass silicon.
This chapter has outlined the key features to consider in a system-level verification environment and the
key components of a verification architecture for pre-silicon validation, with the ultimate goal of
shippable first-pass silicon.