
WHITE PAPER

Applying Assertion-Based Formal Verification
to Verification Hot Spots

Ping Yeung, Ph.D.
Sundaram Subramanian
Mentor Graphics Corporation

www.mentor.com
Introduction
The complexity of modern SoC designs has created a verification crisis. Engineers cannot imagine all
of the possible corner-case behaviors, let alone write tests to exercise them. The only way to address
the increased complexity is to supplement traditional functional verification methods by combining
assertions, simulation, and formal techniques in a process called assertion-based verification (ABV).
Overall, ABV addresses two high-level verification challenges:
1. Improving the thoroughness of the coverage-driven verification methodology.
2. Reducing the number of functional bugs missed by traditional verification methodology.
Various papers have discussed and addressed the first challenge[1,2,3,4]. They concur that adding ABV to
a coverage-driven methodology augments the base coverage metric with the coverage attained by
assertions. As assertions monitor the functionality of the design, they measure the thoroughness of the
directed or pseudo-random test environment.
In this white paper, we address the second challenge: How can you use assertions and formal
verification to find bugs missed by traditional verification methodologies?
Based on our experience helping many design teams deploy assertions and formal verification [6,16,17],
we recommend deploying ABV (including formal model checking) on the most salient verification hot
spots in a design[19], following a seven-step formal verification planning process[15]. By focusing ABV
on verification hot spots, a design team can adopt ABV incrementally as they continue to use their
simulation-based methodology. This has the added benefit of minimizing the risks involved with
adopting a new methodology while maximizing the return-on-investment.

Assertion-Based Verification
Assertions increase the observability of the design. When used in a simulation-based verification
environment, embedded assertions catch design issues locally, bringing them to the attention of the
users immediately as they happen. Assertions allow hard-to-reach, hard-to-verify logic and critical
functionality to be identified as coverage goals.
Formal verification with model checking technology[14] increases the controllability of the design.
Once a design is instrumented with assertions, formal verification can verify areas of concern, known
as hot spots. Model checking analyzes the RTL structure of a design and characterizes its internal
nature, and it targets corner-case behaviors directly. Each assertion violation discovered by model
checking is reported along with its counter-example. This uncovers functional errors that would have
been missed using traditional verification methodologies.
The benefits of ABV have become obvious and essential to many design teams. Standardized assertion
languages (such as PSL[9] and SVA [10]) and assertion libraries (such as OVL[11], QVL [12] and 0-In Checkers [13])
have lowered the barrier for adopting the ABV methodology. In fact, we have found that the majority of
assertions used are simple and that simple assertions work just as well as complex ones.



However, some design teams deploying ABV place too much importance on the details of the assertion
languages, the richness of the libraries, or the features of the ABV tools, elevating the potential for
frustration and costly mistakes. These “external” factors are not important until the verification
challenges and the hard-to-verify hot spots in the design are clearly identified and a formal verification
test plan is in place.

Formal Verification Planning


In addition to a simulation-based methodology using assertions, formal verification is used extensively
to tackle verification hot spots. By analyzing the structures of a design, formal verification can explore a
vast number of potential scenarios and identify those that will cause the design to break. Below we summarize
the seven-step formal verification planning process elaborated thoroughly by Harry Foster [15].
1. Identify. Determine good candidate blocks for formal verification.
2. Document Interface. Create a table that describes the block’s interface signals.
3. Describe. Highlighting its major functions, describe the key characteristics of the candidate
block.
4. Capture Requirements. In a natural language, list the verification requirements for the
candidate block: the properties and the features that need to be verified.
5. Formalize Properties. Convert each of the natural language requirements into a set of formal
properties using either an assertion library or an assertion language, such as SVA or PSL.
6. Define Coverage. Define the coverage targets. They are helpful in determining the quality of
the constraints required for formal verification (see the example after this list).
7. Select Strategy. Detail the strategy you intend to use during the formal verification process.
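
For example, a requirement captured in step 4, such as "every granted request must be acknowledged within eight cycles," might be formalized (step 5) and paired with a coverage target (step 6) roughly as sketched below. The signal names req, gnt, and ack are hypothetical placeholders, not taken from any particular design.

// Hypothetical signals: req, gnt, ack. A minimal sketch of steps 5 and 6 only.
property p_gnt_acked;
  @(posedge clk) disable iff (rst)
  (req && gnt) |-> ##[1:8] ack;
endproperty
a_gnt_acked: assert property (p_gnt_acked);

// Coverage target: confirm the environment actually exercises back-to-back grants.
c_back_to_back_gnt: cover property (@(posedge clk) gnt ##1 gnt);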

Verification Hot Spots


A verification hot spot is an area of concern within the design. It is typically difficult to verify because
it contains a great number of sequential states, and it is deeply buried in the design, making it difficult
to control from the inputs. To identify the common verification hot spots that designers experience, we
conducted a comprehensive review with each design team. During the review, the following questions
were used to lead the exploration discussion.
In the previous project:
• Where did you find the most bugs?
• Which verification approaches found the most bugs?
• What bugs were found after tape out?
In the current project:
• Which modules are difficult to verify?
• Which modules are you concerned about?
• Which modules are highly connected with the rest of the design?
• Where is the new logic in the design?



We recommend each design team conduct a similar review internally to explore the unique hot spots in
their designs. Knowing the verification hot spots is half the battle. A majority of the design teams[5,6,16,17]
we worked with understood that it is difficult to thoroughly verify the hot spots by simulation-based
methodology alone — the amount of simulation and the difficulty of creating sufficient scenarios are
simply prohibitive. As a result, formal verification is often deployed to handle the verification hot spots.
The goal of some companies is to create a verification kit that uses formal verification and simulation
effectively together to fully verify a verification hot spot. In this way, the expert knowledge of one user
can be transmitted to others. This knowledge includes such items as how to capture the right assertions
for a verification hot spot; what mix of formal and simulation analysis is appropriate; what input
constraining methodology is effective for the verification hot spot; what metrics need to be measured;
and what the coverage goals need to be.
In this paper, we will focus on four verification hot spots that we found to be in most designs. They are:
1. Resource control logic
2. Design interfaces
3. Finite state machines
4. Data integrity

Verification Hot Spots      SDRAM Controller                         Cache Controller
Resource Control Logic      Arbitration, mutex relationships         Masking; multi-level, weighted, credit schemes
Design Interfaces           Multiple AHB, SRAM, SDRAM interfaces     Internal proprietary interfaces
Finite State Machines       Interface FSM with simple transitions    Complex, multi-level, hierarchical FSM
Data Integrity              FIFO without alteration                  Linked-list structure, out-of-order store and retrieval

Table 1: Verification Hot Spots

Table 1 summarizes the verification hot spots of a SDRAM controller and a cache controller. In the
following sections, we are going to discuss these hot spots in detail, identifying their verification
requirements, the properties used to express them, and the appropriate coverage strategies. We have
captured the properties with both assertion languages and assertion libraries. Many verification engineers
prefer to keep the two as distinct verification elements; the merits of each approach should be
evident from the examples. Finally, we present real-world bug
examples to highlight the types of bugs that could be found.



Resource Control Logic Hot Spot
Computation resources, system-on-chip buses, interconnects, buffers, and memories are logic
structures usually controlled by arbiters and complex control logic. When creating directed tests for the
design, verification teams tend to focus on the high-level specifications. They seldom stress the corner
cases of the resource control logic; as a result, some problematic scenarios are not observed.
Requirements
1. The requests should be generated correctly based on the mask, the weight, the credit scheme, and so
on. This is done by using procedural descriptions to generate the “glue logic” associated with the
corresponding assertions.
2. The grant or enable signals should follow the specified arbitration scheme. For straightforward
arbitration schemes (such as round-robin, priority, least-recently-used, and so on), the arbiter
checker from an assertion library [11, 12, 13] can be used directly.
3. The resource (bus, interconnect, memory) should be addressed by only one master at a time. This
can be done with an assertion language or with the mutex checker from an assertion library.
4. The resource should be de-allocated before it is allocated again. Semaphores lock and release the
common resources that do not allow multiple, simultaneous access. Such locks should be set before
an access.
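
As an illustration of requirement 4, a minimal sketch is shown below. It assumes hypothetical one-cycle alloc and dealloc pulses for a single resource; the glue register and signal names are not taken from any specific design.

// Hypothetical glue logic: tracks whether the resource is currently held.
reg res_locked;
always @(posedge clk)
  if (rst)          res_locked <= 1'b0;
  else if (alloc)   res_locked <= 1'b1;
  else if (dealloc) res_locked <= 1'b0;

// The resource must not be allocated while it is still held,
// and must not be de-allocated when it is already free.
a_no_double_alloc: assert property (@(posedge clk) disable iff (rst) alloc   |-> !res_locked);
a_no_free_dealloc: assert property (@(posedge clk) disable iff (rst) dealloc |->  res_locked);
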
Properties
For instance, suppose there is a round-robin arbiter in the design. We would like to ensure:
• The round-robin arbitration scheme is followed correctly.
• At most one grant is given to the request(s) at a time.
SVA:
In this example, two properties are written. The first one, p_single_grant, ensures that only one
grant signal is asserted. The second one, p_round_robin, ensures the round-robin scheme is followed.
It is difficult to capture the arbitration scheme in SVA. It is more natural to use a mix of Verilog and
SVA code.
property p_single_grant;
@(posedge clk) disable iff (rst) $onehot0(gnt);
endproperty

// Round-robin check
// Default highest priority = 0 (channel 1, req[4])
integer p;
always @(posedge clk)
if (gnt != 0)
p <= gnt[4] ? 0 : (gnt[3] ? 4 : (gnt[2] ? 3 :
(gnt[1] ? 2 : (gnt[0] ? 1 : 0))));
else p <= 0;



reg [4:0] reqdly;
always @(posedge clk) reqdly <= req;

wire [9:0] reqx2 = {reqdly, reqdly};


wire [9:0] reqsl = reqx2 >> p;
wire [4:0] reqrb = reqsl[4:0];
wire [9:0] gntx2 = {gnt, gnt};
wire [9:0] gntsl = gntx2 >> p;
wire [4:0] gntrb = gntsl[4:0];

property p_round_robin;
@(posedge clk) disable iff (rst)
(gnt != 'b0) |-> ( (gntrb < (reqrb & (~gntrb))));
endproperty

OVL (Verilog):
The arbitration checker in the OVL 2.0 library can be used. It ensures that grants are given to the
requesting channels only, and at most one grant is given at a time. The supported arbitration schemes
include fair/round_robin (1), FIFO (2), and least_recently_used (3). Here the arbitration_rule parameter
is set to 1, which selects the round_robin scheme.
ovl_arbiter
#(.width(5), .arbitration_rule(1))
ovl_round_robin_check
(.clock(clk), .reset(rst), .enable(1'b1),
.reqs(req), .gnts(gnt));

QVL (Verilog):
This arbitration checker supports more functionality than the OVL arbiter. The supported arbitration
schemes include fixed-priority (1), basic fairness (2), queue (3), round-robin (4), and least-recently-used (5).
In addition, it supports different request styles and parking for immediate grant of the
high-priority channel. Coverage information is collected automatically with SystemVerilog
covergroups and coverpoints.
qvl_arbiter
#(.width(5), .req_type(2), .gnt_type(4))
qvl_round_robin_check
(.clk (clk), .reset_n(rst_n), .active(1'b1),
.req(req), .gnt(gnt));

0-In Checkers:
This 0-In checker provides functionality that is equivalent to the QVL arbiter checker. The directive
approach makes specification easier. The bit-widths of the arguments and the clock and reset signals are
inferred automatically.
// 0in arbiter -round_robin -req_until_gnt -req req -gnt gnt



Coverage Strategy
If the design is well modularized (i.e., the targeted control logic is self-contained), formal model
checking can be applied early—even before the simulation environment is ready.
In a typical system, resources can be requested at any time and allocated in any sequence. Verifying all
possibilities with simulation is tedious (if not impossible). Alternatively, by capturing the
environmental assumptions as assertions, formal model checking can exhaustively analyze all possible
scenarios and detect any potential resource conflicts. Once the system simulation environment is
ready, these assumptions can be validated in the system environment.
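
For example, an environmental assumption such as "a requester holds its request until it receives a grant" can be captured once, used as a constraint (assume) during model checking, and re-checked as an assertion in the system simulation. The per-channel signals req[0] and gnt[0] below are hypothetical.

// Hypothetical per-channel request/grant signals.
property p_req_held_until_gnt(req_i, gnt_i);
  @(posedge clk) disable iff (rst)
  (req_i && !gnt_i) |=> req_i;
endproperty
// Constraint for formal analysis; the same property can be asserted in simulation.
m_req0_held: assume property (p_req_held_until_gnt(req[0], gnt[0]));
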
Bug Examples
BUG: Incorrect grant was given when the arbiter was loaded.

Figure 1. A channel arbiter in a multimedia processor core.

An arbiter checker from the QVL assertion library was used to instrument this channel arbiter in a
multimedia processor core. The goal was to ensure the requests from the channels were granted with a
round-robin scheme. During simulation, the arbiter was not heavily loaded. Formal verification was
able to stress the design more thoroughly. When multiple channels were requesting and the processor
went from BUSY to AVAILABLE, and simultaneously the processor received an interrupt, the arbiter
granted the data path to the wrong channel. This was found statically without any simulation. During
the bug review, the project team admitted that they would not have found this bug. It required multiple
events to happen simultaneously. Even if they had achieved 100 percent coverage with their coverage
model, they would not have hit this scenario.



BUG: Mutually exclusive slaves were selected with certain addresses.

Figure 2. Address decoding in an ARM-based SoC design.

This bug occurred in an ARM-based SoC design. The zero_one_hot checker from the OVL assertion
library, the multiplexer checker from the QVL library, and multiple SVA properties were used to
instrument the design. Due to a coding error and incorrect address masking, the logic in the address
decoder did not enforce selection of a single slave. As a result, with certain addresses, multiple slaves
could be selected. Static formal verification found this problem at the block level before any
simulation test was written to regress the block. After fixing this bug, the project team formally proved
that, under any possible circumstance, one and only one slave would be selected.

Design Interfaces Hot Spot


Inter-module communication and interface protocol compliance are infamous for causing many
verification failures. When design teams first integrate all the blocks together for chip-level simulation,
the blocks usually fail to “talk” with each other. To catch this problem early, assertion-based protocol
monitors are added to instrument the on-chip buses and the standard and common interfaces.
Requirements
1. Ensure the correctness of the protocol on all bus interfaces, especially when there are different
modes of operation. As the popularity of standard assertion languages increases, more assertion
monitors for standard bus interfaces will become available [12].
2. Ensure the transparency of the interfaces. This property ensures that the interface does not lose
transactions. For example, in a bus bridge design, transactions on the master interface should be
reflected at the target interface. These requirements can be expressed with an assertion language
using temporal properties (a minimal sketch follows this list).
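
As a sketch of the transparency requirement, assume hypothetical handshake signals m_valid/m_ready on the master interface, t_valid on the target interface, and a bounded bridge latency of MAX_LATENCY cycles; none of these names come from a specific bridge.

localparam MAX_LATENCY = 8; // illustrative latency bound

property p_no_lost_transaction;
  @(posedge clk) disable iff (rst)
  (m_valid && m_ready) |-> ##[1:MAX_LATENCY] t_valid;
endproperty
a_no_lost_transaction: assert property (p_no_lost_transaction);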



Properties
For instance, in an ARM-based SoC design, the slave modules were connected to the AHB buses. The
interface of the slave modules can be instrumented with the AHB target monitor, qvl_ahb_target_monitor,
from the QVL monitor library [12].
qvl_ahb_target_monitor
#(1, // CONSTRAINT_MODE
32, // DATA_BUS_WIDTH
8, // NUMBER_OF_MASTERS
0) // CANCEL_FOLLOWING_TRANSFER_ON_ERROR_RESPONSE
tar_mon (
.hresetn (reset_n),
.hclk (clk),
.hselx (selx),
.haddr (addr[31:0]),
.hwrite (write),
.htrans (trans[1:0]),
.hsize (size[2:0]),
.hburst (burst[2:0]),
.hwdata (wdata[31:0]),
.hrdata (rdata[31:0]),
.hready_in (ready_in),
.hready_out(ready_out),
.hresp (resp[1:0]),
.hmaster (master[7:0]),
.hmastlock (1'b0),
.hsplitx (8'b00000000)
);

In addition, we should capture the handshake scheme between blocks. For instance, the following
handshake scheme is used. We want to:
• Ensure valid is asserted for 2 to 4 cycles.
• Ensure data is stable and known when valid is asserted.
• Ensure ack is asserted at the end of the data transfer.

Figure 3. Handshake scheme for an AHB interface.



SVA:
property valid_asserted;
@(posedge clk) disable iff (rst)
$rose(valid) |-> (valid)[*2:4];
endproperty

property valid_data_stable;
@(posedge clk) disable iff (rst)
valid |-> $stable(data) && !$isunknown(data);
endproperty

property valid_acknowledge;
@(posedge clk) disable iff (rst)
$rose(valid) |-> ( valid && !ack)[*1:3] ##1
( valid && ack) ##1
(!valid && !ack);
endproperty

OVL (Verilog):
assert_width
#(.min_cks(2), .max_cks(4))
valid_asserted
(.clk(clk), .reset_n(rst_n), .test_expr(valid));

assert_never_unknown
#(.width(32))
valid_data_known
(.clk(clk), .reset_n(rst_n),
.qualifier(valid), .test_expr(data));

assert_win_unchange
#(.width(32))
valid_data_stable
(.clk(clk), .reset_n(rst_n), .start_event(valid),
.test_expr(data), .end_event(ack));

assert_handshake
#(.min_ack_cycle(1), .max_ack_cycle(3),
.req_drop(1), .max_ack_length(1), .deassert_count(1))
valid_ack_handshake
(.clk(clk), .reset_n(rst_n), .req(valid), .ack(ack));

0-In Checkers:
// 0in assert_timer -var valid -min 2 -max 4
// 0in known -var data -active valid
// 0in constant -var data -active valid
/* 0in req_ack -req valid -ack ack -req_until_ack
   -min 1 -max 3 -max_ack 1
   -ack_assert_to_req_deassert_max 1 */



Coverage Strategy
Unlike testbench-oriented monitors developed using Vera or ‘e’, monitors written with an assertion
language can be used with both simulation and formal verification. In addition, assertion monitors
track simulation coverage and statistics information, which provides a means for gauging how
thoroughly bus transactions have been stimulated and verified.
Assertion monitors are useful for verifying internal interfaces as well. Such interfaces have ad hoc
architectures, so they are particularly prone to mistakes caused by misunderstanding and
miscommunication.
To validate the transparency of an interface, we must ensure that the transactions are interpreted and
transferred correctly. Simulation with assertions can validate the one-to-one mapping of the basic
transactions. However, to be thorough, we have to ensure that the illegal, error, and retry transactions
are also handled correctly.
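
For example, simple cover targets can confirm that error and retry responses were actually exercised on the target interface. The sketch below reuses the signal names from the monitor instance above and assumes an AHB-style response encoding (2'b01 for ERROR, 2'b10 for RETRY).

// Confirm the environment has produced error and retry responses at least once.
c_error_response_seen: cover property (@(posedge clk) ready_out && (resp == 2'b01));
c_retry_response_seen: cover property (@(posedge clk) ready_out && (resp == 2'b10));
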
Bug Examples
BUG: Abnormal transactions were lost in the bus bridge.

Figure 4. A processor to AHB bus bridge in a multi-core design.

A bus bridge in a multi-core design converted transactions from the processor bus (MBUS) to the
AHB bus and vice-versa. The AHB interface of the bus bridge was instrumented with the AHB master
assertion monitor, and the processor interface was instrumented with a set of PSL properties. Constrained
random simulation regression was done first. The simulation coverage was close to completion.
Dynamic formal verification was applied to boost confidence and to close the gap. It leveraged the
simulation stimulus and exercised corner case behaviors. It was found that when the bus bridge
received a retry transaction from the AHB slave and the processor was handling an interrupt request,
the retry transaction was not communicated. The bus bridge neither retried the transaction until it
completed nor issued an error to the processor. The processor would assume the transaction was
completed successfully, but actually, it was lost by the bus bridge. The project team agreed that this
scenario would have never been exercised by the simulation environment.



BUG: Incorrect data was sampled by the ratio-synchronous fast clock domain.

Figure 5. Data transfer from slow to fast clock domains.

This was a low-power design where the subsystem was running at a slower frequency. Data was
transferred from the slow clock (subsystem) domain to the fast clock (processor) domain. The clocks
were ratio-synchronous and dynamically controlled by a configuration register. OVL and SVA assertions
were used to instrument the design. They ensured that the data valid signal was generated at the right
time and the data was stable when sampled. In simulation, the data valid signal and the data were
synchronized. The data was sampled by the fast clock domain at the perfect time. However, with
formal verification, a few corner cases were identified where the data valid signal was asserted at the
wrong time. As a result, unstable data was sampled. The danger was that these scenarios did not happen
often. Even when they did, the corrupted data would not have been detected for a
long time, making the targeted products unstable in the field.

Finite-State Machines Hot Spot


For verification purposes, we classify finite state machines (FSMs) into two categories: interface FSMs
and computational FSMs. Interface FSMs use I/O signals with well-defined timing requirements.
Examples of interface FSMs are bus controllers, handshaking FSMs, and so on. Computational FSMs do
not involve signals with explicitly defined timing requirements. Examples of computational FSMs are
FSMs for flow charts, algorithmic computations, and so on.
Requirements
1. Typically, specifications for interface FSMs are derived from waveform diagrams. So, it is crucial to
ensure that the FSM samples the input signals within the correct time window and asserts the response
signals within the output timing specification. It is natural to express these timing
requirements with an assertion language using temporal properties (a sketch follows this list).



2. Typically, specifications for computational FSMs are derived from control flow graphs, which are
common in engineering documents and standard specifications. To improve performance and/or
simplify implementation, a flow graph might be partitioned, flattened, re-timed, or pipelined into
multiple FSMs. But, regardless of the optimization performed, they should still mimic the flow
graph correctly. Implementing a control flow graph with assertions captures its specification in an
“executable” form. These assertions then ensure all of the FSM decisions and transitions are made
correctly. Such assertions can be implemented with a mix of procedural descriptions and temporal
properties.
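
As a sketch of the interface FSM timing requirement (item 1 above), assume hypothetical signals if_req and if_resp and an output timing specification of two to five cycles; the names and the window are illustrative only.

property p_resp_timing;
  @(posedge clk) disable iff (rst)
  $rose(if_req) |-> ##[2:5] if_resp;
endproperty
a_resp_timing: assert property (p_resp_timing);
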
Properties
For instance, based on a specification, the following control flow diagram was implemented in the
design. It does not matter how it was implemented; we want to ensure that signals in the design assert in
the correct order (as described in the control flow diagram). Because the names in the
specification may not be reflected in the RTL code, in most cases we need to identify the RTL variables
that signify the flow of control.

Figure 6. Control flow diagram. Note: "load" must occur in exactly one cycle (i.e., load_done must
assert one cycle after load_enable); other items in the sequence must occur in the correct order but can
take any number of clock cycles (>=1).



SVA:
property seq_load;
@(posedge clk) disable iff(rst)
$rose(load_enable) |=> $rose(load_done);
endproperty

property seq_header;
@(posedge clk) disable iff(rst)
$rose(load_enable) ##[1:1] $rose(load_done) |->
##[1:$] $rose(hdr_ready);
endproperty

property seq_decode;
@(posedge clk) disable iff(rst)
$rose(load_enable) ##[1:1] $rose(load_done)
##[1:$] $rose(hdr_ready) |->
##[1:$] $rose(ctrl_enable || data_ld);
endproperty

property seq_store;
@(posedge clk) disable iff(rst)
$rose(data_ld) |-> ##[1:$] $rose(mem_cs)
##[1:$] $rose(return);
endproperty

OVL (Verilog):
assert_frame
#(.min_cks(1), .max_cks(1),
.action_on_new_start(`OVL_ERROR_ON_NEW_START))
seq_load
(.clk(clk), .reset_n(rst_n),
.start_event(load_enable), .test_expr(load_done));

assert_frame
#(.min_cks(1),
.action_on_new_start(`OVL_ERROR_ON_NEW_START))
seq_header
(.clk(clk), .reset_n(rst_n),
.start_event(load_done), .test_expr(hdr_ready));

assert_frame
#(.min_cks(1),
.action_on_new_start(`OVL_ERROR_ON_NEW_START))
seq_decode
(.clk(clk), .reset_n(rst_n),
.start_event(hdr_ready), .test_expr(ctrl_enable || data_ld));



assert_frame
#(.min_cks(1),
.action_on_new_start(`OVL_ERROR_ON_NEW_START))
seq_data
(.clk(clk), .reset_n(rst_n),
.start_event(data_ld), .test_expr(mem_cs));

assert_frame
#(.min_cks(1),
.action_on_new_start(`OVL_ERROR_ON_NEW_START))
seq_store
(.clk(clk), .reset_n(rst_n),
.start_event(mem_cs), .test_expr(return));

0-In Checkers:
/* 0in assert_sequence -min 1 -max 1 -var
   $0in_rising_edge(load_enable)
   $0in_rising_edge(load_done) */
/* 0in assert_sequence -min 1 -var
   $0in_rising_edge(load_done)
   $0in_rising_edge(hdr_ready)
   $0in_rising_edge(ctrl_enable || data_ld) */
/* 0in assert_sequence -min 1 -var
   $0in_rising_edge(data_ld)
   $0in_rising_edge(mem_cs)
   $0in_rising_edge(return) */
Coverage Strategy
During simulation, assertions monitor activities inside the design and provide information showing
how thoroughly the test environment covers the design’s functionality. By capturing the properties of
the FSM using assertions, we can identify them easily, exercise them completely, and collect cross-
product coverage information during the simulation process. In addition, the implementation style of
the FSM has a direct impact on the effectiveness of verification [5].
When several FSMs are interacting with each other, it is important to ensure that they do not get
“trapped” in a corner-case behavior. Formal verification can be applied to check these situations.
However, formal model checking is not efficient with complex properties that are unbounded. Hence,
the high-level timing requirements of an FSM may have to be decomposed and captured within
several assertions.
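
As an illustration, cross-product coverage between two interacting FSMs can be collected with a SystemVerilog covergroup. The state register names fsm_a_state and fsm_b_state below are hypothetical.

// Cross coverage exposes which state combinations the test environment has reached.
covergroup cg_fsm_cross @(posedge clk);
  cp_a  : coverpoint fsm_a_state;
  cp_b  : coverpoint fsm_b_state;
  cp_axb: cross cp_a, cp_b;
endgroup
cg_fsm_cross cg_fsm_cross_inst = new();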



Bug Examples
BUG: Missing de-allocation of address pointer.

Figure 7. DMA controller design block.

This was a DMA controller in a multi-layer AMBA SoC design. It is not easy to verify a DMA
controller exhaustively. DMA channels are allocated/de-allocated dynamically, and data transfers are
interleaved among channels. In this case, formal verification found a rare situation where data in
memory was corrupted. The problem occurred when more than one channel finished their transfers at
the same time. During the de-allocation process, one channel’s address pointer was de-allocated twice
and the other channel’s pointer was not de-allocated at all. As a result, when the same channel was
allocated again, corrupted data was transferred. DMA channels should never be allocated together
because their access is controlled by an arbiter. However, multiple channels can finish data transfers at
the same time.
BUG: Unexpected reset of memory discard counter.

Figure 8. Memory allocation and cluster logic in a cache controller.



This was the memory allocation and discard logic in a cache controller. Checkers from the OVL
library, ovl_no_overflow and ovl_no_underflow, were used to monitor all the tracking counters in the
design. In addition, SVA assertions were placed on the FSM to capture error conditions. With formal
verification, several scenarios were found in which the discard counter could be reset to 0 unexpectedly. As
a result, it would allow certain transactions to be processed by the cluster controller that should have
been discarded. However, as the data fetched by the cluster controller would not be used, this issue
would not cause any simulation or design failure. It would become noticeable when the cache
controller was fully loaded. The overhead introduced by these redundant transactions would cause its
performance to degrade significantly.

Data Integrity Hot Spot


Devices such as bus bridges, DMA controllers, routers, and schedulers transfer data packets from one
interface to another. Unfortunately, in a system-level simulation environment, data integrity mistakes
are not readily observable. Usually problems are not evident until the corrupted data is used. With
ABV, assertions check the integrity of data along the entire data transfer path. A lost or corrupt data
packet is detected immediately.
Requirements
1. Ensure key registers (program counters, address pointers, mode/configuration registers, status
registers, and so on) are programmed correctly and their data is never lost. Key registers contain
"precious" data that must be consumed before being over-written. Assertions can ensure these
registers are addressed, programmed, and sampled correctly. An assertion language can capture
these properties by probing into the registers hierarchically (a sketch follows this list).
2. Ensure data going through a queue is not lost or corrupted. Data transferred through a queue must
follow the first-in-first-out (FIFO) scheme without any alteration. The best way to capture this
property is to use an approach that mixes both an assertion language and an assertion library. The
assertion language readily captures the enqueue and dequeue protocol conditions. Instead of
manually creating the data integrity assertion, we use a data integrity checker from an assertion
library [12] [13]. The checker ensures that:
• No more than n transactions are ever stored in the queue.
• No outgoing transaction is generated without a corresponding incoming transaction.
• Data written to the queue is not corrupted.
3. Ensure tagged data in the system is not lost or corrupted. In many designs (for example, data
switches and routers), data is stored temporarily before being transferred. A tag is a handle on the
data content. Structures that store and retrieve data can be verified using checkers (for example, the
memory and scoreboard checkers from an assertion library). The type of storage structure is not
important. The objective is to ensure that the data stored in memory is not corrupted, the read/write
ordering is correct, and there are no address conflicts.
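
As a sketch of requirement 1, assume hypothetical signals cfg_wr/cfg_wdata (a register write), cfg_reg (the key register, updated one cycle after cfg_wr), and cfg_used (the cycle in which the value is consumed); the 32-bit width is illustrative.

property p_cfg_held_until_used;
  logic [31:0] v;
  @(posedge clk) disable iff (rst)
  (cfg_wr, v = cfg_wdata) |=> ((cfg_reg == v) && !cfg_wr) throughout cfg_used[->1];
endproperty
// The written value must remain intact, and must not be over-written, until it is consumed.
a_cfg_held_until_used: assert property (p_cfg_held_until_used);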



Properties
For instance, in a bus bridge design we want to make sure that address and data values are
transmitted through the bridge in the correct order (FIFO). Assume that {ITRANS, IADDR, IREADY,
IRDATA} are signals on the transmit side of the bridge and {STRANS, SADDR, SREADY,
SWDATA} are signals on the receive side. The properties we want to capture are:
• When ITRANS is valid, the address on IADDR is sampled. Then, it is sent out on SADDR
with STRANS enabled.
• When IREADY is valid, the data on IRDATA is sampled. Then, it is sent on SWDATA with
SREADY enabled.
SVA:
Verilog code is used to implement the address and data pointers. The incoming address and data values
are stored in the address and data arrays. The properties compare them with the values actually received.
always @(posedge clk)
  if (rst)
    addr_ptr <= 'b0;
  else if (ITRANS[1] == 1'b1)
    addr_ptr <= addr_ptr + 1;
  else if (STRANS[1] == 1'b1)
    addr_ptr <= addr_ptr - 1;

always @(posedge clk)
  if (rst)
    data_ptr <= 'b0;
  else if (IREADY)
    data_ptr <= data_ptr + 1;
  else if (SREADY)
    data_ptr <= data_ptr - 1;

always @(posedge clk)
  if (ITRANS[1] == 1'b1)
    addr_memory[addr_ptr] <= IADDR;

always @(posedge clk)
  if (IREADY)
    data_memory[data_ptr] <= IRDATA;

property addr_value_check;
  @(posedge clk) disable iff (rst)
  ((STRANS[1] == 1'b1) -> (addr_memory[addr_ptr] == SADDR));
endproperty

property data_value_check;
  @(posedge clk) disable iff (rst)
  (SREADY -> (data_memory[data_ptr] == SWDATA));
endproperty



OVL (Verilog):
With the OVL FIFO checkers, the properties can be written easily. The depth of the FIFO is equal to
the latency of the bus bridge. The value check ensures that the value going into the FIFO matches the
value that comes out of it.
ovl_fifo
#(.depth(4), .width(24),
.full_check(1), .empty_check(1), .value_check(1))
addr_transfer_check
(.clk(clk), .reset(rst), .enable(1'b1),
.enq(ITRANS[1] == 1'b1), .enq_data(IADDR),
.deq(STRANS[1] == 1'b1), .deq_data(SADDR));

ovl_fifo
#(.depth(4), .width(32),
.full_check(1), .empty_check(1), .value_check(1))
data_transfer_check
(.clk(clk), .reset(rst), .enable(1'b1),
.enq(IREADY), .enq_data(IRDATA),
.deq(SREADY), .deq_data(SWDATA));

0-In Checkers:
/* 0in fifo -depth 4
   -enq (ITRANS[1] == 1'b1) -enq_data IADDR
   -deq (STRANS[1] == 1'b1) -deq_data SADDR */
/* 0in fifo -depth 4
   -enq (IREADY) -enq_data IRDATA
   -deq (SREADY) -deq_data SWDATA */

Coverage Strategy
End-to-end, simulation-based verification methodologies transfer many data packets, so many of
the data integrity assertions are thoroughly exercised. However, simulation typically fails to stress test
storage elements by filling up the FIFO/memories, so pseudo-random simulation environments should
be guided by coverage information from the assertions. This can be done manually (by fine tuning the
random parameters) or automatically (using a reactive testbench that monitors the assertions).
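
For example, coverage targets on a queue's occupancy can show whether simulation ever drives it to (or near) its high-water mark; fifo_count and FIFO_DEPTH are hypothetical names for the design's occupancy counter and depth parameter.

// Cover the full and almost-full conditions so random stimulus can be steered toward them.
c_fifo_full        : cover property (@(posedge clk) fifo_count == FIFO_DEPTH);
c_fifo_almost_full : cover property (@(posedge clk) fifo_count >= FIFO_DEPTH - 1);
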
To compound the issue, multiple queues and data streams might transfer and retrieve data at
unpredictable intervals. Simulation alone cannot verify all of the corner-case scenarios—there are
simply too many of them. So formal model checking is used to target the remaining data integrity
assertions. It can explore all legal combinations of transfer events to ensure the data is never corrupted.
When we examine the bugs found with this methodology, they are all related to complex combinations
of events initiated by different data transfers. An example is described in the next section.



Bug Examples
BUG: FIFO overflow after complex control sequences.

Figure 9. Flow control FIFOs between PHY-to-LINK interfaces.

This was a networking design with channels between the PHY and the LINK interfaces. Checkers
from the OVL and the QVL assertion libraries, ovl_fifo, qvl_multi_clock_fifo, and
qvl_multi_clock_multi_enq_deq_fifo, were used to instrument all the FIFOs in the design. The FIFOs
were deep enough to hold data and control responses. Besides checking for the underflow and
overflow conditions, the FIFO checkers were used to ensure the integrity of the data passing through
the channels. Using formal verification, a FIFO overflow issue was found. The run was started from a
simulation state where one particular FIFO was close to full. After a complex series of STOP and
SEND commands generated by formal verification, the FIFO was full. With another ingress packet,
the FIFO would overflow. When reviewing this issue with the project team, it was realized that this
issue was not uncommon with live traffic. They felt lucky that this was found before tape out.
BUG: Packet lost when data streams were multiplexed.

Figure 10. Time division multiplexing (TDM) data flow controller.



This was a data flow controller that time-division multiplexes multiple data streams into one. Checkers
from the OVL and QVL assertion libraries, ovl_fifo and qvl_data_used, were used to instrument the
data path elements. An SVA assertion was used to instrument the control logic. In simulation, data
from different streams flowed into the controller periodically. Idle packets did not need to be
introduced most of the time. When formal verification was first applied, the focus was on the control
logic of the design. Later we were concerned about the timing of the idle packet generator. After
focusing formal verification on the idle packet insertion of the TDM scheme, a bug was found when
the packets in the data streams were small. As a result, the egress FIFO was close to empty. In turn, it
triggered the idle packet generator. However, when the packets were small, the packing logic caused
multiple triggers to be generated incorrectly. As a result, some of the valid data packets were
overwritten during the TDM process. The project team was happy that it was found with formal
verification. They knew these types of scenarios were not stressed by simulation, but it was difficult to
create directed tests to focus on this area of the design.

Verification Results
We have been able to help dozens of project teams deploy ABV successfully by identifying and
focusing on the verification hot spots in their designs. The designs represent a wide range of
applications, including processor chip-sets, I/O hubs, gigabit Ethernet controllers, networking switches,
storage network controllers, and several ARM-based SoC platforms for mobile devices and consumer
electronics. The challenges presented by each of the four verification hot spots were experienced by all
the design teams. Table 2 summarizes the verification hot spots from two of these designs.

Verification Hot Spots      HyperTransport I/O Hub [6]                   ARM-based SoC Platform [20]
Resource Control Logic      Bus agent arbiter                            AHB arbiter, AHB interconnect
Design Interfaces           HyperTransport, PCI, USB, LPC, IDE, MII      AHB, APB, PCI, USB, MII
Finite State Machines       FSMs at interfaces and DMA control           FSMs in Emac, bus bridges
Data Integrity              Proper read/write to memory, DMA channels    AHB2PCI bridge, memory controllers

Table 2. Verification Hot Spots

ABV is a relatively new verification methodology. So when the majority of project teams deployed
ABV, they had already performed a significant amount of functional simulation. Hence, they did not
experience one direct benefit of the ABV method, which is finding bugs early in the design cycle,
concurrent with functional simulation. But, by concentrating on the hot spots of the designs, the teams
found difficult bugs. These results boosted the teams’ confidence that tape outs would be successful.
Table 3 gives a summary of bugs found at the verification hot spots of the two designs.



Verification Hot Spots      #Bugs    Common Bug Scenarios
Resource Control Logic      10       Incorrect arbitration scheme; non-exclusive resource selection
Design Interfaces           23       Failure to comply with the standard protocol; complete omission of required functionalities
Finite State Machines       14       Complex interaction between FSMs; corner-case scenarios not considered
Data Integrity              7        FIFO overflow; dynamic memory corruption due to timing-related allocation and de-allocation problems

Table 3. Bugs Found at the Verification Hot Spots

It is not surprising that most of the bugs were found at the design interfaces. For the bus interfaces,
standard bus and memory interface monitors are already available. Their protocol rules have been
captured as assertions, so the effort to use monitors is minimal. All of the teams employed protocol
monitors to verify the standard interfaces and incorporated them into their simulation environments.
A few design teams also leveraged the monitors as interface constraints for formal verification. In
these cases, model checking analyzes only legal transactions.
We also analyzed how project teams added assertions. Typically, verification engineers added
assertions capturing the test plan criteria. Design engineers added assertions for verifying the
implementation structures.
Besides using assertions with formal verification, we have also seen the “bug triage” time improve
significantly, especially with constrained random simulations. In a constrained random simulation,
once a bug has taken effect, many cycles can pass before the bug propagates and the simulation fails—
if indeed the simulation fails at all. Assertions simplify the identification of the root causes of problems
when they catch bugs at their sources.
Most of the design teams spent significant effort on the resource control logic and data integrity hot
spots. Assertions for resource control logic were easier to capture; checkers from the assertion library
were used extensively. Since most simulation environments did not stress test the resource control
logic sufficiently, formal verification was ideal for this task.
On the other hand, the assertions required to capture data integrity properties were complex; for example,
when data was repackaged (such as when data goes from a 32-bit external interface to a 128-bit internal
interface) or when packets were dropped intentionally. However, once these complex assertions were in
place, they could be verified with various methodologies.
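
As an illustration of the repackaging case, the sketch below checks a 32-bit-to-128-bit packing path. It assumes hypothetical signals in_valid/in_data and out_valid/out_data, exactly four in-order input beats per output word, little-endian packing, and no dropped beats; the timing between the last input beat and out_valid is design-specific and would need adjustment.

// Glue logic: accumulate four 32-bit input beats into a 128-bit word.
reg [127:0] packed_data;
reg [1:0]   beat_cnt;
always @(posedge clk)
  if (rst) begin
    packed_data <= 128'b0;
    beat_cnt    <= 2'b00;
  end
  else if (in_valid) begin
    packed_data[32*beat_cnt +: 32] <= in_data;
    beat_cnt <= beat_cnt + 2'b01;
  end

// When the 128-bit side presents a word, it must match what was accumulated.
property p_repack_integrity;
  @(posedge clk) disable iff (rst)
  out_valid |-> (out_data == packed_data);
endproperty
a_repack_integrity: assert property (p_repack_integrity);
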
Although the resource control logic and data integrity hot spots can be difficult to verify, they represented an
essential part of the test plans, so the effort was well spent. Importantly, the bugs found at these hot spots
were both critical and obscure. None would have been found with traditional verification methodologies.



As the bug examples above illustrate, many of the bugs were never detected in simulation. Handwritten tests
did not target the bugs, and extensive constrained random simulation never created the right
combination of events in the right order. However, formal verification was able to trigger those corner-
case behaviors. By understanding the design, formal verification was able to find the right
combination of events that led to the violation of the assertions.

Conclusions
In this paper, we discussed four prominent verification hot spots that can be effectively addressed
using assertion-based verification methods, including a seven-step formal verification planning
process. They represent hard-to-verify structures for which traditional simulation-based verification is
not effective. By concentrating on these hot spots, project teams can benefit greatly from the ABV
methodology. The objective is to find bugs missed by traditional verification methodologies.
We emphasize that these are not the only hot spots for verification in a design. Teams should conduct
internal design audits and identify the relevant hot spots in their designs. Design teams can leverage
the knowledge developed on the four hot spots described herein to tackle the design specific cases of
their projects.
As we have experienced, strategic deployment of the ABV methodology on hot spots has proven
effective. This approach applies scarce verification resources to areas that need them most. It improves
verification efficiency and boosts the overall quality of the design.
References
1. Scott Taylor, Michael Quinn, Darren Brown, Nathan Dohm, Scot Hildebrandt, James Huggins, Carl Ramey,
“Functional Verification of a Multiple-issue, Out-of-Order, Superscalar Alpha Processor”, DEC, Design
Automation Conference 1998.
2. Michael Kantrowitz, Lisa M. Noack, “I’m Done Simulating; Now What? Verification Coverage Analysis
and Correctness Checking of the DECchip 21164 Alpha microprocessor”, DEC, Design Automation
Conference 1996.
3. Carey Kloss, Dean Chao, “Coverage based DV from Testplan to Tape out Using Random Generation and
RTL Assertions”, Cisco Systems, DVCon 2004.
4. Namdo Kim, Byeong Min, “An Efficient Reactive Testbench with Bug Identification Structure”, Samsung
Electronics, DVCon 2004.
5. Dan Joyce, Ray Harlan, Ramon Enriquez, “Audit Your Design to Assess and Even Reduce the Amount of
Random Testing Needed”, HP, DVCon 2003.
6. Frank Dresig, Alexander Krebs, Falk Tischer, “Assertions Enter the Verification Arena”, AMD, Chip
Design Magazine, Dec 2004.
7. Richard Ho, “Maximizing Synergies of Assertions and Coverage Points within a Coverage-Driven
Verification Methodology”, DesignCon 2005.
8. Harry Foster, Adam Krolnik, David Lacey, “Assertion-based Design”, Kluwer Academic Publishers, 2003.
9. Property Specification Language, IEEE Standard for Property Specification Language, IEEE Std 1850-
2005.



10. SystemVerilog, IEEE Standard for SystemVerilog, IEEE Std. 1800-2005.
11. Open Verification Library (OVL 2.0), Accellera Organization, July 2007.
12. Questa Verification Library, Checkers Data Book, V6.3, Mentor Graphics 2007.
13. CheckerWare Databook, Assertion Library, V2.5, Mentor Graphics 2007.
14. Curt Widdoes, 0-In Formal Verification Technology Backgrounder, Mentor Graphics White Paper, 2006.
15. Harry Foster, Integrating Formal Verification into a Traditional Flow, Mentor Graphics White Paper, 2006.
16. Chris Salzmann, Ramneek Real, “0-In Assertions - A User’s Experience Realizing the Verification
Fantasies”, Mentor Graphics User2User 2006.
17. Jim O’Connor, Roger Sabbagh, “Have I Placed all the Right Assertions in all the Right Places?”, DVCon
2007.
18. Harry Foster, Ping Yeung, “Planning Formal Verification Closure”, DesignCon 2007.
19. Ping Yeung, Vijay Gupta, “Five Hot Spots for Assertion-Based Verification”, DVCon 2005.
20. Ping Yeung, “Functional Verification of ARM-based Platform Design with Assertion-Based Verification
Methodology”, ARM Developers Conference, 2004.

Visit the Mentor Graphics web site at www.mentor.com for the latest product information.
© 2007 Mentor Graphics Corporation. All Rights Reserved.
Mentor Graphics is a registered trademark of Mentor Graphics Corporation. All other trademarks are the property of their respective owners.
This document contains information that is proprietary to Mentor Graphics Corporation and may be duplicated in whole or in part by the original recipient for internal business purposes only, provided that
this entire notice appears in all copies. In accepting this document, the recipient agrees to make every reasonable effort to prevent the unauthorized use of this information.

