Jean-Michel Chabloz
Only the translation from specifications to high-level language is done mostly by humans. If the high-level language is correct, the machine language will work.
Software flow:
1) Write HL language
2) Translate HL language to machine language
3) Run machine language and find bugs
4) Fix HL language
5) Translate HL language to machine language
6) Run machine language and find bugs
7) Repeat from 4 until finished
Validation vs Verification
Validation:
Are we making the right thing, one that will meet the needs of the user? Are the specs right?
Verification:
Are we making what we wanted to make? Is the model equivalent to the specs?
Verification plan
A plan of everything that should be verified in a DUT. It should include all possible features and potential sources of bugs. When all tests in the verification plan have been run and no bugs were found, the verification work is over.
Verification models
Basic model: give inputs, check outputs (black-box verification)
We might use white-box verification: give inputs, check internal signals
Or grey-box verification: give inputs, check some internal signals specifically inserted for debug purposes
Verification
~70% of the effort when developing RTL (trend: growing)
Testbenches are more complex than RTL models
Growth of testbench complexity is more than linear with RTL complexity
example:
10 state machines with 4 states each: 4^10 total states
20 state machines with 4 states each: 4^20 total states
10 state machines with 8 states each: 8^10 total states
We have to test the RTL model under situations similar to those the manufactured chip will encounter during use (we have to develop a model of the universe).
Verification: the language must be powerful enough to implement the model of the universe quickly and efficiently; it does not need to be understood by synthesis tools.
Example, a fifo: virtual storage with push and pop.
It would be hard to write a routine that randomly generates one of the legal push/pop combinations using only direct randomization of variables.
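With constrained randomization, the legality conditions can instead be declared and left to the solver. A minimal sketch (class and signal names are illustrative, not from the course; a 16-entry fifo is assumed):

```systemverilog
// Hypothetical transaction class: the solver only generates legal
// push/pop combinations given the current fill level of the fifo.
class fifo_txn;
  rand bit       push;
  rand bit       pop;
  rand bit [7:0] data;

  int unsigned level;   // current fifo fill level, set by the testbench

  // Legal combinations only: no push when full, no pop when empty
  constraint c_legal {
    level == 16 -> push == 0;
    level == 0  -> pop  == 0;
  }
endclass

module tb;
  initial begin
    fifo_txn t = new();
    t.level = 0;
    assert (t.randomize());  // solver picks a legal push/pop/data
  end
endmodule
```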
Assertions
Tools for automatic checking of properties (an automated waveform checker). Example:
when req goes to one, grant must be 1 between 2 and 3 cycles later
req must never be at one for more than two consecutive cycles
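As a sketch, the two properties above could be written as SystemVerilog assertions like this (clk/rst and the exact signal names are assumptions):

```systemverilog
// When req rises, grant must be 1 between 2 and 3 cycles later
property p_grant_follows_req;
  @(posedge clk) disable iff (rst)
    $rose(req) |-> ##[2:3] grant;
endproperty
assert property (p_grant_follows_req);

// req must never stay at 1 for more than two consecutive cycles
property p_req_max_two_cycles;
  @(posedge clk) disable iff (rst)
    not (req [*3]);   // fails if req holds for 3 cycles in a row
endproperty
assert property (p_req_max_two_cycles);
```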
Can be used in testbenches or bundled with the RTL to check input correctness
Functional Coverage
Testing whether the testbench is good enough: did we do all the tests that we wanted to do, based on our verification plan? Example:
A crossbar can put in correspondence all inputs with all outputs. Did we try all combinations of inputs/outputs? With functional coverage we can record how many times each input/output combination was tested, and see the results in a report.
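The crossbar example can be recorded with a covergroup and a cross. A minimal sketch, assuming a 4x4 crossbar with select signals named in_sel/out_sel (both names are assumptions):

```systemverilog
// Records which input/output pairings of the crossbar were exercised;
// the report then shows hit counts for all 16 cross bins.
covergroup cg_xbar @(posedge clk);
  cp_in  : coverpoint in_sel  { bins ins[]  = {[0:3]}; }
  cp_out : coverpoint out_sel { bins outs[] = {[0:3]}; }
  x_inout: cross cp_in, cp_out;   // all input/output combinations
endgroup

cg_xbar cg = new();   // instantiate once; samples on every clock edge
```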
Code Coverage
Besides functional coverage, there are other coverage metrics that are tool features and can be used independently of the language. They do not require writing code to enable coverage.
Statement coverage: has every statement in the DUT been executed?
Path coverage: have all paths been followed?
Expression coverage: have all causes for control-flow change been tried?
FSM coverage: has every state in an FSM been visited?
[Figure: two simulation runs through the same control flow.] At the end of all the runs, we find out that one legal path was not exercised. All statements were executed (100% statement coverage), but not all values of the expressions became true.
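The gap between statement and expression coverage can be seen on a small hypothetical example. Suppose the only two runs use (a=1, b=0) and (a=0, b=0):

```systemverilog
// Both branches are taken across the two runs: 100% statement coverage.
// But expression coverage notices that 'b' alone never made the
// condition true, i.e. the case (a=0, b=1) was never exercised.
always_comb begin
  if (a || b)
    y = 1;   // reached in run (a=1, b=0)
  else
    y = 0;   // reached in run (a=0, b=0)
end
```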
Directed tests
Testbenches without randomness, targeting a specific item in the verification plan. Example:
Write into the fifo for 16 consecutive cycles, check that the fifo is full, then read all 16 elements and check that it is empty.
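A minimal sketch of this directed test (DUT port names push/pop/full/empty/din are assumptions, not from the slides):

```systemverilog
// Directed fifo test: fill completely, check full; drain, check empty.
initial begin
  // fill the fifo with 16 writes
  for (int i = 0; i < 16; i++) begin
    @(posedge clk);
    push <= 1; din <= i;
  end
  @(posedge clk) push <= 0;
  assert (full) else $error("fifo not full after 16 writes");

  // drain all 16 elements
  for (int i = 0; i < 16; i++) begin
    @(posedge clk) pop <= 1;
  end
  @(posedge clk) pop <= 0;
  assert (empty) else $error("fifo not empty after 16 reads");
end
```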
If the design is complex enough, it is impossible to cover all features with directed testbenches
Random verification
1) Generate random tests using constrained-random stimulus generation
2) Check for bugs and correct them if there are any
3) Check the coverage values. If they are not satisfying, add constraints and repeat from 1
Note: some directed testbenches might be necessary to cover the corner cases
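The loop above can be sketched in SystemVerilog as follows (all names here are hypothetical: `txn` is an assumed transaction class, `cg` an assumed covergroup instance, `drive_dut` an assumed driver task):

```systemverilog
// Constrained-random loop: randomize, drive, repeat until the
// covergroup reports full coverage. Checkers and assertions flag
// bugs while the stimulus runs.
class txn;
  rand bit [7:0] data;
endclass

initial begin
  txn t = new();
  while (cg.get_coverage() < 100.0) begin
    assert (t.randomize());  // solver honors the constraints
    drive_dut(t);            // hypothetical task applying t to the DUT
  end
end
```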
SystemVerilog
Hardware description and verification language. Superset of Verilog: all Verilog designs work in SystemVerilog. First standardized by IEEE in 2005; revised as IEEE standard 1800-2009; 21 February 2013: IEEE standard 1800-2012.
Download the standard: the holy book of SystemVerilog. The answers to all of your questions are inside. Good for reference, not for learning.
SystemVerilog
RTL subset of the language:
A small superset of the Verilog RTL subset; some constructs have been added to simplify various things.
SystemVerilog
SystemVerilog testbenches can also be used to test VHDL/Verilog RTL models (mixed-language simulation).
Testbenches
Basic model: give inputs, read outputs
[Diagram: inputs generator → RTL model → outputs checker]
The element to test is called DUT (Design Under Test) or DUV (Design Under Verification)
Testbenches
Better structure:
[Diagram: a generator of high-level inputs drives the RTL model; an outputs checker compares the outputs with expected HL outputs]
Testbenches
Often:
[Diagram: a generator of high-level inputs drives both a golden model (Matlab, TLM, timed, untimed, ...) and the RTL model; an outputs checker compares the RTL outputs with the expected HL outputs of the golden model]
The golden model and the RTL must be developed by different teams, otherwise the same errors might end up in both.
SoC Verification
A collection of IPs
Each IP must first be verified at block level; then top-level verification follows
Verification systems for IPs are packaged into VIPs (verification IPs), with drivers, monitors, assertions to check input correctness, high-level models, etc.
A scoreboard keeps track of which tests have been run and of coverage
It is possible to build a chip in which only some components are RTL, the others being golden models
Using VIPs it is easy to build fast, complex models of what surrounds a block or a chip
UVM
Universal Verification Methodology
A methodology on top of SystemVerilog that automates all this
Key focus: reuse
Components are enclosed in agents, containing checkers, monitors, drivers, etc.
A chip can be built by connecting together the different VIPs
We do not cover UVM; it is only suited to complex systems.