
VLSI Testing and Design for Testability 

DFT SCAN INSERTION

Introduction to DFT:

DFT is a technique that makes a design testable after production. It is the extra logic we add to the normal design, during the design process, to support post-production testing. Post-production testing is necessary because the manufacturing process is not 100% error free: defects in the silicon introduce errors into the physical device. Of course a chip will not work as per the specifications if errors are introduced in the production process; the question is how to detect that. Running the full set of functional tests on each of, say, a million manufactured devices is far too time consuming, so a method was needed that convinces us, without running full exhaustive tests on the physical device, that the device has been manufactured correctly. DFT is the answer. It only detects whether a physical device is faulty or not faulty. After the post-production test is done on a device, if it is found faulty, trash it and don't ship it to customers; if it is found to be good, ship it. Since it is a production fault, there is assumed to be no cure, so it is just detection, not even localization of the fault. That is the intended purpose of DFT. For the end customer, the DFT logic present on the device is redundant logic.

To further justify the need for DFT logic, consider a company that needs to deliver one million chips to its customer. If there is no DFT logic in the chip, and it takes, say, 10 seconds to test a physical device (a very kind and liberal figure; in practice it can be much larger), then 10 s x 10^6 devices = 10^7 s, roughly three and a half months of tester time just to test the devices before shipping. DFT is all about reducing those three and a half months to maybe three and a half days. Of course, in practice many testers are employed to test chips in parallel and further reduce the test time.

Example: The purpose of this example is to explain what functional testing, structural testing, functional simulation, test (fault) simulation, and fault models are, and the stages of the design flow with which they are associated.
Problem: Design a multiplexer circuit with the truth table given in Table 1 below, and make the device testable.

We give all possible input values to i0, i1, and sel (i2) and observe the value of Z; if it matches expectations, we can say that the design works. A testbench is written for this purpose, and the design is then simulated using this testbench. This process is called functional verification.
Before the device is sent to production, we try to think about what can possibly go wrong during the production process, and how we will detect it after the device is manufactured. Maybe a short circuit, maybe an open circuit? Anything might be possible during the fabrication of this

multiplexer. There is a need to 'model' the possible faults that might occur during the fabrication process; this is called fault modelling. Let us try to devise a method that will let us detect whether anything went wrong during production with a net, say N2 shown in the figure. One may argue that the same input sequence used to verify the functionality of the design during simulation could be applied to the device after fabrication. Yes, this can be done: the multiplexer will not work as per the truth table if any fault is introduced in the fabrication process, and the fault will be detected. But that is not what we are going to do; we have a different method to identify the fault. This design is too small to appreciate the need for what is explained below, but you will soon be able to appreciate it.
If we are asked to prove that nothing went wrong with the net N2, we must be able to:
(i) Drive N2 to '0' and then observe that N2 is indeed '0', which makes sure that N2 was not accidentally shorted to the power net and is not stuck at logic '1'. We may also say that we are testing for a stuck-at-1 fault at net N2.
(ii) Drive N2 to '1' and then observe that N2 is indeed '1', which makes sure that N2 was not accidentally shorted to the ground net and is not stuck at logic '0'. We may also say that we are testing for a stuck-at-0 fault at net N2.
Stuck-at-0 and stuck-at-1 are called fault models. The ability to drive a net to '0' and '1' is called 'controllability', and the ability to observe the effect of that controlling is called 'observability'.
Now we will write a new testbench that can do the tasks stated above, and re-simulate the design using this new testbench. Let us now try to write such a testbench, which will
(i) detect a stuck-at-1 fault at net N2. For this, the testbench should set sel = '1' and i0 = '0' (i1 can be '0' or '1'). This drives N2 to '0', and Z takes the value of N2. Since this is just a simulation, Z will indeed be sampled as '0'. The same set of input values, when applied to the device after production, will be able to detect a stuck-at-1 fault at net N2. Such sets of input values are also called test patterns. Test patterns are generated before the production of the device, and they are used in a pre-production simulation called fault simulation. If the observed output at Z fails to match the expected output for a particular set of input values, or pattern, we generally say that this pattern has failed. Note that a test pattern will never fail during a pre-production simulation, or fault simulation. These so-called test patterns are re-used after production to run what is called the production test.
(ii) detect a stuck-at-0 fault at net N2, analogously: set sel = '1' and i0 = '1', which drives N2 to '1', and observe Z as '1'.
We have just tried to detect one of the numerous possible faults that might develop during the fabrication process. In practice, the test patterns should target every possible fault in the design. At times it might not be possible to target every possible fault; the ratio of faults detected to the total number of possible faults is called fault coverage. A concrete sketch follows.
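To make this concrete, here is a minimal Python simulation sketch. The gate-level structure assumed for the mux (N1 = AND(i1, NOT sel), N2 = AND(i0, sel), Z = OR(N1, N2)) is not spelled out in the text but is consistent with the values used above; a fault is injected by simply overriding a net's value.

    def mux(i0, i1, sel, stuck=None):
        """Evaluate the mux netlist; stuck=("N2", 1) forces net N2 to 1, etc."""
        nets = {}
        nets["N1"] = i1 & (1 - sel)     # AND(i1, NOT sel)
        nets["N2"] = i0 & sel           # AND(i0, sel)
        if stuck is not None:
            nets[stuck[0]] = stuck[1]   # inject the modelled production defect
        return nets["N1"] | nets["N2"]  # Z = OR(N1, N2)

    # Stuck-at-1 test for N2: drive N2 to '0' (sel=1, i0=0) and observe Z.
    assert mux(i0=0, i1=0, sel=1) == 0                    # fault-free: Z = 0
    assert mux(i0=0, i1=0, sel=1, stuck=("N2", 1)) == 1   # SA1 flips Z: detected

    # Stuck-at-0 test for N2: drive N2 to '1' (sel=1, i0=1) and observe Z.
    assert mux(i0=1, i1=0, sel=1) == 1                    # fault-free: Z = 1
    assert mux(i0=1, i1=0, sel=1, stuck=("N2", 0)) == 0   # SA0 flips Z: detected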

Fault Simulation: A design needs to be evaluated for how many faults are 'detectable' under the chosen fault models. A simulation is performed in which a deliberate error is introduced into the design, and the test patterns are then run to check whether this deliberately introduced error is detected. This process is repeated for each possible fault in the design, in order to evaluate what is called the 'fault coverage' of the design; the process itself is termed 'fault simulation'. At the end of a fault simulation we get the fault coverage, expressed as a percentage:

    fault coverage = (number of faults detected / number of possible faults) x 100%
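Continuing the sketch above, a brute-force fault simulator loops over a fault list, runs every pattern against the good and the faulted netlist, and counts a fault as detected if any pattern's output differs. The pattern set below is illustrative; it happens to detect all four single stuck-at faults on N1 and N2.

    def mux(i0, i1, sel, stuck=None):            # same netlist as above
        nets = {"N1": i1 & (1 - sel), "N2": i0 & sel}
        if stuck is not None:
            nets[stuck[0]] = stuck[1]            # inject the stuck-at fault
        return nets["N1"] | nets["N2"]

    patterns = [(0, 0, 1), (1, 0, 1), (0, 1, 0), (0, 0, 0)]      # (i0, i1, sel)
    faults = [(net, v) for net in ("N1", "N2") for v in (0, 1)]  # SA0 and SA1

    detected = sum(
        any(mux(*p) != mux(*p, stuck=f) for p in patterns)       # some pattern
        for f in faults                                          # exposes f at Z
    )
    print(f"fault coverage = {detected / len(faults) * 100:.0f}%")   # 100% here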

Ad Hoc DFT Guidelines


There is definitely no single methodology which solves all VLSI testing problems; there also is no
single DFT technique which is effective for all kinds of circuits. DFT techniques can largely be
divided into two categories, i.e., ad hoc techniques and structured (systematic) techniques. Some
important guidelines are listed below.

1. Partition large circuits into smaller subcircuits to reduce test costs.


One of the most important steps in designing a testable chip is to first partition the chip in such a way that for each functional module there is an effective DFT technique to test it. Partitioning has to be done at every level of the design process, from architecture to circuit, whether testing is considered or not. Conventionally, when testing is not considered, designers partition their objects to ease management, to speed up turn-around time, to increase performance, and to reduce costs. We stress here that designers should also partition their objects to increase testability.

[Figure residue: subcircuits C1 and C2 are accessed through multiplexers whose select lines S are driven by test signals T1 and T2. The mode table recovered from the figure:]

    Mode      T1  T2
    Normal     0   0
    Test C1    0   1
    Test C2    1   0

Figure 2: Circuit partitioning.

Partitioning can be functional (according to functional module boundaries) or physical (based on circuit topology). In general, either way is good for testing in most cases. Partitioning can be done by using multiplexers (see Fig. 2); a behavioral sketch follows below. Maintaining signal integrity is a basic guideline in partitioning for testability, and it also helps localize faults. After partitioning, each module should be completely testable via its interface.
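As a sketch of how the multiplexers implement the mode table (the T1/T2 encoding is taken from the figure; the subcircuits are stand-in functions, not from the text):

    # Behavioral model of Fig. 2: in normal mode C1 feeds C2; in either test
    # mode the selected subcircuit is driven straight from the primary inputs
    # and observed straight at the primary outputs.
    def partitioned_output(pi, t1, t2, c1, c2):
        if (t1, t2) == (0, 1):     # Test C1: exercise C1 in isolation
            return c1(pi)
        if (t1, t2) == (1, 0):     # Test C2: exercise C2 in isolation
            return c2(pi)
        return c2(c1(pi))          # Normal: the functional interconnection

    inc = lambda x: x + 1          # stand-in for subcircuit C1
    dbl = lambda x: 2 * x          # stand-in for subcircuit C2
    assert partitioned_output(3, 0, 0, inc, dbl) == 8   # normal: dbl(inc(3))
    assert partitioned_output(3, 0, 1, inc, dbl) == 4   # C1 tested alone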

2. Employ test points to enhance controllability & observability.


Another important DFT technique is test point insertion. Test points include control points
(CPs) and observation points (OPs). The former are active test points, while the latter are
passive ones. There are test points which are both CPs and OPs.

[Figure residue: control points CP1-CP4 and an observation point OP inserted at the interfaces of modules C1, C2, and C3; a MUX M shares an output pin for observation.]

Figure 3: Test point insertion.

After partitioning, we still need a mechanism for directly accessing the interfaces between the modules. Test stimuli and responses of the module under test can be made accessible through test points. Test point insertion can be done as illustrated in Fig. 3, and the test points can be accessed via probe pads and extra or shared (multiplexed) input/output pins.
Before exercising tests through test points that are not PIs or POs, we should investigate any additional requirements on the test points raised by the use of the test equipment.
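As an illustration, here is a minimal behavioral sketch of the two classic control-point styles (an OR gate to force a net to 1, an AND gate to force it to 0) and of an observation point; the function names and the logging scheme are illustrative, not from the text.

    # Control points (CPs): extra gates that let the tester force a net.
    def cp_or(net, test):       # OR-type CP: test = 1 forces the net to 1;
        return net | test       # test = 0 leaves normal operation untouched

    def cp_and(net, test_n):    # AND-type CP: test_n = 0 forces the net to 0;
        return net & test_n     # test_n = 1 leaves normal operation untouched

    # Observation point (OP): tap a net out to a probe pad or a shared pin.
    def op(net, observed):
        observed.append(net)    # the tester reads this value directly
        return net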

3. Design circuits to be easily initializable.


This increases predictability. A power-on reset mechanism is the most effective and widely
used approach. A reset pin for all or some modules also is important. Synchronizing or
homing sequences for small finite state machines may be used where appropriate (a famous
example is the JTAG TAP controller--see the chapter on Boundary Scan).

4. Disable internal one-shots (monostables) during test.


This is due to the difficulty for the tester to remain synchronized with the DUT. A monostable
(one-shot) multivibrator produces a pulse of constant duration in response to the rising or
falling transition of the trigger input. It has only one stable state. Its pulse duration is usually
controlled externally by a resistor and a capacitor (with current technology, they also can be
integrated on chip).

One-shots are used mainly for 1) pulse shaping, 2) switch-on delays, 3) switch-off delays, and 4) signal delays. Since a one-shot is not controlled by clocks, synchronization and precise duration control are very difficult, which in turn reduces testability by ATE. Counters and dividers are better candidates for delay control.

5. Disable internal oscillators and clocks during test.


To guarantee tester synchronization, internal oscillator and clock generator circuitry should be
isolated during the test of the functional circuitry. Of course the internal oscillators and clocks
should also be tested separately.

6. Provide logic to break global feedback loops.


Circuits with feedback loops are sequential. Effective sequential ATPG tools are yet to be developed, while combinational ones are relatively mature now. Breaking the feedback loops turns the sequential testing problem into a combinational one, which greatly reduces the testing effort needed in general: test generation becomes feasible, and fault localization becomes much easier. Breaking global feedback loops is especially effective, since otherwise we face the problem of testing a large sequential circuit (such as a CPU), which can frequently be shown to be very hard or even impossible.
Scan techniques and/or the multiplexers used to partition a circuit can also be used to break the feedback loops. The feedback path can be considered as both a CP and an OP.

7. Partition large counters into smaller ones.


Sequential modules with long cycles, such as large counters, dividers, serial comparators, and serial parity checkers, require very long test sequences. For example, a 32-bit counter requires 2^32 clock cycles for full state coverage, which means a test time of roughly seven minutes for that counter alone if a 10 MHz clock is used; the worked numbers appear below. Test points should be used to partition these circuits into smaller ones so that they can be tested separately.
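The arithmetic behind this guideline, as a quick sketch (the 10 MHz clock is from the text; splitting the counter into 8-bit pieces is an assumed partitioning):

    f_test = 10e6                   # 10 MHz test clock

    t_whole = 2**32 / f_test        # one 32-bit counter, exhaustive states
    t_part = 2**8 / f_test          # one 8-bit counter after partitioning

    print(f"32-bit counter: {t_whole:.0f} s (~{t_whole / 60:.1f} min)")  # ~429 s
    print(f"8-bit counter: {t_part * 1e6:.1f} us each")                  # ~25.6 us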

8. Avoid the use of redundant logic.


This has been discussed in the chapters on combinational and sequential ATPG.

9. Keep analog and digital circuits physically apart.


Mixing analog and digital circuits on a single chip is gaining traction as VLSI technologies keep moving forward. Analog circuit testing, however, is very different from digital circuit testing. In fact, what we call testing for analog circuits is really measurement, since analog signals are continuous (as opposed to the discrete logic signals in digital circuits). The two require different test equipment and different test methodologies, and therefore should be tested separately.
To avoid interference and noise penetration, designers know that they should physically isolate the analog circuit layout from the digital one residing on the same chip, with signals communicated via A/D and/or D/A converters. For testing purposes, we require more: the wires communicating between the analog and digital modules should become test points, i.e., we should be able to test the analog and digital parts independently.

10. Avoid the use of asynchronous logic.


Asynchronous circuits are sequential circuits that are not clocked; timing is determined by gate and wire delays. They are usually less expensive and faster than their synchronous counterparts, so some experienced designers like to use them. Their design verification and testing, however, are much harder than for synchronous circuits. Since no clocking is employed, timing is continuous instead of discrete, which makes tester synchronization virtually impossible, and therefore only a functional test on an application board can be used. In almost all cases, high fault coverage cannot be guaranteed within a reasonable test time.

11. Avoid diagnostic ambiguity groups such as wired-OR/wired-AND junctions and high-fanout nodes.

Apart from performance reasons, wired-OR/wired-AND junctions and high-fanout nodes are hard to test (they are part of the reason why ATPG can be so inefficient), so they should be avoided.

12. Consider tester requirements.


Tester requirements such as pin limitation, tristating, timing resolution, speed, memory depth, driving capability, analog/mixed-signal support, internal/boundary scan support, etc., should be considered during the design process to avoid project delays and unnecessary investment in equipment.
The above guidelines come from experienced practitioners. They are not meant to be complete or universal. In fact, these techniques have drawbacks:

• high fault coverage cannot be guaranteed;

• manual test generation is still required;


• design iterations are likely to increase.

Scan Design Approaches


Although we have not formally presented the scan techniques, their purpose and importance have
been discussed in the previous sections, namely,

• they are effective for circuit partitioning;


• they provide controllability and observability of internal state variables for testing;

• they turn the sequential test problem into a combinational one.



There are four major scan approaches that we will discuss in this section, i.e.,

1. MUXed Scan
2. Scan Path
3. LSSD
4. Random Access

1. MUXed Scan
This approach is also called the MUX Scan approach: a MUX is inserted in front of every FF to be placed in the scan chain. It was proposed by M. Williams and Angell at Stanford in 1973, and later adopted by IBM--it is heavily used in IBM products.

[Figure residue: a combinational logic block with primary inputs X and present-state inputs y, producing primary outputs Z and excitation outputs Y, with Y fed back to y through the state register.]

Figure 4: A finite state machine model for sequential circuits.

A popular finite state machine (FSM) model for sequential circuits is shown in Fig. 4, in which X is
the PI vector, Z the PO vector, Y the excitation (next state) vector, and y the present state vector.
The excitation vector is also called the pseudo primary output (PPO) vector, and the present state
vector is also called the pseudo primary input (PPI) vector.

To make elements of the state vector controllable and observable, we add the following items to the
original FSM (see Fig.5):

[Figure residue: the scan cell is a MUX (selecting DI or SI under control of T) feeding a master-slave FF clocked by C; in the modified circuit every FF gets such a MUX, so in test mode all FFs form a shift register from SI to SO alongside the combinational logic C/L with inputs X and outputs Z.]

Figure 5: The Shift-Register Modification approach.

1. a TEST mode pin (T);


2. a SCAN-IN pin (SI);
3. a SCAN-OUT pin (SO);
4. a MUX (switch) in front of each FF (M).

When the test mode pin T=0, the circuit is in normal operation mode; when T=1, it is in test mode
(or shift-register mode). This is clearly shown in Fig.5.
The test procedure using this method is as follows:
(i) Switch to the shift-register (SR) mode (T=1) and check the SR operation by shifting in an alternating sequence of 1s and 0s, e.g., 00110 (a simple functional test).
(ii) Initialize the SR: load the first test pattern from SI.
(iii) Return to the normal mode (T=0), apply the test pattern, and capture the response.
(iv) Switch to the SR mode and shift out the final state from SO while setting the starting state for the next test. Go to step (iii) if there is another test pattern to apply. (A behavioral sketch of this loop follows.)
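Here is a minimal behavioral sketch of the loop formed by steps (ii)-(iv), with the n serial shift cycles collapsed into one list operation per pattern; comb is a stand-in for the combinational logic, mapping (PI vector, state) to (PO vector, next state).

    def run_scan_tests(comb, n, tests):
        """tests: list of (pi, scan_state) pairs. Returns, for each test, the
        PO observed in normal mode plus the state shifted out while the next
        pattern was loaded."""
        chain = ["X"] * n                 # power-up: FF contents unknown
        results = []
        for pi, scan_state in tests:
            shifted_out = chain           # T=1: n shift cycles push the old
            chain = list(scan_state)      # contents out while loading the pattern
            po, chain = comb(pi, chain)   # T=0: apply the pattern, capture response
            results.append((po, shifted_out))
        return results                    # the final capture still sits in the chain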
This approach effectively turns the sequential testing problem into a combinational one, i.e., the
DUT becomes the combinational logic which usually can be fully tested by compact ATPG patterns.
Unfortunately, there are two types of overheads associated with this technique which the designers
care about very much: the hardware overhead (including three extra pins, multiplexers for all FFs,
and extra routing area) and performance overhead (including multiplexer delay and FF delay due to
extra load).

Since test mode and normal mode are exclusive of each other, in test mode the SI pin may be a
redefined input pin, and the SO pin may be a redefined output pin. The redefinition of the pins can
be done by a multiplexer controlled by T. This arrangement is good for a pin-limited design, i.e.,
one whose die size is entirely determined by the pad frame. The actual hardware overhead varies
from circuit to circuit, depending on the percentage of area occupied by the FFs and the routing
condition.

2 Scan Path
This approach is also called the Clock Scan approach: the multiplexing function is implemented by two separate ports controlled by two different clocks instead of a MUX. It was invented by Kobayashi et al. in 1968, reported by Funatsu et al. in 1975, and adopted by NEC. It uses two-port raceless D-FFs: each FF consists of two latches operating in a master-slave fashion and has two clocks (C1 and C2) that control the normal data input (DI) and the scan input (SI) separately. The logic diagram of the two-port raceless D-FF is shown in Fig. 6.

The test procedure of the Clock Scan Approach is the same as the MUX Scan Approach. The
difference is in the scan cell design and control. The MUX has disappeared from the scan cell, and
the FF is redesigned to incorporate the multiplexing function into the register cell. The resulting
two-port raceless D-FF is controlled in the following way:
• Normal mode: C2 = 1 to block SI; C1 = 0 →1 to load DI.
• SR (test) mode: C1 = 1 to block DI; C2 = 0 →1 to load SI.

[Figure residue: the cell has data input DI clocked by C1 and scan input SI clocked by C2; latches L1 and L2 operate in a master-slave arrangement and drive the outputs DO and SO.]

Figure 6: Logic diagram of the two-port raceless D-FF.

This approach is said to achieve a lower hardware overhead (due to dense layout) and less
performance penalty (due to the removal of the MUX in front of the FF) compared to the MUX
Scan Approach. The real figures however depend on the circuit style and technology selected, and
on the physical implementation.
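A behavioral sketch of a chain of such FFs, under the control discipline above (pulse C1 for a parallel normal-mode load, pulse C2 for a one-bit shift); class and method names are illustrative.

    class ClockScanChain:
        """Chain of two-port raceless D-FFs; master-slave internals abstracted."""
        def __init__(self, n):
            self.q = [0] * n

        def pulse_c1(self, di):           # normal mode: C2 held at 1 blocks SI;
            self.q = list(di)             # a C1 pulse loads every FF's DI

        def pulse_c2(self, si):           # SR mode: C1 held at 1 blocks DI;
            so = self.q[-1]               # a C2 pulse shifts the chain one bit:
            self.q = [si] + self.q[:-1]   # SI enters the first FF and the last
            return so                     # FF's old content appears at SO

    chain = ClockScanChain(4)
    for bit in (1, 0, 1, 1):
        chain.pulse_c2(bit)               # shift a 4-bit test state in serially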

3 Level-Sensitive Scan Design (LSSD)


This approach was introduced by Eichelberger and T. Williams in 1977 and 1978. It is a latch-based
design used at IBM, which guarantees race- and hazard-free system operation as well as testing, i.e.,
it is insensitive to component timing variations such as rise time, fall time, and delay. It also is
claimed to be faster and have a lower hardware complexity than SR modification. It uses two
latches (one for normal operation and one for scan) and three clocks. Furthermore, to enjoy the
luxury of race- and hazard-free system operation and test, the designer has to follow a set of
complicated design rules (to be discussed later), which kill nine designers out of ten.

Definition 3
A logic circuit is level sensitive (LS) iff the steady-state response to any allowed input change is independent of the delays within the circuit. Also, the response is independent of the order in which the inputs change.

[Figure residue: a polarity-hold latch with data input D, clock C, and output +L. Its behavior, recovered from the figure:]

    C  D  |  +L
    0  0  |  L  (hold)
    0  1  |  L  (hold)
    1  0  |  0
    1  1  |  1

Figure 7: A polarity-hold latch.

[Figure residue: the SRL consists of latch L1, with data input DI clocked by C and scan input SI clocked by A, feeding latch L2, which is clocked by B; +L1 and +L2 are the latch outputs.]

Figure 8: The polarity-hold shift-register latch (SRL).

LSSD requires that the circuit be LS, so we need LS memory elements as defined above. Fig. 7 shows an LS polarity-hold latch. The correct change of the latch output (L) does not depend on the rise/fall time of C, but only on C being '1' for a period of time greater than or equal to the data propagation and stabilization time. Fig. 8 shows the polarity-hold shift-register latch (SRL) used in LSSD as the scan cell.
The scan cell is controlled in the following way:

• Normal mode: A=B=0, C=0 → 1.


• SR (test) mode: C=0; pulse A then B (AB = 10 → 01, nonoverlapping) to shift SI through L1 and L2.
The SRL has to be polarity-hold, hazard-free, and level-sensitive. To be race-free, clocks C and B, as well as A and B, must be nonoverlapping. This design (similar to Scan Path) avoids the performance degradation introduced by the MUX in shift-register modification. If pin count is a concern, we can replace B with NOR(A, C), i.e., the complement of A + C.
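A behavioral sketch of one SRL under the A/B/C clocking just described (latches are modelled as transparent while their clock is high; names are illustrative):

    class SRL:
        """Polarity-hold SRL: L1 takes DI under C or SI under A; L2 copies L1 under B."""
        def __init__(self):
            self.l1 = self.l2 = 0

        def apply(self, di, si, a, b, c):
            if c: self.l1 = di       # system clock C gates DI into L1 (normal mode)
            if a: self.l1 = si       # shift clock A gates SI into L1 (SR mode)
            if b: self.l2 = self.l1  # shift clock B copies L1 into L2

    # SR mode: C = 0; nonoverlapping A then B pulses move a scan bit through the cell.
    cell = SRL()
    cell.apply(di=0, si=1, a=1, b=0, c=0)   # A pulse: SI -> L1
    cell.apply(di=0, si=1, a=0, b=1, c=0)   # B pulse: L1 -> L2 (feeds the next SRL)
    assert cell.l2 == 1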

[Figure residue: the figure contrasts double-latch LSSD, in which both L1 and L2 lie in the system path, with single-latch LSSD, in which only L1 holds system state and L2 is used for scan only.]

Figure 9: LSSD structures.

LSSD design rules are summarized as follows:

1. Internal storage elements must be polarity-hold latches.

2. Latches can be controlled by two or more nonoverlapping clocks that satisfy:
   (a) A latch X may feed the data port of another latch Y iff the clock that sets the data into Y does not clock X.
   (b) A latch X may gate a clock C to produce a gated clock Cg, which drives another latch Y, iff Cg, or any other clock C'g produced from Cg, does not clock X.

3. There must exist a set of clock primary inputs from which the clock inputs to all SRLs are controlled, either through (1) a single-clock distribution tree or (2) logic that is gated by SRLs and/or nonclock primary inputs. In addition, the following conditions must hold:
   (a) All clock inputs to SRLs must be OFF when the clock PIs are OFF.
   (b) Any SRL clock input must be controlled from one or more clock PIs.
   (c) No clock can be ANDed with either the true or the complement of another clock.

4. Clock PIs cannot feed the data inputs to latches, either directly or through combinational logic.

5. Every system latch must be part of an SRL; each SRL must be part of some scan chain.

6. A scan state exists under the following conditions:
   (a) Each SRL or scan-out PO is a function of only the preceding SRL or scan-in PI in its scan chain during the scan operation.
   (b) All clocks except the shift clocks are disabled at the SRL inputs.
   (c) Any shift clock to an SRL can be turned ON or OFF by changing the corresponding clock PI.

• A network that satisfies rules 1-4 is level-sensitive.

• Race-free operation is guaranteed by rules 2(a) and 4.

• Rule 3 allows a tester to turn off the system clocks and use the shift clocks to force data into and out of the scan chain.

• Rules 5 and 6 are used to support scan.


The advantages associated with LSSD are:

1. Correct operation independent of AC characteristics is guaranteed.


2. FSM is reduced to combinational logic as far as testing is concerned.
3. Hazards and races are eliminated, which simplifies test generation and fault simulation.

There however are problems with LSSD (or previously discussed scan approaches):
1. Complex design rules are imposed on designers--no freedom to vary from the overall schemes,
and higher design and hardware costs (4-20% more hardware and 4 extra pins).
2. No asynchronous designs are allowed.

3. Sequential routing of latches can introduce irregular structures.


4. Faults that change the combinational function into a sequential one may cause trouble, e.g., bridging and CMOS stuck-open faults.

5. The function being tested has been changed into a quite different combinational one, so a specification language will not be of any help.
6. Test application becomes a slow process, and normal-speed testing of the entire test sequence
is impossible.
7. It is not good for memory intensive designs.

4 Random Access
This approach uses addressable latches whose addressing scheme is similar to high-density memory
addressing, i.e., an address decoder is needed. It provides random access to FFs via
multiplexing--address selection. The approach was developed by Fujitsu [Ando, 1980], and was
used by Fujitsu, Amdahl, and TI. Its overall structure is shown in Fig. 10.

[Figure residue: scan latches arranged in an array beside the combinational logic C/L (inputs X, outputs Z); an address decoder selects which latch SI and the clock C reach.]

Figure 10: The Random Access structure and its cell design.
The difference between this approach and the previous ones is that the state vector can now be
accessed in a random sequence. Since neighboring patterns can be arranged so that they differ in
only a few bits, and only a few response bits need to be observed, the test application time can be
reduced. Also, it has minimal impact on the normal paths, so the performance penalty is minimized.
[Figure residue: the addressable scan cell latches DI under clock CK1 and SI under CK2 gated by the address line, with C = CK1 & CK2; the addressed latch's output is read out at SO.]

Another advantage of this approach is that it provides the ability to 'watch' a node in normal operation mode, which is impossible with previous scan methods. The major disadvantage of the
approach is that it needs an address decoder, thus the hardware overhead (chip area and pin count)
is high.
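A behavioral sketch of the addressable-latch idea (the decoder is modelled as plain list indexing; names are illustrative):

    class RandomAccessScan:
        """Scan latches selected individually through an address decoder."""
        def __init__(self, n):
            self.latch = [0] * n

        def write(self, addr, bit):   # the decoder routes SI and the scan
            self.latch[addr] = bit    # clock to the addressed latch only

        def read(self, addr):         # the addressed latch drives SO
            return self.latch[addr]

    ras = RandomAccessScan(8)
    ras.write(3, 1)                   # between two similar patterns, touch only
    assert ras.read(3) == 1           # the bits that differ -- no serial shifting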
In summary, for all scan techniques: 1) test patterns still need to be computed by ATPG; 2) test patterns must be stored externally, and responses must be stored and evaluated, so large (non-portable) test fixtures are still required. For these reasons, there is growing interest in built-in self-test (BIST).

Scan Chain Operation for Stuck-at Test:

Here is an example design under test (DUT). I have shown a single scan chain (in red) in the circuit, with Scan In and Scan Out ports. Assume that all scan flip-flops are controlled by the Scan Enable signal.

The first thing we should do is to put the scan flip‐flops into scan mode. We do this by using the
Scan Enable signal. In this case, forcing Scan Enable to 1 enables the scan mode.
Note that initially all the scan flops are at an unknown state (X). For industrial circuits, there are architectural ways to initialize all flip-flops to known states if needed. However, for this particular case, assume that all scan flops are initially at the unknown state X.
We want to scan in the following vector: 100101011

And we start scanning in the test vector we want to apply. In the figure above, you see that the first 3 bits are scanned in. We shift in a single bit at each clock cycle. Usually, the scan shift frequency is very slow, much lower than the functional frequency of the circuit; it is currently about 100 MHz for most ASIC circuits. AMD uses a 400 MHz shift frequency, which is a pretty high value for that purpose. Of course, the higher the test frequency, the shorter the test time.

At this point, we have shifted in the complete test vector '100101011'. We are done with shifting in and will disable scan mode by forcing Scan Enable to 0.
Note that the shifted-in test vector is now applied to the combinational logic blocks that are driven by the scan flip-flops. This means the 2nd, 3rd, and 4th combinational logic blocks already have their test inputs forced.

The next step is to force primary input (PI) values and measure the primary output (PO) values:
force_PI and measure_PO.
Note that, from the previous step, the shifted-in test vector was already applied to the combinational logic blocks driven by the scan flip-flops; the 2nd, 3rd, and 4th blocks already had their test inputs forced. Now these combinational logic blocks have generated their outputs.
Since we forced values to PI, the 1st combinational block also has its outputs ready. Furthermore,
the outputs of the 4th combinational block can now be observed from POs. We will get the output
values of combinational block 4 by measuring POs.
For the rest of the combinational blocks (1,2, and 3), we need to push the output values into scan
flip‐flops and then shift these values out.

In order to push the output values of combinational blocks 1, 2, and 3 into the scan flip-flops, we have to toggle the system clock. Once we toggle the system clock, all D flip-flops (scan flip-flops) will capture the values at their D inputs. The capture event is shown in the figure above.

Now, we are ready to shift out the captured combinational logic responses. However, while doing that, we will also shift in the next test vector, '111100111'.
Note that we have set the Scan Enable signal back to 1 to enable shifting.

Here is a snapshot of the shift operation. As you can see, we have shifted out 4 bits of the previous test response and at the same time shifted in 4 bits of the new test vector. The new test vector bits are shown in bold red in the figure above.

At this point, we have completely scanned out (shifted out) the test response for the previous test vector, and also scanned in (shifted in) the new test vector.
The process continues in this way until all the test vectors are applied.
Note: Earlier I mentioned force_PI and measure_PO. Actually, for industrial circuits, force_PI and measure_PO are not done, because the primary inputs and outputs are connected to very slow pads, and these pads are not tested by the structural test. You may realize that in this case the 1st and 4th combinational blocks cannot be tested: the 1st block because we cannot apply inputs to it (force_PI), and the 4th block because we cannot check its output (measure_PO). This is usually not a problem because the circuits are surrounded by wrapper scan flip-flops, meaning there is actually no logic before the first level of scan flip-flops or after the last level. So the complete DUT is covered by scan flip-flops.
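The whole session condenses into a few lines of behavioral Python. The 9-bit chain and the two vectors are from the walkthrough above; the 'combinational response' is a stand-in inversion, since the real DUT logic is not given.

    def shift(chain, bits):
        """Scan Enable = 1: serial shift. Returns the bits falling out of SO."""
        out = []
        for b in bits:
            out.append(chain[-1])           # the last flop drives Scan Out
            chain[:] = [b] + chain[:-1]     # one bit per (slow) shift clock
        return out

    chain = ["X"] * 9                               # power-up: contents unknown
    shift(chain, [int(b) for b in "100101011"])     # scan in test vector 1

    # Scan Enable = 0: one system-clock pulse; every flop captures the response
    # of the combinational logic at its D input (stand-in here: inversion).
    chain = [1 - b for b in chain]

    # Scan Enable = 1: shift out the captured response while scanning in
    # the next vector, '111100111'.
    response = shift(chain, [int(b) for b in "111100111"])
    print(response)   # compared on the tester against the expected good response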
