
Microcontroller test Pattern Generator:


assumeas

deterministic cbist

Introduction
Very Large Scale Integration (VLSI) has had a dramatic impact on the growth of digital technology. VLSI has not only reduced the size and cost of circuits but also increased their complexity, bringing significant improvements in performance. These welcome improvements have resulted in significant performance/cost advantages in VLSI-implemented systems. There are, however, potential problems which may retard the effective use and growth of future VLSI technology. Among these is the problem of circuit testing, which becomes increasingly difficult as the scale of integration grows.
Because of the high device counts and limited input/output access that characterize VLSI circuits, conventional testing approaches are often ineffective and insufficient. Automatic test pattern generation for sequential circuits is not feasible even for many LSI circuits. Thus, design-for-testability techniques such as serial scan must be employed, as stated in the previous chapter. But for VLSI circuits, such techniques still involve large amounts of test pattern generation and simulation effort, huge volumes of test input/output data, and excessive testing times. Therefore, alternatives to test methodologies that rely on test pattern generation and externally applied test patterns are essential to the continued growth of the VLSI industry.
For any such alternative, the following goals are desirable: high and easily verifiable fault coverage, minimum test pattern generation, minimum performance degradation, at-speed testing, short testing time, and reasonable hardware overhead. Built-In Self-Test (BIST) provides a feasible solution to the above demands. First, BIST significantly reduces off-chip communication to overcome the bottleneck caused by the limited input/output access. Further, it eliminates much of the test pattern generation and simulation process. Testing time can be shortened by testing multiple units simultaneously through test scheduling. Hardware overhead can be minimized by careful design and by sharing test hardware.

Added after 1 minutes:

The VLSI Testing Problem
VLSI circuits are characterized by high device counts, limited input/output (I/O) access, and sequential behavior. These characteristics are responsible for the difficulties in testing such circuits. The high device count increases the complexity of test generation and fault simulation. The limited I/O access greatly decreases the controllability and observability of the internal circuitry. The sequential behavior requires sequential test pattern generation, whose automation remains a major unsolved problem in the testing area.
High device count is the most prominent feature of VLSI. Typically, a VLSI chip contains hundreds of thousands of devices; with deep submicron technologies, the device count is pushed well past the one-million mark. This high device count has an immediate impact on test pattern generation and fault simulation. Even for the much simpler combinational circuits, it has been observed that the computer run time for test generation and fault simulation is approximately proportional to the number of logic gates raised to the third power [Will82]. The high device count also affects test pattern storage and testing time. A reasonable assumption is that both the number of test vectors and the width of each vector are linearly proportional to the circuit size. Hence, testing time and test pattern storage are proportional to the square of the circuit size.

Figure 7.1 Gate/Pin ratio in the development of IC technologies
Limited I/O access, although perhaps not as significant as high device count, still contributes to testing problems. The consequence of limited I/O access is low testability in terms of both controllability and observability. The testability of a chip can be roughly estimated by its gate-to-pin ratio, i.e., the ratio between the number of gates and the number of interface pins. Thus, producing tests for VLSI circuits is likely to be difficult due to poor testability. Figure 7.1 shows the device counts, pin counts, and gate-to-pin ratios over the development of IC technologies. The higher the ratio, the lower the testability.
Built-in self-test (BIST) significantly reduces off-chip communication by accommodating test generation and response evaluation hardware on the chip. Therefore, the limited I/O access constraint is eased. Well-organized BIST also partitions the circuit into pieces of moderate size to reduce the complexity of test generation and fault simulation. In fact, many built-in self-test approaches avoid either test pattern generation, fault simulation, or both. It is also easier to schedule simultaneous testing of multiple blocks by using BIST rather than off-chip testing, thus providing potential for reducing the testing time.

Added after 3 minutes:


Figure 7.13 Cone Segmentation for pseudoexhaustive testing
To generate pseudoexhaustive tests for the circuit in Figure 7.13, we can use an LFSR and a shift register as shown in Figure 7.14 [Barzilai 1983]. The length of the LFSR is usually greater than the size of the largest cone, and usually at least two seeds are required. The number of test patterns generated is near minimal when the size of each cone is much smaller than the total number of inputs. Such a structure has minimal hardware overhead and is also compatible with the DFT structure: if the LFSR has a shift mode, the seeds can be shifted in through the scan chain, and the test responses of other modules can be shifted in for compression. A simple way to determine the length of the LFSR is by examining the spans of the cones. The length of the LFSR is set equal to the largest span, say K. As a result, all cones with span no greater than K receive exhaustive patterns once the full pattern sequence is applied.
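The behavior of such a pattern generator can be sketched in software. The following is a minimal sketch (not the circuit from the figure): a 4-bit Galois LFSR with an assumed primitive feedback mask, showing that it cycles through all 15 nonzero states, so any cone of up to 4 inputs fed from distinct stages sees every nonzero input combination. The all-zero pattern never occurs, which is one reason extra seeds or logic are needed in practice.

```python
def lfsr_states(mask=0b1100, width=4, seed=1):
    # Galois (internal-XOR) LFSR. mask 0b1100 corresponds to an assumed
    # primitive polynomial of degree 4, giving the maximal period 2^4 - 1.
    state, states = seed, []
    while True:
        states.append(state)
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= mask
        if state == seed:        # returned to the seed: one full period
            return states

patterns = lfsr_states()
# 15 distinct states: every nonzero 4-bit pattern appears exactly once.
```

Because the period is 2^width - 1, a cone whose inputs tap `width` distinct stages is tested pseudoexhaustively (all patterns except all-zeros).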

Figure 7.14 LFSR+SR for pseudoexhaustive testing
Another approach to pseudoexhaustive testing is to partition the circuit with multiplexers as shown in Figure 7.15. In normal mode, the subcircuit under test accepts the normal input data, while in BIST mode the pattern generated by the LFSR is delivered to the subcircuit via the multiplexer. The responses are compressed by a signature analyzer; signature analyzers are discussed in detail in the next section. With such a design, the test length is minimized. The drawback is the hardware overhead incurred by the multiplexers and the routing area for the wires that deliver the test patterns.

Figure 7.15 Pseudoexhaustive via multiplexer partitioning.
Pseudorandom Testing
Pseudorandom testing applies a certain number of test patterns that satisfy randomness properties, although the sequence itself is applied in a deterministic order. The fault coverage is determined by the test length and the contents of the patterns. For random patterns, fault coverage vs. test length follows the typical exponential curve shown in Figure 7.16: the longer the test length, the higher the fault coverage. Theoretically, it takes infinite time to reach 100% fault coverage. A more precise analysis has been done by Savir and Bardell [Savir, Bardell 1994], where the test length is bounded by the following equations.
N_L = ceil[ ln(1 - (1 - e)^(1/k)) / ln(1 - p) ],  N_U = ceil[ ln(e/k) / ln(1 - p) ]    (7-4)

N_L and N_U are the lower and upper bounds of the test length. e is the escape probability threshold; it corresponds to a confidence level of at least 1 - e. p is the detection probability of the hard-to-detect faults, and k is the number of hard-to-detect faults. For example, for p of 10^-5, e of 0.001, and k of 10, the test length is between (920980, 921030). If k is 50, the test length is between (1081923, 1081973). Beyond test length, there are random-pattern-resistant faults that are difficult to detect with random patterns. For example, the stuck-at-0 fault of the adder tree shown in Figure 7.17 requires the pattern (111...1) for detection, so it is unlikely to be detected by random patterns. With random-pattern-resistant faults, some modification is needed to improve the detection probability. Test pattern generation for pseudorandom testing is the simplest: either of the circuits in Figures 7.12 and 7.14 can generate the desired patterns. [Savir 1984], [Williams 1985], [Wagner 1987].
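The test-length bounds can be evaluated numerically. The sketch below assumes the standard Savir-Bardell forms N_U = ceil(ln(e/k)/ln(1-p)) (union bound) and N_L = ceil(ln(1-(1-e)^(1/k))/ln(1-p)); with p = 10^-5, e = 0.001, k = 10 it reproduces the interval quoted above to within rounding.

```python
import math

def pseudorandom_test_length(p, e, k):
    """Bounds on pseudorandom test length for k hard-to-detect faults,
    each with detection probability p, given escape probability at most e.
    log1p(-p) computes ln(1 - p) accurately for tiny p."""
    n_hi = math.ceil(math.log(e / k) / math.log1p(-p))
    n_lo = math.ceil(math.log(1 - (1 - e) ** (1 / k)) / math.log1p(-p))
    return n_lo, n_hi

lo, hi = pseudorandom_test_length(p=1e-5, e=0.001, k=10)
```

Note how weakly the bounds depend on k: going from k = 10 to k = 50 adds only about 160,000 patterns, because k enters through a logarithm.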

Figure 7.16 Fault coverage vs. test length for pseudorandom testing.

Figure 7.17 Example of random pattern resistant faults
7.3.5 Weighted Pseudorandom Testing
Weighted pseudorandom testing applies pseudorandom patterns with a biased distribution of 0s and 1s to deal with random-pattern-resistant faults. It is a hybrid technique between pseudorandom testing and the stored-pattern approach. In weighted pseudorandom testing, the weights must be selected such that the test patterns for hard-to-detect faults are more likely to occur. One can use software to determine a single weight or multiple weights based on a probability analysis of the hard-to-detect faults. For instance, if the weight of the pseudorandom patterns for the s-a-0 fault in Figure 7.17 is chosen as 0.9, the desired pattern of (111...1) is more likely to occur [Schnurmann 1975], [Chin 1984], [Wunderlich 1987].
The test pattern generator for weighted pseudorandom testing can be realized in two ways. First, it can be built from an LFSR and a few logic gates as shown in Figure 7.18(a). As we know, an LFSR generates patterns with equal probabilities of 1s and 0s. If a 3-input AND gate is used, the probability of a 1 becomes 0.125; if a 2-input OR gate is used, the probability becomes 0.75. Second, one can use cellular automata to produce patterns of the desired weights as shown in Figure 7.18(b). For cellular automata, the selection and arrangement of the next-state function, Fca, produce patterns of different weights.

Figure 7.18 The generation of weighted pseudorandom patterns.
Test Strategies Comparison
As mentioned earlier, the considerations for deploying a BIST methodology are fault coverage, hardware overhead, test time overhead, and design effort. These four considerations have a very complicated relationship. For instance, exhaustive testing has the highest fault coverage, but the test time can be very long. Pseudoexhaustive testing is a good compromise between test time and test hardware overhead; however, the design effort can be significant. Table 7.1 lists the characteristics of the test strategies mentioned earlier. In terms of fault coverage, exhaustive and pseudoexhaustive testing have the highest coverage. In terms of hardware overhead, pseudorandom testing is the lowest. For test time, the stored-pattern approach is the shortest, while pseudoexhaustive testing requires a significant amount of design effort.
Table 7.1 Comparison of different test strategies.

BIST Response Compression and Analysis
The response analyzer compresses a very long test response into a single word called a signature. The signature is then compared with a prestored golden signature obtained from the fault-free responses using the same compression mechanism. If the signature matches the golden copy, the CUT is regarded as fault-free; otherwise, it is faulty. In this section, we will study the following response analysis methods: ones count, transition count, syndrome count, and signature analysis. As mentioned earlier, there is a stored-pattern approach which stores test patterns and responses in advance; its response analysis is done by one-to-one comparison against the prestored fault-free responses. Since that method is very straightforward, we will not discuss it further.
Compression is a function which maps a large input space (the responses) into a small output space (signatures). It is a many-to-one mapping. Therefore, a faulty response may have the same signature as the fault-free one. Such a situation is referred to as aliasing. The aliasing probability is the probability that a faulty response is treated as fault-free. It is defined as follows.
P(aliasing) = (number of faulty responses that map to the fault-free signature) / (total number of faulty responses)    (7-5)
The aliasing probability is the major consideration in response analysis. Due to the many-to-one mapping property of compression, diagnosis after compression is impractical; the diagnosis resolution is very poor. In addition to the aliasing probability, hardware overhead and hardware compatibility are also important issues. Here, hardware compatibility refers to how well the BIST hardware can be incorporated into the CUT or the DFT structure.
Ones Count
Ones count counts the number of ones in the output sequence; the signature is simply that count. It is an intuitive method to compress a long output sequence into a single word. Figure 7.19 shows the ones-count test structure for a single-output CUT. The pattern generator can be any of the techniques in Section 7.3. For multiple-output CUTs, one can use a counter for each output or process one output at a time with the same input sequence. The aliasing probability is derived as follows. Let m be the test length and r the number of ones in the fault-free output sequence. The aliasing probability is:
P_al = (C(m, r) - 1) / (2^m - 1)    (7-6)
Here, the denominator is the total number of faulty output sequences: 2^m is the total number of output sequences of length m, and only one of them is fault-free. The numerator is the number of faulty sequences that have r ones, the same as the fault-free sequence. From the above equation, we see that when r equals one half of m, the aliasing probability is the largest, and when r = 0 or r = m, the aliasing probability is 0. From the compression method, we also know that the input test sequence can be permuted without changing the count.
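Equation (7-6), written as P_al = (C(m, r) - 1) / (2^m - 1), is easy to evaluate and check against the limiting cases just described:

```python
from math import comb

def ones_count_aliasing(m, r):
    # Fraction of the 2^m - 1 faulty m-bit sequences that share the
    # fault-free signature (same number of ones, r).
    return (comb(m, r) - 1) / (2 ** m - 1)

worst = max(ones_count_aliasing(16, r) for r in range(17))
# Worst case occurs at r = m/2; r = 0 or r = m alias with probability 0.
```

For m = 16 the worst case is (C(16, 8) - 1) / (2^16 - 1), roughly 0.196, which illustrates why ones count alone is a weak compressor for balanced responses.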

Figure 7.19 Ones count compression circuit structure

Transition Count
Transition count compression is very similar to ones count compression. Instead of counting the number of ones, it counts the number of transitions, zero-to-one and/or one-to-zero. Figure 7.20 shows the circuit structure for the transition count. The aliasing probability of transition count compression is:
P_al = (2 * C(m-1, r) - 1) / (2^m - 1)    (7-7)
Similarly, the denominator is the total number of faulty output sequences for a test length of m. The numerator is the number of faulty sequences that have r transitions. Note that, for a test length of m, there are at most m-1 transitions; C(m-1, r) counts the ways to place r transitions, and since the first output bit can be either one or zero, the total must be multiplied by 2. Again, only one of these sequences is fault-free.
As with ones count, r near m/2 gives the highest aliasing probability. However, when r = 0 or r = m-1 the aliasing probability is not zero; it is 1/(2^m - 1), which is still very close to zero. Unlike ones count, the input sequence cannot be permuted: if permuted, the number of transitions changes as well. On the other hand, one can reorder the test sequence to maximize or minimize the transitions and hence minimize the aliasing probability. Note that, if all the test patterns with output 0 are applied before those with output 1, the number of transitions is only 1. As a result, the aliasing probability is almost zero and the hardware overhead is also minimized; here, only a one-bit counter is required.
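The transition-counting rule and equation (7-7), written as P_al = (2*C(m-1, r) - 1) / (2^m - 1), can be sketched as:

```python
from math import comb

def transitions(seq):
    # Number of 0->1 and 1->0 transitions in an output sequence.
    return sum(a != b for a, b in zip(seq, seq[1:]))

def transition_count_aliasing(m, r):
    # 2 * C(m-1, r) sequences of length m have exactly r transitions
    # (factor 2: the first bit may be 0 or 1); one of them is fault-free.
    return (2 * comb(m - 1, r) - 1) / (2 ** m - 1)
```

The counting argument behind the numerator can itself be verified by enumerating all short sequences and tallying those with a given number of transitions.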

Figure 7.20 Transition count compression circuit structure
Syndrome Testing
The syndrome is defined as the probability of ones in the output sequence. The syndrome is 1/8 for a 3-input AND gate and 7/8 for a 3-input OR gate if the inputs have equal probabilities of ones and zeros. Figure 7.21 shows a BIST circuit structure for the syndrome count. It is very similar to ones count and transition count; the difference is that the final count is divided by the number of patterns applied. The most distinctive feature of syndrome testing is that the syndrome is independent of the implementation. It is determined solely by the function of the circuit.

Figure 7.21 Syndrome testing circuit structure
The original design of syndrome testing applies exhaustive patterns. Hence, the syndrome is S = K/2^n, where n is the number of inputs and K is the number of minterms. A circuit is syndrome testable if all single stuck-at faults are syndrome detectable. The interesting part of syndrome testing is that any function can be designed to be syndrome testable. There has been much research on syndrome testing; please refer to [Savir 1980] and [Barzilai 1981] for further details.
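Because the syndrome S = K/2^n depends only on the function's truth table, it can be computed by exhaustive enumeration, independent of any gate-level implementation. A minimal sketch:

```python
from itertools import product

def syndrome(fn, n):
    # Syndrome = (number of minterms K) / 2^n: the fraction of the 2^n
    # input patterns for which the combinational function outputs 1.
    ones = sum(fn(*bits) for bits in product((0, 1), repeat=n))
    return ones / 2 ** n

and3 = lambda a, b, c: a & b & c   # K = 1 minterm
or3  = lambda a, b, c: a | b | c   # K = 7 minterms
```

Any two implementations of the same function (say, a | b | c versus its NOR-based equivalent) necessarily have the same syndrome, which is what makes the measure implementation-independent.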
Signature Analysis
Signature analysis is a compression technique based on the LFSR discussed in the previous section. The circuit structure for signature analysis is shown in Figure 7.22. Mathematically, the output sequence (viewed as a polynomial) is divided by the characteristic polynomial, and the remainder of the division is the signature. The example shown in Figure 7.9 can also be regarded as an example of signature analysis: the input sequence (110110110) is compressed into the signature (1101), the remainder. For an output sequence of length m, there are a total of 2^m - 1 faulty sequences. Suppose that we represent the input sequence P(x) as
P(x) = Q(x)G(x) + R(x)    (7-8)
G(x) is the characteristic polynomial, Q(x) is the quotient, and R(x) is the remainder, or signature. For the aliasing faulty sequences, the remainder R(x) is the same as the fault-free one. Since P(x) is of order m and G(x) is of order n, Q(x) has order m-n; hence there are 2^(m-n) possible Q(x), i.e., 2^(m-n) sequences P(x) with the same signature, one of which is fault-free. Therefore, the aliasing probability is as follows.
P_al = (2^(m-n) - 1) / (2^m - 1) ≈ 2^(-n)    (7-9)
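The division P(x) = Q(x)G(x) + R(x) over GF(2) can be sketched directly with integer bit operations (bit i holds the coefficient of x^i). The characteristic polynomial below, G(x) = x^4 + x + 1, is an assumed example; with it, the 9-bit stream (110110110) does compress to the signature (1101).

```python
def gf2_divmod(p, g):
    """Divide polynomial p by g over GF(2). Returns (quotient, remainder);
    the remainder is the signature."""
    q = 0
    dg = g.bit_length() - 1
    while p and p.bit_length() - 1 >= dg:
        shift = p.bit_length() - 1 - dg
        q |= 1 << shift
        p ^= g << shift          # XOR is subtraction over GF(2)
    return q, p

def gf2_mul(a, b):
    # Carry-less (GF(2)) polynomial multiplication, to check P = Q*G + R.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

G = 0b10011                      # assumed G(x) = x^4 + x + 1
response = 0b110110110           # response stream as a polynomial
quotient, signature = gf2_divmod(response, G)
```

Since the remainder has fewer bits than G(x), the signature always fits in an n-bit register, which is exactly the LFSR of Figure 7.22 viewed algebraically.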

Figure 7.22 Signature analysis circuit structure

Figure 7.23 MISR - Multiple-input signature register
Unlike the previous methods, the aliasing probability of signature analysis is independent of the test responses. The aliasing probability can be reduced by increasing the length of the LFSR. From the properties of polynomials over GF(2), signature analysis by LFSR has the following properties. First, an LFSR whose characteristic polynomial has two or more nonzero terms detects any single-bit error. Second, an LFSR with a primitive characteristic polynomial detects any double error separated by fewer than 2^n - 1 positions. Third, an LFSR of length n detects all burst errors of length up to n. Figure 7.22 shows the hardware structure for a single-output LFSR. For multiple-output circuits, one need not use multiple LFSRs or compress the responses one output at a time. Instead, a multiple-input signature register, or MISR, can be used. Figure 7.23 shows the circuit structure of two MISRs based on the LFSR in Figure 7.3; the multiple input bits enter from the top of the MISRs. MISRs share the same properties as LFSRs for single-input signature analysis.
7.4.5 Space Compression
So far, we have presented several techniques that compress a long test sequence into a single-word signature for verification. This can be regarded as compression in the time domain. Here, we discuss space compression, a technique for handling circuits with many outputs. With many outputs, taking signature analysis using a MISR as an example, the MISR would be very long, and the hardware overhead can be excessive.
One can use XOR gates to combine two or more output pins into a single output before the time compression. To minimize the aliasing probability, error control coding techniques can be used. Figure 7.24 shows space compression using a 16-bit SEC-DED (single error correction, double error detection) code. Here, 16 outputs are compressed into only 5 outputs. The architecture combining this with time compression is shown in Figure 7.25, where TC (time compression) can be thought of as an LFSR or MISR.
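The idea of coding-based space compaction can be sketched with Hamming-style parity groups (an assumed grouping, not necessarily the one in Figure 7.24): check bit j XORs together the outputs whose 1-based position has bit j set. Every position is covered by at least one group, so any single erroneous output flips at least one check bit.

```python
def compress(outputs):
    """Compact 16 CUT output bits into 5 XOR check bits.
    Check j covers output positions whose index (1..16) has bit j set."""
    assert len(outputs) == 16
    checks = [0] * 5
    for pos, bit in enumerate(outputs, start=1):
        for j in range(5):
            if (pos >> j) & 1:
                checks[j] ^= bit
    return checks
```

The 5 compacted lines would then feed the time compressor (TC) in place of the original 16, shortening the MISR at the cost of the XOR network.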

Figure 7.24 Space compression using a 16-bit SEC-DED code

Figure 7.25 Space and time compression architecture.
BIST Architecture
Having described the BIST fundamentals, in this section we focus on BIST architectures. Since LFSRs and MISRs are compatible with scan DFT and are overwhelmingly more popular than any other BIST modules, we concentrate on techniques based on LFSRs, MISRs, and scan registers. Most BIST techniques involve a fundamental trade-off between testing time and test hardware overhead. In [Argrawal 1993], BIST techniques are classified into two categories: test-per-clock and test-per-scan. Test-per-clock BIST applies a test vector and captures a test response once every clock cycle. Test-per-scan BIST uses scan chains to deliver test vectors and test responses; therefore, a complete test cycle has the same period as a complete scan cycle. In the following subsections, we discuss sequential and combinational BIST techniques in the categories of test-per-clock and test-per-scan.
Combinational Test-Per-Clock BIST
Basic Structure
Figure 7.12 shows the basic structure of test-per-clock BIST. On every test clock, the LFSR generates a test vector and the SA (MISR) compresses a response vector. Such a structure is the most versatile: it can be used for exhaustive, pseudoexhaustive, pseudorandom, and weighted pseudorandom testing. For the last one, the LFSR must be replaced by the hardware structure shown in Section 7.3.5. In this approach, the lengths of the LFSR and MISR must equal the numbers of inputs and outputs of the CUT, so the hardware overhead can be excessive. Techniques that use this basic approach include centralized and separate board-level (CSBL) BIST [Benowitz 1975] and built-in evaluation and self-test (BEST) [Resnick 1983]. The architectures of the two methods are shown in Figures 7.26 and 7.27, respectively. Note that both CSBL BIST and BEST were proposed for combinational as well as sequential circuits.

Figure 7.26 CSBL BIST architecture

Figure 7.27 BEST BIST architecture
CBIST
Concurrent BIST (CBIST), shown in Figure 7.28, is another example of the test-per-clock approach [Saluja 1988]. In concurrent operation, the comparator monitors the normal operation data. When the input data matches the pattern currently in the LFSR, the test clock is ticked: the response is fed to the MISR for compression and the LFSR advances one cycle. If there is no match for a long time, the test clock is ticked once automatically to advance one test cycle; at the same time, the system clock is held for one cycle.

Figure 7.28 CBIST architecture
LFSR+SR
Figure 7.29 shows an architecture which uses an LFSR and a scan register. Every time the LFSR shifts one bit into the scan register, a test pattern is applied and a test response is compressed. With such a structure, the hardware overhead of the test pattern generator is minimized; the response compressor remains the same. Combining the scan register with the LFSR, the patterns generated have the same properties as the LFSR being used. The test strategies that can be deployed with this structure include pseudoexhaustive (see Figure 7.14) and pseudorandom testing. Centralized and embedded BIST (CEBS) is an example of this approach.

Figure 7.29 LFSR+SR structure for test-per-clock approach.
Built-in Logic Block Observation
Built-in logic block observation (BILBO) is a well-known approach for pipelined architectures. The circuit diagram of a BILBO module and an architecture using BILBOs are shown in Figure 7.30. The BILBO has two control signals (B1 and B2) that configure a BILBO block as a shift register, reset, MISR, or parallel load (normal). The BIST architecture using BILBOs is shown at the right of Figure 7.30. For the test of C1, BILBO1 and BILBO2 are configured in MISR mode, with BILBO1 serving as the pattern generator and BILBO2 as the response compressor. Looking at BILBO1, C1, and BILBO2 only, the configuration is the same as the one shown in Figure 7.12. The initial state of the BILBOs can be reset by the (01) command, and the signature in BILBO2 can be shifted out by setting all the BILBOs into shift register mode with the (00) command. With such a BILBO structure, multiple modules can be tested simultaneously through careful scheduling of test resources. [Koenemann 1979]
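The four BILBO configurations can be captured in a small behavioral sketch. The reset (01) and shift (00) codes follow the text; assigning MISR to (10) and parallel load to (11), and the 4-bit feedback mask, are assumptions for illustration.

```python
def bilbo_step(mode, state, data_in, scan_in=0, taps=0b1100):
    """One clock of a behavioral 4-bit BILBO register.
    mode: '00' shift register, '01' reset, '10' MISR, '11' parallel load."""
    if mode == "01":                       # synchronous reset
        return 0
    if mode == "00":                       # serial shift (scan) mode
        return ((state << 1) | scan_in) & 0xF
    if mode == "10":                       # MISR: LFSR feedback XOR inputs
        lsb = state & 1
        nxt = state >> 1
        if lsb:
            nxt ^= taps
        return (nxt ^ data_in) & 0xF
    return data_in & 0xF                   # parallel load / normal capture
```

With data_in held at 0, the MISR mode degenerates into an autonomous LFSR, which is why the same block can serve as pattern generator for one module and response compressor for the next.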

Figure 7.30 BILBO circuit diagram and architecture
Test-Per-Scan BIST
Basic Structure
The test-per-scan approach aims at reducing hardware overhead as much as possible. Instead of using an LFSR/MISR bit for every input/output pin, it combines the LFSR/MISR with shift registers. Figure 7.31 shows the basic circuit structure of test-per-scan BIST. In BIST mode, the LFSR generates test vectors that are shifted to the inputs of the CUT via the scan register; at the same time, the responses are scanned out and compressed by the LFSR. Due to the use of scan chains for delivering test patterns and responses, the test speed is much slower than in the previous approach: the number of clock cycles required per test cycle is the maximum of the scan stages of the input and output scan registers. CEBS, LOCST, and STUMPS also fall into this category; we discuss them in detail below.

Figure 7.31 Basic test-per-scan structure
Centralized and Embedded BIST Architecture with Boundary Scan (CEBS)
Centralized and Embedded BIST with Boundary Scan (CEBS) expands the basic structure in Figure 7.31 to include the internal scan chain in the scan path. The circuit diagram is shown in Figure 7.32. The test procedure is the same as the basic one; however, the test time can be very long due to the inclusion of internal scan chains. Such a design is highly compatible with scan DFT design, and the extra cost beyond the scan DFT itself is minimal. Hence, it is especially useful for circuits with full scan DFT. [Komanytsky 1982]

Figure 7.32 CEBS architecture
Self-Testing Using MISR and Parallel SRSG (STUMPS)
The architecture of self-testing using MISR and parallel SRSG (STUMPS) [Bardell 1987] is shown in Figure 7.33. Instead of using only one scan chain, it uses multiple scan chains to minimize the test time. Since the scan chains may have different lengths, the LFSR runs for N cycles (the length of the longest scan chain) to load all the chains. For such a design, the internal-type LFSR is preferred: if the external type is used, the difference between two LFSR output bits is only a time shift, so the correlation between two scan chains can be very high.
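The correlation problem with the external (Fibonacci) LFSR can be demonstrated in simulation: each stage outputs the previous stage's stream delayed by one clock, so scan chains tapped from adjacent stages would receive nearly identical patterns. The taps and seed below are assumed for illustration.

```python
def fibonacci_lfsr_stage_streams(width=4, taps=(3, 2), seed=0b1001, cycles=20):
    """Simulate a Fibonacci (external-XOR) LFSR and record the bit
    stream observed at each stage over `cycles` clocks."""
    state = seed
    streams = [[] for _ in range(width)]
    for _ in range(cycles):
        for i in range(width):
            streams[i].append((state >> i) & 1)
        fb = 0
        for t in taps:                      # external XOR of tapped stages
            fb ^= (state >> t) & 1
        state = ((state << 1) & (2 ** width - 1)) | fb
    return streams

streams = fibonacci_lfsr_stage_streams()
# streams[i] equals streams[i-1] delayed by one clock: pure time shift.
```

An internal-XOR (Galois) LFSR injects the feedback between stages, breaking this pure-shift relationship, which is why STUMPS prefers it for driving parallel chains.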

Figure 7.33 STUMP Architecture
Sequential BIST
The BIST techniques mentioned above either focus on combinational circuits or use scan chains to transform sequential circuits into combinational ones in test mode; the patterns applied are independent of the test responses. Here, we discuss techniques that preserve the sequential behavior of the circuit in test mode. The test patterns applied are not only a function of the test pattern generator but are also determined by the test responses. Since the responses are circulated back as test patterns, this is also called circular BIST.
Cyclic Analysis Test System (CATS)
Cyclic analysis test system (CATS) is a typical example of circular BIST. The architecture of CATS is shown in Figure 7.34. In test mode, the outputs are fed back to the inputs directly; the responses are used as test vectors without modification. If there are more inputs than outputs, one output may drive multiple inputs. If there are more outputs than inputs, we can use XOR gates for space compression, as in Figure 7.24. The hardware overhead is very low; however, the fault coverage is circuit dependent, and the recycling of test responses may create fault-masking effects. Note that fault masking here is different from the aliasing discussed earlier: here, the faulty and fault-free circuits receive different test patterns. [Burkness 1987]

Figure 7.34 Cyclic analysis test system architecture
Random Test Data (RTD)
Random test data (RTD) transforms the internal flip-flops into a MISR. The circuit structure is shown in Figure 7.35. In normal mode, the MISR operates as ordinary latches; in test mode, it operates as a MISR: the internal responses are compressed into it, and the internal test vectors are generated from it. RTD is able to perform one test per clock cycle. Compared with CATS, the hardware overhead is much higher; however, due to the extensive use of the MISR, the test responses are scrambled before being used as test patterns, so the self-masking probability can be lowered.

Figure 7.35 Random test data architecture
Simultaneous Self Test (SST)
Instead of using MISR for internal memory devices, simultaneous self test (SST) uses a simpler structure. The circuit structure of SST in BIST mode is shown in Figure 7.36. In test mode, the internal latches receive the XOR of the result from the normal feedback path and the contents of the previous latch. As a result, the contents of the latches are scrambled by previous stages. In normal operational mode, the XOR gates are disabled. [DasGupta 1982].

Figure 7.36 Simultaneous self test architecture
BIST for Structured Circuits
Structured design techniques are key to the high integration of VLSI circuits. Structured circuits include read-only memories (ROMs), random access memories (RAMs), programmable logic arrays (PLAs), and many others. In this section, we focus on PLAs because they are tightly coupled with the logic circuits, while memories are usually treated as a separate category. Due to the regularity of their structure and the simplicity of their design, PLAs are commonly used in digital systems; they are efficient and effective for implementing arbitrary logic functions, combinational or sequential. Therefore, in this section, we discuss BIST for PLAs.
A PLA is conceptually a two-level AND-OR realization of Boolean functions. Figure 7.37 shows the general structure of a PLA. A PLA typically consists of four parts: the input decoders, the AND plane, the OR plane, and the output buffers. The input decoders are usually implemented as single-bit decoders which produce the direct and complement forms of the inputs. The AND plane generates all the product terms, and the OR plane sums the required product terms to form the output bits. In physical implementations, the planes are realized as NAND-NAND or NOR-NOR structures.

Figure 7.37 A general structure of a PLA.
As mentioned earlier in the fault model section, PLAs have the following faults: stuck-at faults, bridging faults, and crosspoint faults. Test generation for PLAs is more difficult than for conventional logic because PLAs have more complicated fault models. Further, a typical PLA may have as many as 50 inputs, 67 outputs, and 190 product terms [Liu and Saluja 198xxx]. Functional testing of such PLAs can be a difficult task. PLAs often contain unintentional and unidentifiable redundancy which may cause fault masking. Furthermore, PLAs are often embedded in logic, which complicates test application and response observation. Therefore, BIST has widely been proposed for testing PLAs, and most PLAs in advanced microprocessors now have BIST. Here, we discuss some of these schemes.
Yajima's PLA BIST
Yajima's scheme for PLA BIST is shown in Figure 7.38 [Yajima and Aramaki 1981]. The scheme adds the following extra hardware. (1) A modified Augmented Decoder (AD) which activates one bit line in the AND plane at a time. (2) A Product Term Shift Register (PSR) which shifts a 1 through it to activate one product line at a time to test the OR plane. (3) Four extra product lines in the AND plane for the parity of the AND plane and the control of the test procedure. (4) An AND Parity Circuit which checks the parity of the product terms when one bit line in the AND plane is activated at a time. (5) Two extra lines in the OR plane for the parity and control of the OR plane testing. (6) An OR Parity Circuit which checks the parity of the sum terms when the product terms are activated one at a time by the PSR. (7) A Feedback Value Generator (FVG) which generates the signals needed to control the test procedure; the use of the FVG is based on the concept of autonomous testing.
In Yajima’s approach, the added hardware allows the AD to activate one input bit line at a time when testing the AND plane; the result is verified by the AND Parity Circuit. When testing the OR plane, one product term is activated at a time by the PSR and the results are verified by the OR Parity Circuit. The correct parity is established by the two extra parity lines, one in each plane, and the autonomous control is achieved by the other extra lines. Yajima’s approach detects all single stuck-at faults in the AND/OR planes, the extra lines, the AND/OR parity circuits, the AD, and the PSR. It also detects all crosspoint faults in the AND/OR planes, on both the original and the extra lines. Its limitations are that multiple-fault coverage is not guaranteed and that the XOR trees in the parity circuits limit the testing speed.

Figure 7.38 Yajima's PLA BIST.
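The parity idea behind the scheme can be sketched as follows. This is our own simplified model, not Yajima's exact circuit: an extra product line is personalized so that every bit line in the AND plane carries an odd number of crosspoints; activating one bit line at a time and checking parity then exposes any single crosspoint fault on that line.

```python
# Hedged sketch of the parity principle (names and encoding are ours).
def add_parity_line(and_plane_columns):
    """and_plane_columns[b] = set of product lines with a crosspoint on
    bit line b. Returns the crosspoint set of the extra parity line."""
    parity_line = set()
    for b, points in enumerate(and_plane_columns):
        if len(points) % 2 == 0:     # even count -> add one crosspoint
            parity_line.add(b)
    return parity_line

def parity_ok(column, parity_line, b):
    """Parity observed when bit line b is activated alone."""
    count = len(column) + (1 if b in parity_line else 0)
    return count % 2 == 1            # fault-free parity is odd

cols = [{0, 1}, {0}, {1, 2, 3}]      # crosspoints per bit line
pline = add_parity_line(cols)        # bit line 0 gets the extra point
assert all(parity_ok(cols[b], pline, b) for b in range(3))
```

A single extra or missing crosspoint on the activated line changes the count by one and flips the observed parity, which is how the AND Parity Circuit flags it.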
Daehn’s PLA BIST
Daehn and Mucha proposed a BIST scheme for PLAs based on the use of BILBOs [Daehn and Mucha 1981]. BILBOs are used for test pattern generation and response analysis. Figure 7.39 shows the architecture of Daehn’s approach. BILBOs are inserted at the interfaces between the input decoders, the AND plane, the OR plane, and the output buffers. When testing the AND plane, BILBO1 works as the test pattern generator and BILBO2 as the response analyzer. Instead of functioning as a pseudorandom pattern generator, BILBO1 shifts a 1 along the input bit lines to activate one bit line at a time (similar to Yajima’s AD), while BILBO2 functions as a MISR. The OR-plane testing proceeds in the same way. This is a very simple approach compared with the previous one, and it achieves 100% coverage of single stuck-at faults and crosspoint faults. Its most significant disadvantage is the area overhead of the BILBOs.

Figure 7.39 Daehn’s PLA BIST
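The response-analysis half of the scheme, the MISR, can be illustrated with a small sketch. The register width and feedback taps below are assumptions chosen for the example, not values from Daehn's design: each cycle, the circuit's parallel outputs are XORed into a shifting LFSR state, compressing the whole response stream into one short signature.

```python
# Minimal 4-bit MISR sketch (taps are illustrative, not from the paper).
def misr_step(state, inputs, taps=0b1001, width=4):
    feedback = bin(state & taps).count("1") & 1    # XOR of tapped bits
    state = ((state << 1) | feedback) & ((1 << width) - 1)
    return state ^ inputs                          # fold in parallel outputs

def signature(responses):
    state = 0
    for r in responses:          # one parallel response word per cycle
        state = misr_step(state, r)
    return state

good = signature([0b1010, 0b0110, 0b1111])
bad  = signature([0b1010, 0b0111, 0b1111])   # one flipped output bit
assert good != bad                           # the fault changes the signature
```

A faulty response is missed only if its errors alias to the fault-free signature; for an n-bit MISR the aliasing probability is roughly 2^-n for long response streams.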
Liu’s PLA BIST
Liu et al. proposed a design which requires rearranging the AND/OR planes on the basis of the number of crosspoints on the lines of the PLA [Liu 1987]. Figure 7.40 shows the architecture of Liu’s scheme. Unlike the above methods, only one bit line and one product line are activated during the testing of the AND/OR planes. The extra line Z1, which is connected to all the product lines of the AND plane, is responsible for detecting the crosspoint at the intersection of the bit line (activated by TPG1) and the product line (activated by TPG2). If there is a crosspoint, Z1 produces a one and the crosspoint counter (C1) is incremented by one. At the end of the test, the count in C1 indicates the number of crosspoints in the plane. The same procedure is then applied to the OR plane. This technique detects all stuck-at faults and crosspoint faults.

Figure 7.40 Liu’s PLA BIST.
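The counting procedure can be sketched as follows; this is our own behavioral model of the walk, not Liu's circuit. A single 1 is walked across the bit lines and, for each, across the product lines; C1 accumulates a count every time the two activated lines share a crosspoint, so the final count equals the total number of crosspoints in the plane.

```python
# Illustrative model of the Z1/C1 crosspoint count (encoding is ours).
def count_crosspoints(and_plane):
    """and_plane[p] = set of bit-line indices carrying a crosspoint on
    product line p. Returns the value accumulated in counter C1."""
    c1 = 0
    n_bits = 1 + max((b for pts in and_plane for b in pts), default=0)
    for b in range(n_bits):          # one bit line at a time (TPG1)
        for pts in and_plane:        # one product line at a time (TPG2)
            if b in pts:             # Z1 observes a one
                c1 += 1
    return c1

plane = [{0, 2}, {1}, {0, 1, 3}]
assert count_crosspoints(plane) == 6     # equals the total crosspoints
```

A missing or extra crosspoint changes the final count, which is how comparing C1 against the expected value detects crosspoint faults.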

BIST Applications
Manufacturers are increasingly employing BIST in real products. Here, we offer several examples of such applications to illustrate the use of BIST in the semiconductor, communications, and computer industries.
Exhaustive Test in the Intel 80386 [Gelsinger 1987]
The Intel 80386 has BIST logic for the exhaustive test of three control PLAs and three control ROMs. For the PLAs, the exhaustive patterns are generated by LFSRs embedded in the input registers. For the ROMs, the patterns are generated by the microprogram counter, which is part of the normal logic. The largest PLA has 19 input bits; hence, the test length is 512K clock cycles. The test responses are compressed by MISRs at the outputs. The contents of the MISRs are continuously shifted out to an LFSR, and at the end of testing the contents of the LFSRs are compared.
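The reason an LFSR can supply (near-)exhaustive patterns is that a maximal-length LFSR cycles through all 2^n - 1 nonzero states before repeating; for 19 input bits that is the ~512K cycles quoted above. The sketch below shows the principle on a 4-bit register with the known maximal polynomial x^4 + x^3 + 1 (the width and seed are illustrative, not the 80386's actual configuration).

```python
# Maximal-length Fibonacci LFSR sketch (4-bit, taps for x^4 + x^3 + 1).
def lfsr_states(seed=0b0001, taps=(3, 2), width=4):
    """Enumerate the state sequence until it returns to the seed."""
    state, seen = seed, []
    while True:
        seen.append(state)
        fb = 0
        for t in taps:               # feedback = XOR of tapped bits
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
        if state == seed:
            return seen

states = lfsr_states()
assert len(states) == 2**4 - 1       # all 15 nonzero patterns appear
```

The all-zero state is the one pattern a plain LFSR can never reach, so truly exhaustive testing needs either an extra cycle or a modified feedback to inject it.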
Circular BIST in AT&T ASICs [Stroud 1988]
AT&T has employed a partial sequential approach using circular BIST in seven ASICs. The goal was complete self-test except for the I/O buffers and portions of the multiplexer logic on the inputs. AT&T’s approach uses a module similar to a BILBO. In addition, BIST is provided for the embedded RAMs; four of the ASICs have embedded RAM. The logic overhead is about 20%, the area overhead is 13%, and the average fault coverage is 92%. The large overhead is due to the small size of the chips. AT&T has automated BIST design tools for standard-cell design.
Pseudorandom Test in the IBM RISC/6000 [Ratiu and Bakoglu 1990] [Yen et al. 1995]
The RISC/6000 has extensive BIST structures covering the entire system. In accordance with IBM's tradition, the RISC/6000 has full serial scan; hence, the BIST it uses is pseudorandom testing in the form of STUMPS. For the embedded RAMs, it performs self-test and delay testing. For BIST, each chip has an on-chip processor (COP). The COP contains an LFSR for pattern generation, a MISR for response compression, and a counter for address counting in the RAM BIST. The COP accounts for less than 3% of the chip area.
Instruction Cache BIST in Alpha AXP 21164 [Bhavsar and Edmondson 1994]
The Alpha AXP 21164 is a superscalar implementation of Digital’s Alpha AXP architecture. It has an 8-Kbyte direct-mapped instruction cache array. The cache is organized into several columns of by-1 RAM arrays stacked side by side, each supporting a data channel. Figure 7.41 shows the BIST/BISR structure of the cache. It covers all three RAM arrays associated with the cache, namely, the data, tag, and branch history table arrays. The data paths here contain the Fill Scan Path, the Read Scan Path, the Address Generator, the Background Generator, and the Failing Row CAM. Before packaging, a BIST run is performed first, and the failing rows are stored in the Failing Row CAM.

Figure 7.41 Instruction Cache BIST/BISR of the AXP 21164

If a third row fails, the “unrepairable cache” flag is raised to abort the testing. The next step is the laser repair of the rows recorded in the Failing Row CAM. After repair, BIST runs again to verify the repair.
Embedded Cache Memories BIST of MC68060 [Crouch et al. 1994]
The MC68060 has two test approaches for its embedded memories. First, it has ad hoc direct memory access for manufacturing test, because this was the only memory test approach that met all the design goals. The ad hoc direct memory access uses additional logic to make the address, data-in, data-out, and control lines of each memory accessible through the package pins, and an additional set of control signals selects which memory is activated. The approach makes each memory visible through the chip pins as though it were a stand-alone memory array. For burn-in test, BIST hardware is built around the ad hoc test logic. This two-scheme approach is used because it meets the burn-in requirements with little additional logic.
ALU-Based Programmable MISR of the MC68HC11 [Broseghini and Lenhert 1993]
Broseghini and Lenhert implemented an ALU-based self-test system on an MC68HC11-family microcontroller. A fully programmable pseudorandom pattern generator and MISR are used to reduce the test length and the aliasing probability. They added microcode to configure the ALU as an LFSR or a MISR; the adder is transformed into an LFSR by forcing the carry input to 0. With this feature, the hardware overhead is minimized: it is only 25% of that of an implementation with dedicated hardware.
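Why forcing the carry to 0 works: with every carry-in suppressed, each sum bit of a binary adder reduces to a XOR b, which is exactly the operation an LFSR or MISR needs. The sketch below is our own bit-level illustration of that identity, not the MC68HC11 microcode.

```python
# Sketch: an adder with all carries forced to 0 degenerates to XOR.
def add_carry_forced_zero(a, b, width=8):
    """Ripple-carry addition, but every carry-in is forced to 0."""
    out = 0
    for i in range(width):
        ai = (a >> i) & 1
        bi = (b >> i) & 1
        # full-adder sum bit is ai ^ bi ^ cin; with cin = 0 it is ai ^ bi
        out |= ((ai + bi) & 1) << i
    return out

assert add_carry_forced_zero(0b1100, 0b1010) == 0b1100 ^ 0b1010
```

So the existing accumulator-plus-adder datapath can fold feedback or response bits into the register each cycle, which is why only modest extra hardware is needed.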
 


Hi jorgito,
I just want to know: will BIST in FPGA prototype verification affect the memory access timing and add a large delay to the read/write cycle?
Any suggestions?
Thanks!
 

Re: cam build in self test

Hi jorgito,
your link doesn't work. Would you please upload the file?
Do you have anything about BISR? Any thesis?

Best regards
 

