
LBIST low test coverage


vijay82

Why is LBIST test coverage low compared to deterministic pattern-based ATPG, despite the fact that the most commonly used PRPGs are maximal-length LFSRs and thus generate all 2^n possible input combinations (bar one) for an n-input combinational circuit?

Ultimately, ATPG patterns, though limited in number, are still part of the universe of 2^n total possible patterns generated by a PRPG.

(Yes, I know about random-pattern-resistant faults, but that is beside the point, since it doesn't explain where the first-level reasoning above breaks down.)
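
As a sanity check on that premise, here is a minimal sketch (a hypothetical 4-bit Fibonacci LFSR with taps for the primitive polynomial x^4 + x^3 + 1; all names and parameters are illustrative, not from any DFT tool) showing that a maximal-length LFSR really does visit every nonzero n-bit state:

```python
# Hypothetical 4-bit Fibonacci LFSR, taps chosen for x^4 + x^3 + 1 (primitive).
def lfsr_states(n=4, taps=(3, 2), seed=0b0001):
    """Yield every state of an n-bit Fibonacci LFSR until the cycle closes."""
    state = seed
    while True:
        yield state
        fb = 0
        for t in taps:                       # feedback = XOR of the tapped bits
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << n) - 1)
        if state == seed:
            return

states = list(lfsr_states())
print(len(states), len(set(states)))         # 15 15 -> all 2^4 - 1 nonzero states
```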
 


Maybe you have a very specific test case in mind? I don't agree with the assertion that LBIST coverage is always low. Or always high, for what it's worth. LBIST fits some circuits and doesn't fit others. Good luck generating all 2^n inputs for a circuit with many inputs.
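
To put rough numbers on that last point (assumed figures: a 64-input circuit and one pattern applied per nanosecond):

```python
# Back-of-the-envelope: time to exhaust 2^64 patterns at 1 pattern/ns.
patterns = 2 ** 64
seconds = patterns * 1e-9
print(f"{seconds / 3.154e7:.0f} years")      # ~585 years
```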
 

Low LBIST coverage is a well-documented and researched fact, not a subjective opinion based on random observations from a corporate project or two. Refer to the countless LBIST-focused published papers and standard DFT texts (Abramovici, Wang, etc.) for more on that, plus how large n-input circuits are broken down into smaller sub-circuits by making the entire design scannable and scanning the PRPG output in serially through one of its stages (a.k.a. the STUMPS architecture).

And therein lies the solution to this question: PRPG outputs applied in parallel to an n-input CUT will give the same coverage as deterministic patterns. But when the STUMPS architecture (the most widely used one) is employed, with PRPG length p and a different scan length k, not all of the 2^k expected patterns for the k-input CUT may be generated at all. The missed patterns make it especially difficult to target random-pattern-resistant faults, which is why deterministic "top-off" patterns (hybrid BIST) are used or test points are inserted (adding to the area overhead) to fill the gap.
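
A minimal single-chain sketch of this effect, under assumed sizes p = 4 and k = 6 (a real STUMPS setup drives many chains through a phase shifter; everything here is illustrative):

```python
# Hypothetical 4-bit maximal-length PRPG serially loading a 6-bit scan chain.
def prpg_stream(n=4, taps=(3, 2), seed=0b0001):
    """Serial output bits of a maximal-length n-bit Fibonacci LFSR."""
    state = seed
    while True:
        yield state & 1                      # bit shifted into the scan chain
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << n) - 1)

k = 6                                        # scan chain feeding a k-input CUT
bits = prpg_stream()
chain, seen = 0, set()
for cycle in range(10_000):                  # far beyond the PRPG's period
    chain = ((chain << 1) | next(bits)) & ((1 << k) - 1)
    if cycle % k == k - 1:                   # capture after every k shifts
        seen.add(chain)

print(f"{len(seen)} of {2 ** k} possible patterns applied")  # 5 of 64
```

The serial stream has period 2^p - 1 = 15, so at most 15 distinct k-bit windows can ever appear in the chain, and capturing at a stride of k reaches only 5 of them; running more cycles doesn't help.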

BTW, LBIST usage is driven by the application (automotive, military, and the like), where long-term reliability through in-field testing is paramount and/or the high pattern storage and long test times of an ATE are undesirable, not by the specifics of the circuit itself.

For those interested, the book Digital Systems Testing and Testable Design describes the p-vs-k conundrum in fascinating detail in its BIST chapter, with specific examples. Most places explaining LBIST's low-coverage disadvantage unfortunately assume the reader already knows about serial-input architectures, so unless someone can convince me that random patterns applied in parallel would have different coverage than deterministic ATPG, I'll take this to be the final answer.
 


Was there a question in this thread at all?
 

Yes, and it was not rhetorical.
 

LBIST coverage is a trade-off: if you want more coverage, you need more logic, which means more area, power, test time, etc.

Tiep Ngo
 
