[DFT] Current challenges for Testing?


Hello all,
Can anyone tell me what the current challenges are in DFT (Design for Test)?
What types of issues are companies facing when implementing DFT?

Regards,
Maulin Sheth
 

ok. thanks.
Any other challenges?

Hello rca,
Do companies mostly run gate-level simulation with timing or without timing? Which one is preferable? If we have to choose a single option, which should we choose?

Regards,
Maulin Sheth
 

1. As rca mentioned, pattern size vs. coverage. Pattern count keeps growing with gate count, and to get coverage above 99% the pattern set becomes very large. This increases tester time and eventually the cost of test (a rough relation is sketched after this list).

2. SoC designs contain many IPs such as DDR, USB, SerDes, memories, HDMI, etc. Making an IP fully testable requires a lot of logic around it, which increases area and congestion.

3. The added DFT logic also increases the leakage power of the design.

4. Sometimes we need to insert a MUX/OCC in a critical path to control the clock, which leads to timing violations.
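
To put point 1 in numbers: a common first-order estimate (the symbols here are generic, not taken from any particular tool) is that scan test time scales with both the pattern count and the longest scan chain,

$$T_{\text{test}} \approx \frac{N_{\text{patterns}} \times (L_{\text{chain}} + C_{\text{capture}})}{f_{\text{shift}}}$$

where $N_{\text{patterns}}$ is the pattern count, $L_{\text{chain}}$ the longest chain length, $C_{\text{capture}}$ the few capture cycles per pattern, and $f_{\text{shift}}$ the shift clock frequency. Squeezing out the last fraction of a percent of coverage can easily double $N_{\text{patterns}}$, and tester time roughly doubles with it.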
 

Hello Yadav,
Thank you very much.

OCC means On-Chip Clock controller. It is used to control the clocks during the shift and capture phases of scan test.
 
Can compressed scan reduce the number of test patterns, or is it just intended to increase the number of scan chains with a limited number of pins? Are those the same thing?
 

Scan compression cannot reduce the pattern count, but it reduces the test time.
It is also mainly used when we have a limited number of pins, so you are correct.
 
Scan compression is there to reduce the amount of test data required to reach the same coverage.
Scan compression is generally characterized by the ratio between the pads usable for scan and the number of internal scan chains.
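
As a rough worked example (the numbers are invented purely for illustration): with 8 scan-in/scan-out port pairs feeding a decompressor/compactor that drives 400 internal chains, the compression ratio is

$$\frac{N_{\text{internal chains}}}{N_{\text{scan ports}}} = \frac{400}{8} = 50,$$

so each internal chain is about 50 times shorter than an uncompressed chain built on the same 8 ports. Shift cycles per pattern, and with them test data volume and test time, drop by roughly that factor even if the pattern count stays about the same.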

- - - Updated - - -

It is better to simulate with timing (at least at the best-case corner) to check hold time, because during scan shift the data path between two flops is the shortest.
 
What about LEC (Logic Equivalence Checking) after insertion of the DFT logic?

The original RTL that comes from the designer doesn't contain the DFT implementation. So how do we know that the DFT circuits were not corrupted after P&R?

Should the P&R team insert the DFT circuits while ATPG is generated by the RTL team? What is the flow?

Can the RTL be checked with LEC in functional mode only (by tying the TEST pin to its inactive value)?
 

LEC just checks that the functionality is not affected by the DFT insertion.

The DFT flow looks like:
RTL design -> DFT insertion -> ATPG -> simulation -> physical design -> pattern generation on the P&R netlist -> simulation
So after PnR the DFT engineer needs to generate patterns again, and any problems show up as DFT violations on the PnR netlist.
ATPG is not done by the RTL team; it is done by the DFT engineer, and only after DFT insertion.

Hope it helps
 

1) Should ATPG patterns be generated twice: first on the RTL that includes DFT, and a second time on the synthesized netlist that includes the same DFT circuits?
2) What are the DFT circuits that are included in the RTL? Are they also synthesizable RTL (muxes, etc.)?
3) Why should ATPG patterns be simulated on both the RTL and then on the netlist? How can I know whether they passed or not, since this is not a functional simulation?
4) Does generating ATPG patterns on the netlist take much more time than on the RTL? Why not just take the patterns that were generated for the RTL, apply them to the netlist, and compare the outputs clock by clock?
5) What are the output files from the ATPG generator? Input vectors + expected results?

Regarding "LEC just checks that the functionality is not affected by the DFT insertion": how do I tell the LEC tool that it should ignore the DFT circuits?
 

Answering serially:

1. There is no point in generating patterns for RTL. DFT is done on a synthesized netlist, so patterns need to be generated only on the DFT-inserted netlist. Moreover, all the DFT tools take a netlist as input, not RTL.
2. DFT circuitry means all your flops are converted to scannable flops (a flop with a mux to choose between the functional data input and the scan input). In addition, if you plan to use a pattern compression technique (e.g. EDT), the RTL for that will be included, which contains a decompressor and a compactor. And yes, all of it is synthesizable, so relax.
3. I don't know about RTL, but patterns need to be simulated on the netlist before they are signed off as clean from a DFT perspective. Your simulation writes out a log file that you can check. For example, at the end of the log file look for a message like "No Error between simulated and expected"; for better assurance you can also grep for "Mismatch". If you are doing timing simulation, it is also important to check that the timing was annotated properly: search the log for a "Backannotation Successfully Completed" message, and also grep for "VSIM-SDF" warnings/errors and study them.
4. --------------
5. It generates testbench and vector files, which essentially consist of, as you said, "input vectors + expected results".

For the LEC tool, you can add pin constraints forcing the scan_enable pin to 0 on both the golden and revised netlists in setup mode, i.e.

set system mode setup
add pin constraints 0 scan_en -golden
add pin constraints 0 scan_en -revised
(Please check the exact syntax in your tool's documentation.)
Hope my answers will help you get some clarity.
 
Plzhelp, thank you for your help! ;-)

Is BIST also synthesizable? Is all DFT written in RTL, or are netlists also used?

I still don't understand why I need to simulate ATPG vectors... Why not use STA tools? Is there a reason for functional verification of the vectors?

Should some manually created vectors be added to the automatically generated ATPG patterns?

Thank you!
 

Hi.
Any logic that is going to be part of your chip needs to be synthesizable. Be it the scan logic or your BIST, everything is synthesizable. DFT is not hand-written in RTL; we don't really code for DFT.
We insert the scan circuitry into an existing netlist. For example, we use a tool called DFTAdvisor to insert the scan circuitry into a synthesized netlist. The tool automatically converts the normal flops to scannable flops, and hence the problem is solved (a command sketch follows below).
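
For reference, a typical DFTAdvisor-style session looks roughly like the sketch below. The file, library, clock and port names are invented for illustration, and the command spellings and options are from memory, so check them against the tool reference:

// invoke on the synthesized netlist with an ATPG cell library (names are placeholders)
dftadvisor core_synth.v -verilog -lib atpg_cells.lib
// setup mode: declare clocks and resets so the scannability checks can run
add clocks 0 clk
add clocks 1 rst_n
// switch to dft mode and run the design rule / scannability checks
set system mode dft
run
// convert flops to scan flops, stitch the chains, and write out the scan netlist
insert test logic -scan on
write netlist core_scan.v -verilog
// write the dofile/test procedures that the downstream ATPG tool will use
write atpg setup core_scan
exit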

Now take the example of a 2-input AND gate. To check its complete functionality you would have to apply all 4 combinations of the truth table. But to test it for stuck-at-0 and stuck-at-1 faults you need only 3 patterns (11, 01, 10): 11 detects the output or either input stuck at 0, and 01/10 detect the output or the corresponding input stuck at 1. So even on a tiny gate you save a pattern, and the saving grows very quickly with gate width (see the relation after this paragraph).
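
Generalizing the AND-gate example (this is the standard single stuck-at result, stated here for illustration rather than taken from any tool in this thread): an n-input AND gate needs only n + 1 patterns for full single stuck-at coverage, versus 2^n patterns for exhaustive functional testing,

$$N_{\text{stuck-at}} = n + 1 \qquad \text{vs.} \qquad N_{\text{exhaustive}} = 2^{n}.$$

For n = 2 that is 3 vs. 4 patterns; for n = 8 it is already 9 vs. 256.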

What I really mean to say with the above example is that your ATPG patterns check for manufacturing defects such as stuck-at-0/1 and other faults, but they do not guarantee that a defect-free chip will function correctly. To ensure the chip is functionally correct we do functional verification. You could also argue that you don't need ATPG patterns at all, because running all the functional patterns on the chip would catch the manufacturing defects as well. That is true, but consider the AND gate again: ATPG needs far fewer patterns than exhaustive functional testing, and now scale that reduction up to a large chip. Test equipment is very costly, and its cost is directly proportional to the time spent testing each chip. I hope this discussion makes clear why we need ATPG.

STA tools are for timing. How do you think they would find manufacturing defects?
Nothing is done manually these days. There are proper tools for everything, from scan insertion to pattern generation to simulation.

Regards.
 
Thanks again, but I have heard several times that functional vectors should be added to the generated ATPG patterns for production.
My question is why? My guess is for testing asynchronous logic, which is not part of the scan chains. But probably there are other reasons... Do you know them?

Again, why run functional simulation on the netlist after DFT insertion? Why not just use STA tools, which also check timing? Why is running these simulations so important?
 

Why should ATPG patterns be simulated on the netlist? For what purpose? I heard it is required to test the scan chain connections, but I don't understand why gate-level simulation is used for that rather than STA.
 

Hello ivlsi,

In my experience, we need to check the design with timing, so we run timing simulation on the netlist.
This also confirms whether the timing closure is really clean.
Sometimes we find hold violations in the gate-level timing simulation as well; feeding these back helps improve the STA flow so that such hold violations get fixed.

We check the scan chain connections first because we need to confirm that the scan chain integrity is fine; if patterns fail later, the chain integrity test is useful for diagnosing the failure.
We take the SDF from the STA engineer and run gate-level simulation on the post-layout netlist with that SDF.

Hope it helps.

- - - Updated - - -

Functional vectors are added to the ATPG vectors only to increase the test coverage, because some types of physical defects can also be covered by functional vectors.

- - - Updated - - -

Functional vectors should be run on the DFT-inserted netlist to check that the functionality still works correctly after DFT insertion, because DFT adds controllability and observability logic to increase testability.
That is why we simulate the functional vectors after DFT insertion.

Hope it helps.
 

STA does not check functionality, so the scan chain connections are covered neither by STA nor by LEC, because there is no reference to use in the comparison.
If ATPG is able to generate vectors, that means the scan chain connections are correct.

- - - Updated - - -

Using functional patterns to increase coverage is a sensitive topic for me.
I mean, how do you guarantee that all flop values are checked by your functional tests?
What I prefer is to indicate which pins of a macro block will have their stuck-at-high/low faults covered by a functional test, so that ATPG can mark them as covered (like all the pins of memories, which will be properly covered by the BIST, and the same for analog pins, where the test engineer will stimulate different values...).
 

Hello rca,
Functional vectors can be useful to improve coverage. You may know about fault grading, in which we fault-simulate the functional vectors in the ATPG tool so that it knows which faults they already cover; those faults are then marked as detected, and ATPG targets only the remaining faults. The coverage from functional vectors is mostly around 10%, but it is very design-dependent.
In some areas of the design we cannot control a node through scan but we can control it in functional mode, so in such situations functional vectors can be used to improve coverage.
Another point: functional patterns are applied to the chip anyway, so knowing which faults they already cover is also useful for reducing the pattern count.
 
