mapleafrye
Newbie level 1 · Joined May 31, 2009
Hi,
In my recent research work I want to use Design Compiler (DC) to reduce the number of FFs in my design. The basic idea is to compare the primary outputs (POs) of a fault-free sequential design and a copy of that design; everything is identical except that one fault is injected at a specific location, and an "error" signal is asserted if the POs differ.
The problem I have met is that DC's logic synthesis seems to have very little effect on the FF count (almost the same as before synthesis) for the two sequential designs constructed with PO comparison.
To try one extreme case, I even constructed two EXACTLY identical modules: two instantiations driven by the same inputs, with their POs compared (View attachment test.zip). Theoretically all the POs are equal, so "error" should be constant 0.
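For readers without access to test.zip, here is a minimal sketch of the kind of test case described above. All module and signal names are hypothetical and may differ from the actual attachment; the widths are chosen so the two instances together hold 10 FFs, matching the count reported below.

```verilog
// Hypothetical reconstruction of the duplicated-module test case.
// Names and widths are illustrative, not taken from test.zip.
module core (
    input  wire       clk,
    input  wire       rst,
    input  wire [4:0] d,
    output reg  [4:0] q     // 5 FFs per instance, 10 in total
);
    always @(posedge clk) begin
        if (rst) q <= 5'b0;
        else     q <= d;
    end
endmodule

module top (
    input  wire       clk,
    input  wire       rst,
    input  wire [4:0] d,
    output wire       error
);
    wire [4:0] q_ref, q_dup;

    // Two identical instances driven by identical inputs
    core u_ref (.clk(clk), .rst(rst), .d(d), .q(q_ref));
    core u_dup (.clk(clk), .rst(rst), .d(d), .q(q_dup));

    // By construction q_ref == q_dup, so error should be constant 0
    assign error = |(q_ref ^ q_dup);
endmodule
```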
I expected DC to optimise away all the FFs in the two instantiations, reducing the FF count to 0 and leaving just "assign error = 1'b0" in the generated netlist.
However, the synthesized netlist shows that all 10 FFs are still there, even with the DC commands "set_optimize_registers" and "compile_ultra -retime".
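For reference, a sketch of the kind of DC run script being used; the file names, clock period, and constraints are placeholders, not the exact script, but the two commands mentioned above are shown in context:

```tcl
# Hypothetical DC flow illustrating the commands described above.
# Library setup, file names, and constraints are placeholders.
read_verilog top.v
current_design top
link

create_clock -period 10 [get_ports clk]

# Commands tried in the hope of getting the redundant FFs removed
set_optimize_registers true
compile_ultra -retime

report_area   ;# sequential area unchanged: all 10 FFs remain
```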
It seems DC did not detect the equivalence of the two identical modules, and therefore just generated the normal flattened netlist.
For such an obvious case, DC's synthesis result is a little surprising. I am not sure whether this is because I did not use the right DC commands, or whether there are intrinsic limitations to the strength of DC's optimisation.
Your opinion is really appreciated!
Thanks in advance.