Well, what I did was select appropriate buffers for those nets, i.e. clock-tree buffers for clocks and reset buffers for resets (this depends on the ASIC vendor), then set_dont_touch on those nets during synthesis, and proceed as normal with the post-synthesis flow.
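A minimal dc_shell sketch of the flow described above. The net and instance names (clk, rst_n, u_clk_buf, u_rst_buf) and the buffer cell names are hypothetical; substitute the clock/reset buffer cells from your own vendor library.

```tcl
# Keep synthesis from restructuring or remapping the clock and reset nets
set_dont_touch [get_nets clk]
set_dont_touch [get_nets rst_n]

# If the buffers were hand-instantiated in RTL, preserve those cells as well
set_dont_touch [get_cells u_clk_buf]   ;# instance of a vendor clock-tree buffer
set_dont_touch [get_cells u_rst_buf]   ;# instance of a vendor reset buffer
```

With the dont_touch attributes in place, the nets survive compile untouched and can be handed off to the back-end flow (CTS, reset-tree buffering) as intended.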
I don't think they make any difference from the synthesis point of view, but I am not sure about the post-synthesis flow; perhaps it is just a technique to differentiate between the clock and reset signals.
The issue here is NOT the mechanics or semantics of Synopsys.
There is a major difference between a Clock Tree and a Reset Tree with regard to correct design practice.
1. A Clock Tree must always be 'Skew Balanced' to avoid races and timing violations between synchronous elements.
2. Reset Trees - especially in cases where the reset is asynchronous - usually need NOT be 'Skew Balanced'.
3. A Reset Tree can be MORE heavily loaded than a Clock Tree - e.g. a relaxed DRC rule can be set for the Reset Tree, since flip-flop behavior is far less sensitive to slow slew rates on the reset input than on the clock input.
4. Some people synchronize the reset input signal to the main system clock. While this is a correct practice to avoid metastability at the trailing (de-asserting) edge of reset, some skew problems may still arise. For those cases, careful STA must be run to alert the designer.
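Points 3 and 4 above can be sketched as constraints (SDC / dc_shell style). The values and port names here are illustrative assumptions, not recommendations; recovery/removal checks on the async reset pins are what STA uses to catch the de-assertion skew problem mentioned in point 4, so they must not be blanket-disabled.

```tcl
# Point 3: relax the max-transition DRC on the reset port only,
# while the clock keeps its tight slew target (values are examples)
set_max_transition 0.50 [get_ports rst_n]
set_max_transition 0.15 [get_ports clk]

# Point 4: constrain the reset path so STA can check recovery/removal
# at reset de-assertion; avoid hiding it with a blanket false path like
#     set_false_path -from [get_ports rst_n]
set_input_delay 1.0 -clock sys_clk [get_ports rst_n]
```

If the library cells carry recovery/removal arcs, the timing reports on the reset de-assertion paths will then flag any flop that could come out of reset in different cycles due to reset-tree skew.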
Thanks for your info, I learned a great deal from that. One question: what is the point of using a reset tree if it is not "Skew Balanced"? Perhaps if your design will always go to a known state after an asynchronous reset, then you avoid the risk of getting into metastability. State-machine design is a good example.