Welcome to EDAboard.com

Welcome to our site! EDAboard.com is an international Electronics Discussion Forum focused on EDA software, circuits, schematics, books, theory, papers, ASIC, PLD, 8051, DSP, Network, RF, Analog Design, PCB, Service Manuals... and a whole lot more!

How to write timing constraints

Status
Not open for further replies.

rk_learn
Newbie level 2 · Joined Aug 7, 2011
Hi,

I would like to understand the procedure involved in writing timing constraints for a full chip, and how to convert the full-chip constraints to block-level constraints.

Currently I am following this procedure:

1) Read the netlist.
2) Run check_timing and find all the flops which don't have a clock defined.
3) Trace the input ports that are driving these flops and define clocks on them.
4) Continue using check_timing's feedback and add the constraints in an iterative manner.
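The procedure above, sketched in Tcl/SDC (the port and clock names, and the 100 MHz period, are made up for illustration):

```tcl
# Step 2: report flops whose clock pins are not reached by any defined clock
check_timing

# Step 3: define a clock on the port found to drive the unclocked flops
create_clock -name core_clk -period 10.0 [get_ports clk_in]

# Step 4: re-run and iterate until check_timing comes back clean
check_timing
```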

Please let me know if there are other methods.

Thanks
KR
 

kbulusu
Full Member level 2 · Joined Apr 23, 2003
What you have described works well if you don't have access to the design spec.
If you know all the primary clocks, define those first; this should take care of all the flip-flops.
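Defining the primary clocks could look like this (port names and periods are just examples):

```tcl
# Primary clocks on top-level ports
create_clock -name sys_clk  -period 10.0 [get_ports sys_clk]
create_clock -name test_clk -period 40.0 [get_ports test_clk]
```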
If there are clock dividers etc., you will need generated-clock constraints on them.
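For example, a divide-by-2 clock produced at a divider flop's output could be constrained like this (the instance and pin names are hypothetical):

```tcl
# Divide-by-2 clock generated at the divider flop's Q pin
create_generated_clock -name div2_clk \
    -source [get_ports sys_clk] -divide_by 2 \
    [get_pins u_div/clk_div_reg/Q]
```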
Next, if there are any asynchronous clocks, you need either set_clock_groups -asynchronous or set_false_path between all paths in the respective clock domains. If you have a lot of domains, set_clock_groups makes more sense.
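Both styles, assuming two clocks named sys_clk and test_clk:

```tcl
# Declare the two domains asynchronous to each other
set_clock_groups -asynchronous \
    -group {sys_clk} -group {test_clk}

# Equivalent intent with explicit false paths (note: both directions needed)
# set_false_path -from [get_clocks sys_clk]  -to [get_clocks test_clk]
# set_false_path -from [get_clocks test_clk] -to [get_clocks sys_clk]
```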
For I/O constraints, define virtual clocks with the same period as the real clocks. Then, once you know which I/Os need to be timed with respect to which clock, you can define the input/output delays.
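A minimal sketch, with made-up port names and delay values:

```tcl
# Virtual clock: same period as sys_clk, but attached to no port
create_clock -name vclk_sys -period 10.0

# Time the data ports against the virtual clock
set_input_delay  -clock vclk_sys 3.0 [get_ports data_in]
set_output_delay -clock vclk_sys 4.0 [get_ports data_out]
```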
Next, if there are muxes and, per the functional spec, only one clock should be propagated (say you have a system clock and a test clock), then you need to constrain the select pin of the muxes.
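One common way to do this is a case analysis on the mux select (the instance name is hypothetical):

```tcl
# Force the clock mux to select the functional clock for this analysis
set_case_analysis 0 [get_pins u_clkmux/sel]
```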
Find out if there are any components in the design which require more than one cycle, like slower memories, and apply multicycle paths to them.
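For instance, giving paths into a slow memory two cycles for setup while keeping the hold check on the original edge (the pin pattern is illustrative):

```tcl
# Two cycles for setup into the slow memory's data pins
set_multicycle_path 2 -setup -to [get_pins u_mem/*/D]
# Move the hold check back to the launch edge
set_multicycle_path 1 -hold  -to [get_pins u_mem/*/D]
```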
Next, if you have implemented clock gating for power reasons, add latency constraints for it.
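One way this is often expressed pre-CTS, with placeholder values:

```tcl
# Pre-CTS estimate of insertion delay through the clock tree and gating cells
set_clock_latency 1.2 [get_clocks sys_clk]

# Optionally add explicit setup/hold margins at the gating cells
set_clock_gating_check -setup 0.2 -hold 0.1
```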
Check if there are any half-cycle paths and understand whether they make sense; otherwise it's a constraints issue.
Define clock uncertainty/jitter margins.
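For example (the margin values are placeholders):

```tcl
# Margin for jitter/skew on the system clock
set_clock_uncertainty -setup 0.3 [get_clocks sys_clk]
set_clock_uncertainty -hold  0.1 [get_clocks sys_clk]
```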
If there is an IO-comb-IO path (input to output through pure combinational logic), you can constrain it with max/min delay combinations.
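A sketch with hypothetical port names:

```tcl
# Bound a purely combinational in-to-out path directly
set_max_delay 5.0 -from [get_ports comb_in] -to [get_ports comb_out]
set_min_delay 1.0 -from [get_ports comb_in] -to [get_ports comb_out]
```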

Optionally, define transition/slew thresholds etc.
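For example, design-wide limits (the numbers are arbitrary):

```tcl
# Cap transition time and load across the whole design
set_max_transition  0.5 [current_design]
set_max_capacitance 0.2 [current_design]
```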



I hope this helps...


BTW, converting full-chip constraints to block level is not straightforward or easy. Much of it depends on what your top-level clock architecture looks like. You need to budget in such a way that each block gets enough time, and the I/O budgeting also has to be correct; if your design is complex enough, it's a nightmare. Try using a budgeting tool like Hydra / First Encounter / tools from ATopTech. These tools read your top-level SDC, do partitioning/shaping/top-level CTS, figure out what the actual delays are, and then generate a budget for each block. You can take this and refine it in subsequent runs. Once you have closed the block, you can pull the timing constraints back to the top and integrate them.

good luck...
 

qual_ti
Member level 1 · Joined Jun 21, 2011
Very good info!
 

ee1
Full Member level 2 · Joined May 31, 2011
Great info!
Can you please elaborate some more on these issues?

"For I/O constraints, define virtual clocks with the same period as the real clocks ... you can define the input/output delays"
How does the virtual clock help here?

"if you have implemented clock gating for power reasons, add latency constraints for it"
Doesn't the tool calculate this latency?

"Check if there are any half-cycle paths and understand whether they make sense"
How can I check for half-cycle paths?

"If there is an IO-comb-IO path, you can constrain it with max/min delay combinations"
Shouldn't set_input_delay/set_output_delay do this?

Thanks!
 

pavanks
Full Member level 2 · Joined Jan 19, 2009
(quoting ee1)
"How does the virtual clock help here?"
"Doesn't the tool calculate this latency?"
"How can I check for half-cycle paths?"
"Shouldn't set_input_delay/set_output_delay do this?"

For your last question:

You cannot define input and output delays for a purely combinational path, since there is no clock to reference them against. For combinational in-to-out paths you just need max and min path delays, analogous to bounding the combinational logic between your flops.

For the third question:

You can spot half-cycle paths by looking at the timing reports: such a path launches on one clock edge and captures on the opposite edge, so it only gets half the period, and a failing one shows slack degraded by about half your period.

For the second question:

The tool calculates the latency after the CTS stage; before that, you need to supply estimated values yourself.

For the first question:

You don't have a real clock defined at your output port, but you do know the period of the clock it is timed against, so you can model it using a virtual clock.

Hope it helped.
 