Hello KlausST,
***
Hi,
I don't have much experience with this...
But as I understand it, a "clock" is a dedicated net spread over the chip to provide timing that is as uniform as possible.
Thus a "delayed clock" at a particular node is quite unusual.
A delay like that is usually something one needs for standard (non-clock) signals. And this is how I'd try to solve it:
* generate a standard signal from the clock
* feed this signal through gates / delay lines to generate the desired delay
...but it won't work as a usual clock anymore
...depending on your chip, you may be allowed to use a standard signal as the clock input of a DFF
***
We need to know more details about what you want to achieve.
If your data_edge is too close to the clock_edge, so that you can't guarantee setup and hold timing, then the usual way is to use the opposite clock edge.
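To put numbers on the opposite-edge idea, here is an illustrative slack calculation in Python. All the values (period, clock-to-Q, path delay, setup time) are made-up assumptions, not from any real chip; the point is only that capturing on the opposite edge halves the time available to the data path.

```python
# Illustrative setup-slack check (hypothetical numbers, in ns).
# Capturing on the opposite clock edge gives the data path only half
# a period, but it moves the capture edge away from the launch edge
# where the data is still changing.

def setup_slack(available_ns, clk_to_q_ns, path_delay_ns, setup_ns):
    """Slack = time available minus time the data actually needs."""
    return available_ns - (clk_to_q_ns + path_delay_ns + setup_ns)

period = 10.0    # 100 MHz clock (assumed)
clk_to_q = 0.5   # launch flop clock-to-Q (assumed)
path = 3.0       # combinational path delay (assumed)
setup = 0.3      # capture flop setup time (assumed)

# Same-edge capture: a full period is available.
same_edge = setup_slack(period, clk_to_q, path, setup)

# Opposite-edge capture: only half a period is available.
opposite_edge = setup_slack(period / 2, clk_to_q, path, setup)

print(f"same edge slack:     {same_edge:.1f} ns")      # 6.2 ns
print(f"opposite edge slack: {opposite_edge:.1f} ns")  # 1.2 ns
```

With these assumed numbers the opposite-edge path still closes timing, but with far less margin; whether it works in a real design depends entirely on the actual path delays.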
Klaus
***
However, the output of the sequential block needs to be computed in the same clock cycle.
In my case, a control block generates an address at the rising edge of a "clock", and that address is fed to a RAM. The RAM accesses the address and propagates the contents to its output. This exact output needs to be read back by the control block in the same clock cycle.
***
Maybe in some special cases (like feedback regulation loops).
But in most cases "delayed" processing does not reduce throughput. It just increases latency, which in most applications is not a problem.
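The address-in, data-out-next-cycle behavior that makes the same-cycle readback problematic can be sketched with a rough Python model. This is my own illustration, not the poster's design, and it assumes a synchronous RAM whose address is registered at the rising edge:

```python
# Minimal model of a synchronous (registered-address) RAM read port.
# The address presented in cycle N produces data in cycle N+1, so a
# control block cannot consume the result in the same cycle unless it
# waits a cycle, uses an asynchronous RAM, or pipelines the access.

class SyncRam:
    def __init__(self, contents):
        self.mem = list(contents)
        self.addr_reg = 0   # address captured at the previous edge
        self.dout = None    # data visible after the current edge

    def clock(self, addr):
        """One rising edge: output the data for the *previously*
        captured address, then capture the new address."""
        self.dout = self.mem[self.addr_reg]
        self.addr_reg = addr
        return self.dout

ram = SyncRam([10, 20, 30, 40])
outputs = [ram.clock(a) for a in [1, 2, 3, 0]]
# The request for address 1 is made in cycle 0, but its data (20)
# only appears in cycle 1 -- one cycle of read latency.
print(outputs)  # [10, 20, 30, 40]
```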
There are many examples where delayed processing actually increases throughput, as with a pipelined RISC processor.
Pipelining means delayed processing, with the result of increased throughput.
Most real time processing systems use some kind of pipelining.
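The latency-versus-throughput point above can be checked with a toy model. The Python sketch below (my own illustration, not from the thread) pushes items through a 3-stage pipeline: the first result appears only after 3 cycles (the latency), but from then on one result completes every cycle (the throughput).

```python
# Toy 3-stage pipeline: each cycle, every stage hands its item to the
# next stage. Latency = number of stages; steady-state throughput =
# one result per cycle.

def run_pipeline(items, stages=3):
    pipe = [None] * stages
    results = []          # (completion_cycle, item) for each item
    cycle = 0
    pending = list(items)
    while pending or any(s is not None for s in pipe):
        done = pipe[-1]   # item leaving the last stage this cycle
        pipe = [pending.pop(0) if pending else None] + pipe[:-1]
        if done is not None:
            results.append((cycle, done))
        cycle += 1
    return results

res = run_pipeline(["a", "b", "c", "d"])
print(res)  # [(3, 'a'), (4, 'b'), (5, 'c'), (6, 'd')]
```

The first item takes 3 cycles (latency), but each following item completes just one cycle later, which is why pipelining raises throughput even though every individual result is delayed.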
Klaus
***
Maybe I did not explain it well.
***
Hi,
Maybe. I still have my doubts.
If I understand correctly, you say:
Data is written into the RAM, and in the same clock cycle this data has to be read from the RAM. Is this possible?
For single-port RAMs this is not possible.
Even dual-port RAMs (and FIFOs) may have trouble fulfilling this.
If there is no feedback loop, I strongly recommend using pipelining techniques.
As said: in case you need assistance, you should give more details about your application.
Klaus
Now I see. I am talking about a read operation from the RAM.
***
set_clock_latency may help you. Synopsys ICC reads this command to adjust the clock latency during CTS.
***
I tried to use this constraint, but it had no effect (it did not insert any delay units/buffers). Also, I thought that constraint was used to "emulate" different duty cycles.
Thank you for your input.
***
You are confusing a lot of concepts. There is no such thing as "delaying a clock", because all delays are defined with respect to the clock itself. You should not change the reference (t = 0); it doesn't solve anything.
What you have to do is:
- If you have a combinational loop, remove it (I suspect you do).
- Slow down your clock frequency. If the performance is not acceptable, accept that you have to live with the one-cycle delay and pipeline it.
- Let CTS handle the clock automatically. It can advance or delay the clock arrival (useful skew) in specific areas of the circuit.
- Control CTS yourself. Given how confused you are, I won't even tell you the commands to achieve this.