By SCK-MOSI and MISO you mean the FPGA pins and by syncFF you mean the first DFF in the chain?
The SPI interface as described is not a synchronous interface, so don't even think of it that way. Just consider it to be asynchronous and treat the signals that come from one FPGA and go to the other as asynchronous (i.e. you run each one through 2 FFs before looking at the output of the second FF).
I would put simple set_max_delay constraints on SCK/MOSI to their sync FFs in FPGA #2, and on MISO to its sync FF in FPGA #1, respectively. If you ensure the first sync FF is in the I/O, then you won't even need the set_max_delay (except to verify the tools didn't inadvertently put them in fabric FFs).
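As a minimal sketch, the constraints could look like the following; the port and register names (spi_sck, *sck_sync_ff1*, etc.) are assumed placeholders, not taken from the actual design:

```tcl
# FPGA #2: constrain the asynchronous SPI inputs into their first sync FFs.
# Port and register names are hypothetical placeholders.
set_max_delay -from [get_ports spi_sck]  -to [get_registers *sck_sync_ff1*]  2.000
set_max_delay -from [get_ports spi_mosi] -to [get_registers *mosi_sync_ff1*] 2.000

# FPGA #1: same idea for the returning MISO.
set_max_delay -from [get_ports spi_miso] -to [get_registers *miso_sync_ff1*] 2.000
```

If the first sync FF sits in the I/O cell, these mostly act as a check that the tools didn't move it into the fabric.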
Sampling MISO on the falling edge of SCK may not be viable in this design. The synchronizer may cut into the turn-around time significantly. It looks like MISO could be transitioning at/after the falling edge of SCK, under certain assumptions on output registers and round-trip delays.
Round trip delay is a valid point. It's also not always the case that a sufficiently fast oversampling clock is available. Alternatively, you may want to run the shift register from the SPI clock ("SCLK") and implement domain-crossing logic for the parallel data.
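If the parallel data is transferred with a proper handshake (or is quasi-static when read), the SCLK-to-system-clock crossing itself can be cut from timing analysis. A hedged SDC sketch, with assumed clock names:

```tcl
# Assumed clock names: spi_sclk (SPI clock domain, shift register) and
# clk_240m (240 MHz system domain). Only safe if the crossing is
# protected by a synchronizer/handshake in the RTL.
set_false_path -from [get_clocks spi_sclk] -to [get_clocks clk_240m]
```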
O.K., we can do that, despite reasonability considerations. But I think the "given" design isn't completely specified yet. It's clear that you have an SCLK synchronizer into the FPGA #2 240 MHz domain. But how is MISO being sent? Is it set on the synchronized rising edge? That means MISO most likely arrives at FPGA #1 after the falling SCLK edge as seen by FPGA #1, and MISO will be further delayed by the FPGA #1 synchronizer. You'll end up sampling a valid synchronized MISO, e.g., at the next falling SCLK edge. But please treat the design as a given fact - a learning exercise.
Is the set_max_delay constraint always defined with reference to the system clock? Or can the reference point be any other signal? Constraints can be used to limit the delay skew and push the I/O delay towards its lower bound.

Hello Kevin, I'm pretty sure that any SDC delays are always relative to the clock that actually generates the output signal. If you specify a delay from some other signal that is not the generating clock, the constraint gets flagged as an ignored constraint.
I see no reference to a clock here... are you saying that it would be ignored?

# Apply a 2ns max delay for an input port (TSU)
set_max_delay -from [get_ports in[*]] -to [get_registers *] 2.000

What you asked in #11 was "Or can the reference point be any other signal?", and my answer is 'No': the reference point must be a signal that actually generates the output signal.
Again, I'm aware of the potential problems - and surely it's easier to avoid problems than to try to deal with the ill effects.
This might be an issue too. If the SCK doesn't connect to an IO designed for clocking resources, there may be a potentially large delay within the clock network.
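If SCK really is used as a clock, it should at least be declared to the tools so that the clock network delay shows up in the analysis. A sketch with assumed names and an assumed 20 MHz SCK frequency:

```tcl
# Declare the external SPI clock on its input port; the period is an
# assumed example value (50 ns = 20 MHz). The tools will then include
# the clock network insertion delay in the SCLK-domain analysis.
create_clock -name spi_sclk -period 50.000 [get_ports spi_sck]
```

Ideally SCK enters on a clock-capable pin; otherwise the routing delay mentioned above still applies, but it at least becomes visible in the timing reports.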