
Propagation delay in FPGA


p11

As we know, "after" and "wait" are not synthesizable, so if I want a delayed version of a signal using an even number of NOT gates, do I need some kind of buffer between the NOT gates, or will simply
Code:
delayed <= not (not (not (not (signall))));

work? To increase the delay I can increase the number of NOT gates. I just need a 1 ns to 2 ns delay.
 

To get an actual logic cell delay, you need synthesis attributes that tell the FPGA compiler to keep the redundant logic (it is otherwise very effective at removing it), or you need to instantiate low-level primitives. The methods differ slightly between synthesis tools. Needless to say, the generated delay isn't very accurate; it's affected by process, voltage and temperature variations. Expect a worst-case 1:2 variation range.
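As an illustration of the attribute approach, here is a minimal VHDL sketch using the Xilinx "keep" attribute (other tools use equivalents such as syn_keep or DONT_TOUCH); the generic length and signal names are just for illustration:
Code:
library ieee;
use ieee.std_logic_1164.all;

entity delay_chain is
  generic (N : natural := 4);          -- number of inverters; keep it even
  port (sig_in  : in  std_logic;
        sig_out : out std_logic);
end entity;

architecture rtl of delay_chain is
  signal chain : std_logic_vector(0 to N);
  -- "keep" tells Xilinx synthesis not to optimize the inverter chain away
  attribute keep : string;
  attribute keep of chain : signal is "true";
begin
  chain(0) <= sig_in;
  g : for i in 1 to N generate
    chain(i) <= not chain(i - 1);
  end generate;
  sig_out <= chain(N);
end architecture;

Even with the attribute, the resulting delay still depends on placement and routing, so measure or constrain it if it matters.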
 

Not sure what you want to do; this sounds like something other than plain propagation delay.
Xilinx FPGAs provide adjustable IDELAY and ODELAY primitives in the device's I/O blocks. They are most often used to make small adjustments to I/O timing, but with some imagination you can find other creative uses. They are a bank of delay taps that you can adjust on each I/O, e.g. for dynamic phase alignment in high-speed source-synchronous interfaces. These delay elements need a fixed 200 MHz reference clock. For details, refer to the user guide and the libraries guide for your specific FPGA family.
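For reference, a minimal sketch of a 7-series IDELAYE2 in fixed-tap mode, together with the IDELAYCTRL block it requires; the tap count, names and clock are illustrative, so check the libraries guide for your family:
Code:
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity fixed_delay is
  port (clk_200mhz : in  std_logic;   -- reference clock for IDELAYCTRL
        rst        : in  std_logic;
        pad_in     : in  std_logic;   -- signal from the input buffer
        delayed    : out std_logic);
end entity;

architecture rtl of fixed_delay is
  signal idelay_rdy : std_logic;
begin
  -- IDELAYCTRL calibrates the tap delays against the reference clock
  idelayctrl_i : IDELAYCTRL
    port map (REFCLK => clk_200mhz, RST => rst, RDY => idelay_rdy);

  -- IDELAYE2 in FIXED mode: one tap is nominally about 78 ps with a
  -- 200 MHz REFCLK, so 16 taps gives roughly 1.25 ns (value illustrative)
  idelay_i : IDELAYE2
    generic map (
      IDELAY_TYPE      => "FIXED",
      DELAY_SRC        => "IDATAIN",
      IDELAY_VALUE     => 16,
      REFCLK_FREQUENCY => 200.0,
      SIGNAL_PATTERN   => "DATA")
    port map (
      IDATAIN     => pad_in,          -- signal to be delayed
      DATAOUT     => delayed,         -- delayed output
      DATAIN      => '0',
      C           => '0',
      CE          => '0',
      INC         => '0',
      LD          => '0',
      LDPIPEEN    => '0',
      REGRST      => '0',
      CINVCTRL    => '0',
      CNTVALUEIN  => (others => '0'),
      CNTVALUEOUT => open);
end architecture;

Because the taps are calibrated against the reference clock, this delay is far more stable over process, voltage and temperature than a LUT chain, and 16 taps lands near the 1-2 ns the OP asked for.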
 

Back in the days of Xilinx 3000/4000-series parts, I asked an FAE about the best-case performance of the devices and was told that the factory felt a good rule of thumb would be something like 10% of the worst-case delay in the fastest speed grade of the part. That basically says the delay can vary far more than the 1:2 that FvM suggested. Then again, this was back in the 1.2 µm days, and perhaps delay has a tighter tolerance between the slowest and fastest process today; I haven't really had a reason to examine this.

The only reason I had to do this back then was a poorly done design that I was tasked to fix, one that made extensive use of clock gating logic (and I really mean clock gating logic) in an FPGA :shock:. I was told to fix it, not redesign it, based on the now-laid-off engineer's assurances to upper management that the design was almost working. Yeah, right: temperature variations could make it work or fail, and different boards would behave differently depending on the device lot code. This was all caused by timing race conditions between the clock gating signal and the clock. Adding delays via LUTs was the only way to fix many of the timing issues, hence the need to know the minimum timing.

I really, really hope this isn't what you are dealing with or have designed. If so, just redesign the circuit; otherwise it's probably just a latent failure waiting to reappear.
 

The original post is asking about gate propagation delays. A basic point is that FPGAs don't have "gates" as building blocks, but rather logic elements in a certain block structure. Besides the delay of the logic element LUTs, there are considerable routing delays through the programmable interconnect. Taken together, this makes it difficult to generate predictable logic delays.
 

The original post is asking about gate propagation delays.
Well, it wouldn't be the first time p11 has posted in the wrong section. Perhaps this was an ASIC implementation question; in that case you can probably get away with it, or tell the tools to insert the buffers, but I wouldn't know for sure, as I've been out of the ASIC side for too many years.
 

As we know, "after" and "wait" are not synthesizable, so if I want a delayed version of a signal using an even number of NOT gates, do I need some kind of buffer between the NOT gates, or will simply
Code:
delayed <= not (not (not (not (signall))));

work? To increase the delay I can increase the number of NOT gates. I just need a 1 ns to 2 ns delay.

If you have to do this in your design, then there is a major problem with your design.
 
OK, so if I use it :-( then the delay may vary depending on temperature, voltage or even the device... right. But some delay will still occur, no? So what's the solution, sir? If I want a fixed amount of delay on any board and under any conditions, how do I achieve it?
What's the method of using IDELAY, ODELAY?
 

OK, so if I use it :-( then the delay may vary depending on temperature, voltage or even the device... right. But some delay will still occur, no? So what's the solution, sir? If I want a fixed amount of delay on any board and under any conditions, how do I achieve it?
What's the method of using IDELAY, ODELAY?

Unless you can clearly explain why you need a delay, I'd say your premise is flawed and you don't need it. Some I/O connection to circuitry in a different IC might be the exception, but it seems you are trying to build delayed logic. That is neither practical nor useful.
 

What's the method of using IDELAY, ODELAY?

Make a bidirectional pin (with no external connection on the PCB) and run the signal you want to delay through the ODELAY and through the bidirectional I/O, permanently enabled as an output. The output will drive the pin, and it will also drive the input buffer of the bidirectional I/O cell. You can add the IDELAY if you want even more delay taps available.

Then you need to add temperature compensation (and perhaps voltage compensation) for the IDELAY tap control, adjusting the tap positions (i.e. the delay) based on the temperature/voltage readouts of the XADC. (The delay variations are mostly due to the I/O input and output buffers, which vary a lot.)

The real question is why you think you need to do this?

If it were me, I would probably sacrifice a PLL/MMCM output and drive out a clock with the correct phase offset to create the fixed delay I want, then transfer the signal to be delayed on the opposite edge (or whatever edge makes sense) to meet timing. You waste a clock and a clock network this way, but you don't have to do all the compensation work, because the delay is based on a tapped phase of a much higher-frequency VCO, which is much more stable and far less complicated. But once again, why do you need this? I think you are trying to fix a problem with a solution you've become fixated on, and you aren't taking a step back to look at it from a different perspective (a seemingly common problem with today's engineers).
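To illustrate the idea, here is a minimal sketch, assuming clk_shifted comes from a PLL/MMCM output configured with the desired phase offset (e.g. the CLKOUT1_PHASE generic of an MMCME2_BASE); the entity and signal names are illustrative:
Code:
library ieee;
use ieee.std_logic_1164.all;

entity phase_delay is
  port (clk_shifted : in  std_logic;   -- PLL/MMCM output with a fixed phase offset
        sig_in      : in  std_logic;   -- signal synchronous to the main clock
        sig_out     : out std_logic);  -- delayed by the programmed phase offset
end entity;

architecture rtl of phase_delay is
begin
  -- Re-registering on the phase-shifted clock delays the signal by the
  -- phase offset; the PLL holds that offset stable over voltage/temperature.
  process (clk_shifted)
  begin
    if rising_edge(clk_shifted) then
      sig_out <= sig_in;
    end if;
  end process;
end architecture;

The crossing from the main clock into clk_shifted still has to meet setup/hold, but since both clocks come from the same PLL the tools can analyze it statically.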
 

OK, so if I use it :-( then the delay may vary depending on temperature, voltage or even the device... right. But some delay will still occur, no? So what's the solution, sir? If I want a fixed amount of delay on any board and under any conditions, how do I achieve it?
What's the method of using IDELAY, ODELAY?

You need a counter and a clock enable.
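If the required delay can be a multiple of the clock period, a sketch along those lines might look like this (names and the cycle count are illustrative):
Code:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counted_delay is
  generic (N : natural := 10);     -- delay in clock cycles (10 x 10 ns = 100 ns at 100 MHz)
  port (clk     : in  std_logic;
        trigger : in  std_logic;   -- pulse to be delayed
        delayed : out std_logic);  -- pulse appears N cycles later
end entity;

architecture rtl of counted_delay is
  signal count   : unsigned(15 downto 0) := (others => '0');
  signal running : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      delayed <= '0';
      if trigger = '1' then        -- start counting on the input pulse
        running <= '1';
        count   <= (others => '0');
      elsif running = '1' then
        if count = N - 1 then      -- enable fires after N cycles
          delayed <= '1';
          running <= '0';
        else
          count <= count + 1;
        end if;
      end if;
    end if;
  end process;
end architecture;

This gives a delay that is exact in clock cycles and completely immune to process, voltage and temperature, which is why it's the preferred method whenever the resolution of one clock period is acceptable.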
 

You need a counter and a clock enable.

I suspect they are looking for a solution involving tens to hundreds of picoseconds, which means they are probably doing something wrong in their design in the first place.
 

Dear OP,

First and foremost, you have not explained what you want to do with the "delayed version of a signal".

What's the method of using IDELAY, ODELAY?
If you are clear on what you want to do AND have understood what others have posted, then the straightforward answer to this question is in #3.
Just do a Google search for "IDELAY, ODELAY + Xilinx" and read the Xilinx documentation.

If you really want to know how delays work on FPGAs, study the IDELAY and ODELAY instantiated inside the management module operating at 200 MHz in the Xilinx gmii2rgmii IP.
https://www.xilinx.com/support/docu...on/gmii_to_rgmii/v3_0/pg160-gmii-to-rgmii.pdf
 

Actually, I have to build some LFSRs that generate data at different rates; each has a constant rate, but the rates differ from one another. Now I need to collect all this data in a memory, but since the clock of a memory has a fixed period, I think I have to design a clock whose period is such that the memory collects data from all the LFSRs. So I am trying to generate that clock signal internally, synchronized with all the LFSRs.

If I have 2 clocks (each driving 2 LFSRs), then in the testbench I found that if I NOT a clock and then AND it with the original clock, I get a clock with a very narrow pulse only at the rising edge of the clock signal; but this only works if the clock and its NOTed version have at least 2 ns of delay between them. The same is done for the 2nd clock. Now, if the AND outputs of both clocks are ORed, I get a clock whose rising edge falls on the rising edge of either clock. Yes, the duty cycle may not be 50%, but at least it works. So I need delay...
 

Why don't I need delay? If I just AND the clk with its NOTed version (without any delay), I think I should get a constant logic '0'.

ANDing the clock with its inverted version, as you are proposing, shows that you have a very unusual understanding of digital design and/or FPGAs. Just use a PLL, for your own sake.
 

And/or, along with the PLL, you might want to use clock enables instead of multiple clocks driving the various LFSRs. I'm not sure you have a solid grasp of synchronous digital design, as you seem to want to design everything in an FPGA in an asynchronous way.

You should really try to stick to doing designs with only a single clock domain until you've mastered it.
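A minimal sketch of that clock-enable style, assuming one system clock and an illustrative divide ratio: each LFSR runs from the same clock as the memory and only advances when its enable pulses, so all data is inherently synchronous:
Code:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity lfsr_ce is
  generic (DIV : natural := 4);    -- advance once every DIV clocks (illustrative)
  port (clk      : in  std_logic;  -- single system clock, shared with the memory
        lfsr_out : out std_logic);
end entity;

architecture rtl of lfsr_ce is
  signal divcnt : unsigned(7 downto 0) := (others => '0');
  signal ce     : std_logic;
  signal lfsr   : std_logic_vector(7 downto 0) := x"01";  -- non-zero seed
begin
  -- Counter-derived clock enable: one pulse every DIV clock cycles
  ce <= '1' when divcnt = DIV - 1 else '0';

  process (clk)
  begin
    if rising_edge(clk) then
      if ce = '1' then
        divcnt <= (others => '0');
        -- 8-bit Fibonacci LFSR, taps 8,6,5,4 (maximal length)
        lfsr <= lfsr(6 downto 0) & (lfsr(7) xor lfsr(5) xor lfsr(4) xor lfsr(3));
      else
        divcnt <= divcnt + 1;
      end if;
    end if;
  end process;

  lfsr_out <= lfsr(7);
end architecture;

With everything on one clock, writing an LFSR's output into the memory is just a matter of sampling it when its ce is high; no generated clocks or LUT delays are needed.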
 

But the LFSRs are driven by different clocks... Yes, I know I can't provide so many clocks in an FPGA. Actually, I will get this data from external sources at different rates. I can't understand how to collect all this data in memory if the memory clock is not synchronized with all these LFSRs. Please give me any idea... The LFSRs are just built for checking.
 

The application topology is still rather vague. Up to now you have described LFSRs that are clocked at different rates. How do external clocks come into play?
 
