
Metastability in flops: a question not too many engineers can answer.

Nice that you visit your thread again.
FvM, you keep missing the point.
I'm discussing metastability from a practical viewpoint, because I'm using synchronizers every day. So I'm quite sure that the textbook statements in question are correct.

As you surely noticed, I'm not the only one who can't follow your considerations. I don't see any purpose in repeating the arguments already made, or in trying to convince someone who doesn't listen.

In short, you're saying that the well-known textbook theory about synchronizers is wrong. But the theory is quite congruent with empirical results. I tend towards the obvious assumption that you simply didn't understand the theory.

Finally:
What happens if the incoming data is changing on every clock cycle?
1. You can't synchronize multibit data with an FF chain. The data can become inconsistent, because each bit can only be synchronized on its own. It works under special conditions, e.g. Gray-encoded numbers that change incrementally, by less than 1 count per clock cycle. So the discussion is clearly about single-bit binary data (see the synchronizer sketch after this list).

2. Binary data from an unrelated clock domain that changes on every clock cycle simply gives an 'X' result. No synchronizer can help.
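For reference, here is a minimal sketch of the standard two-stage single-bit synchronizer the textbook discussion is about (module and signal names are mine, chosen for illustration):

Code:
// Minimal two-stage (two-FF) synchronizer sketch for a single-bit signal.
// The first stage may go metastable; the second stage gives it a full
// clock period to resolve before the rest of the design sees it.
module sync_2ff (
    input  wire clk_dst,   // destination-domain clock
    input  wire async_in,  // single-bit signal from an unrelated domain
    output wire sync_out   // safe to use in the clk_dst domain
);
    reg meta, stable;

    always @(posedge clk_dst) begin
        meta   <= async_in;  // may violate setup/hold: metastability risk
        stable <= meta;      // gets one clock period of resolution time
    end

    assign sync_out = stable;
endmodule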
 

Binary data from an unrelated clock domain that changes on every clock cycle simply gives an 'X' result

Data changing on every cycle or not, the risk of randomness of data at the output of the 1st stage is always present. How then do synchronizers work correctly in most designs (in terms of propagating the original data through)? I saw a similar thread on comp.lang.verilog on this topic which ended up with an inconclusive answer.
 

Data changing on every cycle or not, the risk of randomness of data at the output of the 1st stage is always present. How then do synchronizers work correctly in most designs (in terms of propagating the original data through)? I saw a similar thread on comp.lang.verilog on this topic which ended up with an inconclusive answer.

Look at the following input signal with 2 edges:

000000001111111100000000

Assume that we have a synchronizer with enough stages to eliminate the metastable condition before the last stage.
If both edges cause metastability and a random bit, we have the following 4 possible output sequences after the synchronizer:

000000001111111100000000
000000001111111110000000
000000000111111100000000
000000000111111110000000

All four sequences are clean; there are no glitches.
If any of these sequences causes a problem, the rest of the logic is badly designed.

The random bits always occur where the input signal changes, so any value for the random bit will give a clean output.
It is easier to treat the random bits as random jitter.
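To make the coin-flip picture concrete, here is a behavioral (non-synthesizable) sketch along these lines; the change detection stands in for a real setup/hold window check, and all names are illustrative:

Code:
// Behavioral sketch of the "random bit at each edge" view: whenever the
// async input has changed since the previous clock, the first stage is
// modeled as capturing either the old or the new value at random.
module edge_jitter_model (
    input  wire clk,
    input  wire async_in,
    output reg  sync_out
);
    reg stage1, prev;

    always @(posedge clk) begin
        if (async_in != prev)
            stage1 <= $random;   // LSB taken: old or new value, 50/50
        else
            stage1 <= async_in;
        prev     <= async_in;
        sync_out <= stage1;      // second stage, as in a real synchronizer
    end
endmodule

With the two-edge input above, the two random bits give exactly the four possible output sequences listed.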
 
std_match, the pulse is 8 bits wide in the input clock domain. Shouldn't it be reproduced as 8 bits wide at the destination side as well? Some of your output sequences have 9 and 7 bits.

Or maybe you're saying that since the input edge transitions themselves have caused metastability, there really is no reason to expect a correct replica at the output? As long as the inputs meet setup/hold times, integrity should be maintained; otherwise it doesn't really matter. Am I correct?
 

Transferring an input signal sourced from an unrelated clock domain implies that setup and hold times can't be guaranteed. If the input changes within the setup/hold window, the result can't be predicted; it can be either the old or the new signal state. Metastability need not be involved at all; it's simply a case of signal skew and jitter making the sampling result unknown.

The example shown by std_match is not about multiple bits; it's a single binary signal with two edges. The edge position, however, varies by one clock cycle.

If the edge position of binary signals becomes uncertain, you can't reliably transmit multibit signals, e.g. binary numbers, without additional measures. There are different methods to overcome this issue:
- in special cases, Gray encoding of the multibit data can help (see the sketch after this list)
- handshake or qualifier signals that mark a stable signal state; transmission can then occur only at a fraction of the clock speed
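As an illustration of the Gray-encoding case (the classic async-FIFO pointer technique; module and signal names are mine):

Code:
// Only one bit of a Gray code changes per increment, so sampling during a
// transition yields either the old or the new count, never an inconsistent
// mix of bits.
module gray_ptr_sync #(parameter W = 4) (
    input  wire         clk_dst,
    input  wire [W-1:0] gray_src,  // Gray-coded pointer from the source domain
    output wire [W-1:0] bin_dst    // binary value, usable in the clk_dst domain
);
    reg [W-1:0] meta, stable;

    // Two-stage synchronizer on the Gray-coded bus
    always @(posedge clk_dst) begin
        meta   <= gray_src;
        stable <= meta;
    end

    // Gray-to-binary conversion: bin[i] = XOR of gray[W-1:i]
    genvar i;
    generate
        for (i = 0; i < W; i = i + 1) begin : g2b
            assign bin_dst[i] = ^stable[W-1:i];
        end
    endgenerate
endmodule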
 

That's nothing new. Data crossing an async clock boundary must always be oversampled, by its asynchronous nature. I mean, when you send data to another clock domain, how can you let the receiving block capture it without oversampling? However, there are some cases where oversampling is not used. One of the exceptions is the domain crossing of pointers in a FIFO.

lostintranslation: when you say that's nothing new, can you point to some book or published paper that talks about this?

And FvM, the very definition of metastability is based on the assumption that setup/hold of a flop is violated. It has nothing to do with jitter. I'm not saying jitter can't cause metastability, but if it does, it's because of violating setup/hold times. I think Vijay is on the right track too.
 

lostintranslation: when you say that's nothing new, can you point to some book or published paper that talks about this?
It's more engineering common sense. Pick up any logic design book; they all describe how to do handshaking, and handshaking between different clock domains is essentially oversampling.
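A minimal sketch of such a qualifier-based transfer (the toggle-style request and all names are my illustrative choices):

Code:
// The source holds data_src stable and toggles req_src once per word; the
// destination synchronizes the toggle and then samples the stable bus.
// Throughput is only a fraction of the clock rate, i.e. the oversampling
// being discussed.
module handshake_capture #(parameter W = 8) (
    input  wire         clk_dst,
    input  wire         req_src,   // toggles once per new data word
    input  wire [W-1:0] data_src,  // held stable until the next toggle
    output reg  [W-1:0] data_dst,
    output reg          load       // one-cycle strobe in the clk_dst domain
);
    reg [2:0] req_sync;            // two sync stages plus one edge-detect stage

    always @(posedge clk_dst) begin
        req_sync <= {req_sync[1:0], req_src};
        load     <= req_sync[2] ^ req_sync[1];  // toggle detected
        if (req_sync[2] ^ req_sync[1])
            data_dst <= data_src;  // bus has been stable for several cycles
    end
endmodule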
 

How about a book on metastability that talks about some of the concerns we raised here? I don't care about handshake; I know how to do handshake, but handshake is not what we are talking about here. The books or papers should be written for those who are learning, not for an experienced engineer with common engineering sense. In most books or papers I have seen, when they want to illustrate the concept of metastability, they throw in two clock domains with two back-to-back flops and give a lame explanation of how magically things are rosy past the 2nd flop, with all signals resolving to exactly what you expect, without a clear explanation of some of the corner cases we tried to expose in this post. I stick to my initial point that there are no good books on this subject.
 

the very definition of metastability is based on the assumption that setup/hold of a flop is violated. It has nothing to do with jitter.
I agree. But I think it should be clarified that violating the setup/hold times doesn't necessarily produce metastability. Depending on the properties of the involved hardware, this happens perhaps in 1 of a million events, or less. It's a common misunderstanding that when you get an 'X' at the output, it must be a case of metastability. It's a case of timing violation, which implies a finite probability of producing metastability. In most cases, when you get an unpredictable result, it's just ordinary signal jitter.

P.S.: I don't know a book that treats metastability comprehensively. The basic theory, including the calculation of the probability of metastable events, is discussed in an Altera document: www.altera.com/literature/wp/wp-01082-quartus-ii-metastability.pdf
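For reference, the MTBF model used in that document and in most of the literature is the following, with all constants below chosen purely for illustration (real values are device-specific and come from characterization):

MTBF = e^(t_MET / C2) / (C1 * f_CLK * f_DATA)

where t_MET is the settling slack available before the next stage samples, and C1, C2 are device constants (roughly the sampling window and the resolution time constant). With assumed values t_MET = 9 ns, C2 = 50 ps, C1 = 1 ns, f_CLK = 100 MHz and f_DATA = 1 MHz: e^(9 ns / 50 ps) = e^180 ≈ 1.5e78, so MTBF ≈ 1.5e78 / (1e-9 · 1e8 · 1e6) s ≈ 1.5e73 s. That astronomically large number is why the extra resolution cycle provided by the second flop makes synchronizer failures negligible in practice.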
 

Data changing on every cycle or not, the risk of randomness of data at the output of the 1st stage is always present. How then do synchronizers work correctly in most designs (in terms of propagating the original data through)? I saw a similar thread on comp.lang.verilog on this topic which ended up with an inconclusive answer.
True, but if the second stage is set not to use that data (oversampling), and if the input of the 1st stage keeps the previous value for one more clock cycle, then you can apply the principles of MTBF and metastability to this circuit and safely assume that the probability of failure in the two back-to-back flops is negligible. However, the MTBF argument fails if the data changes on every clock cycle and there is no provision for oversampling at the destination flops.
 
