
[SOLVED] Metastability and data loss

Status
Not open for further replies.

sherline123

Hi all,

After reading many posts on this forum, I've become confused.
My understanding of metastability is that it causes the output to be X, which means it can take any value.
With a high MTBF, we have a high probability of avoiding metastability.
My question is: does this mean we will get a correct output value? Or does it just mean the output won't be an intermediate value between "0" and "1", but could still be a wrong value?
Because if the output value is wrong, it could still lead to data loss.
 



Let me try to simplify things. The output going to X is just a simulation artifact. In silicon, the output always settles to a definite value after passing through intermediate levels between 0 and 1. With synchronizers, FIFOs, or other mechanisms that reduce metastability, you cannot guarantee a correct output, but you can make sure the output is stable. Generally, when crossing clock domains, the sender should keep the data stable for a certain number of clock cycles so it can be sampled by the receiver.

The golden rule is:
A synchronizer adds delay, but you will get a stable value at its output.

As a result of the golden rule above, downstream logic will be safe.
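Since the thread started from MTBF: the commonly quoted synchronizer MTBF formula is MTBF = e^(t_r/τ) / (T_w · f_clk · f_data). Here is a minimal Python sketch of it; the device constants (τ, T_w) and the frequencies are made-up illustrative values, real numbers come from the FPGA/ASIC vendor's characterization data:

```python
import math

# Illustrative MTBF calculation for a synchronizer flip-flop.
# MTBF = e^(t_r / tau) / (T_w * f_clk * f_data)
# tau and T_w below are assumed example values, not real device data.

tau = 50e-12      # metastability resolution time constant (s), assumed
t_w = 100e-12     # metastability capture window (s), assumed
f_clk = 100e6     # destination clock frequency (Hz)
f_data = 10e6     # average data toggle rate (Hz)

def mtbf(t_resolve):
    """MTBF in seconds for a given resolution-time budget."""
    return math.exp(t_resolve / tau) / (t_w * f_clk * f_data)

# One flop: resolution time is roughly what is left of one clock period.
# Two-flop synchronizer: almost a full extra period to resolve.
for t_r in (2e-9, 9e-9):
    years = mtbf(t_r) / (3600 * 24 * 365)
    print(f"t_r = {t_r * 1e9:.0f} ns -> MTBF = {years:.3g} years")
```

The point is the exponential: giving the first flop almost a full extra clock period to resolve multiplies the MTBF by e^(Δt/τ), which is why a second stage helps so dramatically.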
 

You should ask yourself what you consider wrong data.

1. Single-bit data, toggling at arbitrary times. You'll read either the old or the new data. Both are equally correct, just different.

2. A multi-bit entity. Individual bits can be expected to have a certain skew. If you happen to sample a data word while the value is changing, you may get a wrong value. Example: 0x7ff changing to 0x800. You may sample 0x000, 0xfff, or any value in between. Multi-bit data must be transferred consistently, e.g. by using a clock-domain-crossing FIFO or some kind of handshake.
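The 0x7ff → 0x800 example can be sketched in Python. This is a behavioral toy, not a circuit model: each bit is simply assumed to resolve independently to its old or new value when sampled mid-transition. The `to_gray` helper is an addition of mine, to show why Gray-coded counters (as used inside CDC FIFOs) dodge the problem:

```python
import random

# Sketch: sampling a 12-bit bus while it changes from 0x7FF to 0x800.
# Every bit flips on this transition, and each bit has its own skew,
# so the sampler may capture any mix of old and new bits.

OLD, NEW = 0x7FF, 0x800

def sample_mid_transition(rng):
    """Each bit independently resolves to its old or new value."""
    value = 0
    for bit in range(12):
        src = NEW if rng.random() < 0.5 else OLD
        value |= src & (1 << bit)
    return value

rng = random.Random(1)
samples = {sample_mid_transition(rng) for _ in range(1000)}
# Anything from 0x000 to 0xFFF can show up, not just 0x7FF or 0x800.
print(sorted(hex(s) for s in samples)[:5])

# A Gray-coded counter avoids this: consecutive values differ in exactly
# one bit, so a mid-transition sample is always either old or new.
def to_gray(n):
    return n ^ (n >> 1)

assert bin(to_gray(0x7FF) ^ to_gray(0x800)).count("1") == 1
```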
 


Wrong data, to me, is unexpected output. Let's say I am sending a '1' into the synchronizer and the output is '0'. To me, that is a wrong output value.
The reason I am asking is that IF the input data frequency differs from the destination clock frequency, then whenever you fail to sample the data, it is simply missed, which leads to data loss.
 

Hi,

Wrong data to me is unexpected output. Let's say I am sending in a '1' to synchronizer and the output is '0'. To me, this is wrong output value.
X means the state is undefined = unknown.
So if you send a "1" and you can't be sure the output is "1", then the output has to be considered "false".
It doesn't matter whether the output really is "0", or something "intermediate" (which in the end will still be interpreted as "0" or "1"), or whether by accident you get a "1" ... you can't rely on the output.

The reason I am asking this question is IF input data frequency is different with destination clock frequency, when you fail to sample then it will just miss the data and lead to data loss.
This has nothing to do with "metastability". This is rather "undersampling".

Thus your target clock frequency has to be synchronous to the data or, per Nyquist, higher than twice the data rate.
The details depend on the interface and its specification.

Klaus
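The undersampling point can be shown with a tiny Python sketch (times in arbitrary units, values chosen only for illustration): a pulse shorter than the destination clock period can fall entirely between two sample points and is lost, with no metastability involved.

```python
# Sketch of "undersampling": a short pulse versus the destination clock.

def sample(pulse_start, pulse_len, clk_period, n_edges):
    """Return the level seen at each destination clock edge."""
    seen = []
    for k in range(n_edges):
        t = k * clk_period
        seen.append(1 if pulse_start <= t < pulse_start + pulse_len else 0)
    return seen

# 3-unit pulse, 10-unit clock period: the pulse fits between two edges.
print(sample(pulse_start=4, pulse_len=3, clk_period=10, n_edges=5))
# -> [0, 0, 0, 0, 0]: the pulse is lost entirely.

# Same pulse, 2-unit clock period (faster than twice the pulse width):
print(sample(pulse_start=4, pulse_len=3, clk_period=2, n_edges=25))
# Now the samples contain 1s: the pulse is captured.
```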
 

Wrong data to me is unexpected output. Let's say I am sending in a '1' to synchronizer and the output is '0'.
That's not possible.
 

That's not possible.

But when metastability happens, it is possible, right?



This is what I am asking. Adding a synchronizer only guarantees that the output is a defined signal (0 or 1); it does not guarantee that it is the correct output.
If metastability happens, the output can still be wrong, but it is a defined signal. Is this correct?
 

Hi,

If metastability happens, the output still can be wrong but it is a defined signal. Is this correct?
No, metastability = X = undefined.

But a synchronizer output should be considered a defined logic level.
AFAIK a synchronizer cannot eliminate metastability at its output 100%, but it greatly reduces the chance of it.
Many papers recommend a two-stage synchronizer to reduce the chance of metastability to very close to zero.

Klaus
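The two-flop synchronizer behavior can be illustrated with a toy Python model. The key assumption (a deliberate simplification, not physics) is that a timing violation makes the first flop resolve to a random but valid 0/1 within one cycle, so the second flop always forwards a settled level, possibly delayed, never 'X':

```python
import random

# Toy model of a two-flop synchronizer. Assumption: when the async input
# changes inside the capture window, the first flop goes metastable but
# resolves to a random 0/1 before the next clock edge. The second flop
# then samples a settled value, so downstream logic never sees 'X'.

def two_flop_sync(input_events, n_cycles, rng):
    ff1, ff2 = 0, 0
    out = []
    for cycle in range(n_cycles):
        d, violates_timing = input_events(cycle)
        ff2 = ff1                       # second stage samples settled ff1
        if violates_timing:
            ff1 = rng.choice([0, 1])    # metastable: resolves randomly
        else:
            ff1 = d
        out.append(ff2)
    return out

# Input goes 0 -> 1 at cycle 3, right inside the capture window.
events = lambda c: (1 if c >= 3 else 0, c == 3)
print(two_flop_sync(events, 8, random.Random(7)))
```

The output list contains only 0s and 1s; the new value may simply show up one cycle late, which is exactly the extra latency a synchronizer trades for a stable output.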
 


Okay, sorry, I overlooked that while typing. A metastable output will be X.
A synchronizer only lowers the probability of metastability, but the output can still be wrong (as a defined logic level).
Is this correct?
 

Hi,

Metastability is caused by violating setup and hold timing.
One may avoid it by using a synchronous signal, but this may be impossible.

Then yes: reduce the chance of metastability by using synchronizers.

But you may use oversampling and filtering techniques in such a way that you completely avoid wrong signals.
With two signals of known, different clock frequencies, there is no chance that metastability happens on two consecutive clock edges.

Please note:
Metastability does not happen randomly (although it sometimes may look like it).
Metastability does not happen when the input signal is HIGH or LOW.
Metastability happens only when the input signal is in transition between HIGH and LOW while the flip-flop's active clock edge also occurs.

Draw this situation on paper and let random people decide what the expected output is. Some will say HIGH, some will say LOW, some will say "don't know". --> There is no "right" and "wrong" ... but "don't know" is not acceptable in a digital system.
The synchronizer just reduces the "don't know" to almost zero. But there is still the "HIGH" or "LOW" problem. Your receiver should be able to handle it.

Klaus
 

Yes. Even with a synchronizer, if bits are sampled as they're changing, you'll randomly end up with either the old value or the new value.

In a multi-bit vector you can end up with a mix of old bits and new bits, giving you a value that is 'wrong' (neither the old value nor the new value).
 
The OP apparently doesn't care about the consistency of multi-bit data. I already asked about this point, and he insists on discussing the correctness of single-bit data.

Asynchronously sampled single-bit data won't be incorrect. By the nature of asynchronous processing, the data changes at arbitrary times relative to the sample clock. Accordingly, you don't know in advance whether a value sampled near a transition is the old or the new one.

A simple example is an asynchronously sampled UART signal. The sample rate must be high enough to reproduce the bit stream, e.g. four times the bit rate. Samples near the edges may be either low or high; you still get the correct bit pattern.
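The UART case can be sketched as a quick Python experiment. The model is deliberately crude (the assumption here: exactly one sample per bit lands on the edge and resolves randomly to either neighbor's value), but it shows that picking a mid-cell sample recovers the bit pattern despite the uncertain edge samples:

```python
import random

# Sketch: recover an async bit stream by 4x oversampling and keeping the
# middle sample of each bit. Samples landing on an edge are modeled as
# random (old or new value), yet the recovered pattern is still correct.

def oversample(bits, rng):
    """4 samples per bit; the first sample of each bit sits near the
    edge and may resolve to either neighbor's value."""
    samples = []
    prev = bits[0]
    for b in bits:
        samples.append(rng.choice([prev, b]))  # edge sample: old or new
        samples.extend([b, b, b])              # settled samples
        prev = b
    return samples

def recover(samples):
    # Take sample 2 of every group of 4: safely inside the bit cell.
    return [samples[i + 2] for i in range(0, len(samples), 4)]

bits = [1, 0, 0, 1, 1, 0, 1]
assert recover(oversample(bits, random.Random(3))) == bits
```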
 

I'm not sure how to understand multi-bit data if I can't even understand how single-bit data works.
But thanks all, I have found the answer.
 
