
FIFO used for safe transfer of data


sun_ray

Data of width 8 is written into a FIFO at a clock frequency f1. The data is read from the FIFO with a width of 16 at a frequency f2. Won't the functionality be affected in this case? 8-bit data arrives on the write side, but the data is read out 16 bits wide and then processed in the new clock domain as 16-bit words instead of the 8-bit width with which it was written. Won't this affect the functionality of the whole system?
 

No problem will occur if you make sure that:
1) the write speed is not so high that the FIFO memory becomes full;
2) the read speed is not so high that the FIFO runs empty.
 

This is done all the time. It all depends on your clock frequencies, burst rate, etc. Maybe ask a more specific question.
 

I recently worked with a similar synchronizer. As mentioned, the FIFO depth and the read/write rates determine whether you get the desired operation.
 

vipinlal/barry/avi3467

See, data of width 8 comes out of a design unit at the write clock frequency and is written into the FIFO with a data width of 8. But in the read clock domain you have a concatenation of two consecutive data words from the write clock domain. These concatenated 16-bit words are then processed at the read clock frequency in the design unit running at that clock. So in the read clock domain the data from the design unit running at the write clock frequency has changed its value: it is now a concatenated 16-bit value rather than the original 8-bit value. The 8-bit value is the correct/original value, and only to safely transfer the data through the FIFO did it have to be read as 16-bit data in the read clock domain. As a result, the original data got corrupted in the process of transferring it from the write clock domain to the read clock domain. How is this situation taken care of?

Regards
 

Why do you think that the concatenation of data equals the corruption of data?

As other posters have noted, this sort of thing is done all the time. As the designer of the system, you would have chosen the FIFO you described for a good reason. For example, one assumes that the 8-bit data is written into the FIFO such that byte 0 goes into the least significant byte and byte 1 into the most significant byte; then the circuitry in the read domain can easily know which byte is which.

Alternatively, the write data may be 16-bit data that was broken up into bytes to be sent, then the concatenation has restored the data rather than corrupting it.

If you have chosen this FIFO and designed the system, the 16-bit data will be no mystery to you. As long as you have designed the FIFO being mindful of read and write frequencies and burst rates or have implemented flow control, then this design will work reliably.
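
A minimal Verilog sketch of that write-side packing, assuming the byte-0-in-the-LSB convention above (the module, port, and signal names here are hypothetical; a width-converting FIFO would typically do this packing internally):

Code:
module byte_packer (
    input  wire        wr_clk,
    input  wire        rst,
    input  wire        byte_valid,  // a new byte is present on byte_in
    input  wire [7:0]  byte_in,
    output reg  [15:0] wr_word,     // 16-bit word presented to the FIFO
    output reg         wr_en        // pulses when a full word is ready
);
    reg have_lsb;  // set once the first byte of a pair has arrived

    always @(posedge wr_clk) begin
        if (rst) begin
            have_lsb <= 1'b0;
            wr_en    <= 1'b0;
        end else begin
            wr_en <= 1'b0;
            if (byte_valid) begin
                if (!have_lsb) begin
                    wr_word[7:0] <= byte_in;   // byte 0 -> least significant byte
                    have_lsb     <= 1'b1;
                end else begin
                    wr_word[15:8] <= byte_in;  // byte 1 -> most significant byte
                    have_lsb      <= 1'b0;
                    wr_en         <= 1'b1;     // push the completed word
                end
            end
        end
    end
endmodule

With a fixed convention like this, the read side always knows byte 0 sits in wr_word[7:0].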

r.b.
 

Alternatively, the write data may be 16-bit data that was broken up into bytes to be sent, then the concatenation has restored the data rather than corrupting it.

I agree that in this case there is no corruption of data.

But suppose that is not the case, i.e. the write data is not 16-bit data that was broken up into bytes to be sent. Suppose also that the FIFO has been designed properly, taking care of the read and write frequencies and burst rates, so that it works reliably. Still, in my example the originating station is sending data as 8 bits and the receiving station is getting the data as 16 bits. Won't it be necessary in that case to separate the 16-bit data back into 8-bit pieces in the receiving domain?
Otherwise the receiving domain will have to process data 16 bits wide, and that is wrong. This will result in corruption of data, as the receiving domain is processing 16-bit-wide data per clock cycle. Isn't it? Can you or anybody explain how this is taken care of in practice?

Regards
 

In every case there is no corruption of data.

Let's not forget. YOU are the designer of the system. You saw a need for packing two bytes into a 16-bit word. You designed the FIFO. You placed the bits of those two bytes into the FIFO in an order of your choosing.

On the read side, you are reading the exact 16-bit word you designed. You know the position and purpose of every bit, nybble, field or byte in that word. Therefore you will know exactly how to process it.

These FIFOs do not sprout up by accident, nor do they randomly rearrange bits, so nothing will ever be unknown to you.

r.b.
 

In every case there is no corruption of data.

Let's not forget. YOU are the designer of the system. You saw a need for packing two bytes into a 16-bit word. You designed the FIFO. You placed the bits of those two bytes into the FIFO in an order of your choosing.

On the read side, you are reading the exact 16-bit word you designed. You know the position and purpose of every bit, nybble, field or byte in that word. Therefore you will know exactly how to process it.

These FIFOs do not sprout up by accident, nor do they randomly rearrange bits, so nothing will ever be unknown to you.

r.b.

I understand your logic. But I want to know, in the specific case I described, how we should take care of this situation, since the read side has 16-bit data. An engineer who has solved such a situation in practice will be able to answer how it is handled. The purpose of this thread is to find that out.
CAN ANYBODY PLEASE REPLY TO MY QUESTION?

I agree, rberek, that I am the designer and I should take care of the situation and design in such a way that no corruption happens. But I want to know how it is taken care of in practice.
 

sun_ray, what is your REAL problem? You accept the fact that the FIFO is doing EXACTLY what it is supposed to: you write in two bytes, one at a time; you read out those two bytes, two bytes at a time. THE BYTES DON'T CHANGE. Where is the problem?
 

sun_ray, what is your REAL problem? You accept the fact that the FIFO is doing EXACTLY what it is supposed to: you write in two bytes, one at a time; you read out those two bytes, two bytes at a time. THE BYTES DON'T CHANGE. Where is the problem?

I accept that. But my problem is that two bytes at a time will be processed by the read-side design unit, so it is processing data two bytes wide per clock cycle, while on the write side the data is written one byte at a time. That is what leads to the corruption of data. Read all my posts in this thread carefully to understand the issue I am stating.

Regards
 

There's no corruption, surely you concede? Here is an example, where the sender wrote a total of 6 bytes (aa,bb,cc,dd,ee,ff in hexadecimal) at time=0,1,2,3,4,5:

Code:
time  Write  Read
 0     aa     -
 1     bb     -
 2     cc     -
 3     dd     -
 4     ee     aabb
 5     ff     ccdd
 6     -      eeff

Where is the corruption? Can you draw a diagram like this to explain?
 

You might think you are stating your question very clearly, but trust me, you are not.

Most FIFOs make use of a not-empty/full signal set, or a mutually readable fill count, to indicate to the read side that the FIFO contains valid data and to the write side that the FIFO is not full. So in your example, the write side would increment the fill count when it had filled a 16-bit word. At that point, the read side would see that the FIFO is not empty and read the data. The act of reading the data decrements the fill count. As long as the fill count is not at maximum, or some other threshold, the write side can continue. As long as it is not zero, the read side can read.
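
In Verilog terms the gating is one line per side (a sketch with hypothetical signal names; the full/empty flags come from the FIFO's own pointer-comparison logic):

Code:
// Write domain: push only while the FIFO reports space available.
wire fifo_wr_en = word_ready && !fifo_full;

// Read domain: pop only while the FIFO reports valid data present.
wire fifo_rd_en = consumer_ready && !fifo_empty;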

Furthermore, if the read side is much faster than the write side, you will never fill your FIFO and thus this transfer can happily continue forever. If the read side is much slower than the write, then you will have to implement flow control, or if the data is bursty in a predictable way, you can size the FIFO accordingly. If neither is true/possible, you are screwed.

Again, I don't know if this is the missing info. If not I am at a loss as to what you mean.

r.b.
 

I accept that. But my problem is that two bytes at a time will be processed by the read-side design unit, so it is processing data two bytes wide per clock cycle, while on the write side the data is written one byte at a time. That is what leads to the corruption of data. Read all my posts in this thread carefully to understand the issue I am stating.

Regards

sun_ray, we have all 'read your posts carefully' and everyone here, with the exception of you, agrees that there is no corruption. Are you assuming your data is 'corrupt' because it is 2 bytes wide instead of one? If so, why are you reading 2-byte-wide data?

Perhaps you need to start over and re-state your problem.
 

And include a very specific example of the corruption you are concerned about.

r.b.
 

Are you assuming your data is 'corrupt' because it is 2 bytes wide instead of one?

Yes, I am saying it is corrupt because it is 2 bytes wide instead of one. I am saying it is corrupted because these two bytes together form a new data value which is different from the corresponding write-side data. Please see the example below, where aabb is read in the first clock cycle instead of aa.

If so, why are you reading 2-byte-wide data?
The reason two bytes are being read is only to safely transfer the data using the FIFO. It is mandatory to read 2 bytes to safely transfer the data; the data could not be read as eight bits while using this FIFO for the transfer. I would have been happy if the data could be read as eight bits. How is this situation taken care of in practice?


There's no corruption, surely you concede? Here is an example, where the sender wrote a total of 6 bytes (aa,bb,cc,dd,ee,ff in hexadecimal) at time=0,1,2,3,4,5:

Code:
time  Write  Read
 0     aa     -
 1     bb     -
 2     cc     -
 3     dd     -
 4     ee     aabb
 5     ff     ccdd
 6     -      eeff

Where is the corruption? Can you draw a diagram like this to explain?
So on the write side you are getting the data as aa in the first clock cycle, bb in the second clock cycle, and so on. But on the read side you are not getting 'aa' as the data in the first clock cycle; you are getting 'aabb'. So in the first clock cycle you are reading aabb, and not the correct data aa. On the read side you are reading aabb, ccdd, eeff in subsequent clock cycles instead of the correct data sequence aa, bb, cc. This is how the corruption of data is happening.

Regards
 

And your example shows absolutely no corruption. For some reason you still seem to think that concatenation == corruption. It does not. Or perhaps the word corruption does not mean what you think it means.

In your example, you send:

aa bb cc dd ee ff

And you receive:

aabb ccdd eeff

All the same bytes, just grouped differently. Grouped the way YOU chose, by the way. You packed the bytes so you know how to unpack them. And you packed them that way for a reason, so you must want to use aabb etc.

Even if you don't, just split the read word up into two bytes. Use word[15:8] as one byte and word[7:0] as the other. Voila, you have your bytes back.
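
If it helps, here is a minimal Verilog sketch of that unpacking (the names are hypothetical, and it assumes a first-word-fall-through FIFO whose current word is visible whenever empty is low): it pops one 16-bit word and presents its two bytes on successive read-clock cycles.

Code:
module byte_unpacker (
    input  wire        rd_clk,
    input  wire        rst,
    input  wire        fifo_empty,
    input  wire [15:0] fifo_word,   // aabb, ccdd, eeff, ...
    output reg         fifo_rd_en,  // pops the word just consumed
    output reg  [7:0]  byte_out,    // aa, bb, cc, dd, ... one per cycle
    output reg         byte_valid
);
    reg [7:0] pending;       // holds the second byte of the pair
    reg       have_pending;

    always @(posedge rd_clk) begin
        if (rst) begin
            fifo_rd_en   <= 1'b0;
            byte_valid   <= 1'b0;
            have_pending <= 1'b0;
        end else begin
            fifo_rd_en <= 1'b0;
            byte_valid <= 1'b0;
            if (have_pending) begin
                byte_out     <= pending;          // second byte, e.g. bb
                byte_valid   <= 1'b1;
                have_pending <= 1'b0;
            end else if (!fifo_empty) begin
                byte_out     <= fifo_word[15:8];  // first byte, e.g. aa
                pending      <= fifo_word[7:0];
                byte_valid   <= 1'b1;
                have_pending <= 1'b1;
                fifo_rd_en   <= 1'b1;             // advance the FIFO
            end
        end
    end
endmodule

This consumes one FIFO word every two rd_clk cycles, so the usual rate and depth considerations discussed earlier still apply.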

Or, just use a symmetric 8-bit-wide FIFO if you don't want bytes grouped into 16-bit words.

I don't know what else to say.

r.b.
 

Are you translating from Chinese to English? Then back to Chinese, then to German and then to English again? This would explain things.

As has been stated a few times now, concatenation (aa bb ==> aabb) is NOT corruption. Too bad if aabb is not what you want, but in the accepted sense that's not "corruption".

Given that no-one understands the problem you seem to be having, can you post example code of what you mean?

Also:

The reason two bytes are being read is only to safely transfer the data using the FIFO. It is mandatory to read 2 bytes to safely transfer the data; the data could not be read as eight bits while using this FIFO for the transfer. I would have been happy if the data could be read as eight bits. How is this situation taken care of in practice?

Why is that mandatory? Because of the FIFO you (or your coworkers) chose to use? There are plenty of FIFOs in use where you can read just 1 byte at a time "to safely transfer the data".
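
For instance, something like the following keeps the data 8 bits wide on both sides, so every byte is read out exactly as it was written (the module and port names below are made up for illustration; any vendor's dual-clock FIFO with equal port widths would do):

Code:
async_fifo #(
    .DATA_WIDTH (8),     // same width on the write and read ports
    .DEPTH      (512)
) u_byte_fifo (
    .wr_clk  (wr_clk),
    .wr_en   (byte_valid && !full),      // write-domain gating
    .wr_data (byte_in),
    .full    (full),
    .rd_clk  (rd_clk),
    .rd_en   (consumer_ready && !empty), // read-domain gating
    .rd_data (byte_out),
    .empty   (empty)
);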

Anyways, a clear example would be nice. And maybe an explanation of why you think your previous "corruption" example is so corrupt.
 

And you packed them that way for a reason, so you must want to use aabb etc.

In the above posts I mentioned many times that I did not want to pack them. The data had to be packed to transfer it from the write domain to the read domain. I do not want to use aabb; I want to use aa in the first clock cycle and bb in the next clock cycle.

You packed the bytes so you know how to unpack them.

Can you provide a way to unpack them?

Even if you don't, just split the read word up into two bytes. Use word[15:8] as one byte and word[7:0] as the other. Voila, you have your bytes back.

In any one read clock cycle you will still read word[15:8] and word[7:0] together, so you will read aabb in the first clock cycle.


Let me clarify the issue further. Suppose the data being written into the FIFO is a movie, which consists of many images. The movie is processed in a digital system (named System_A) running on a clock named wr_clk. Now some portions of this movie need to be processed in another digital system (System_B), which can enhance the display capability, and this system runs on a clock named rd_clk. So we transferred the data from System_A to System_B using a FIFO, and the output came out as aabb, ccdd instead of aa, bb, cc, dd. On the write side the first data item is aa (say, the image of a finger of a palm) and the next is bb (say, the image of a man's leg). So when aabb is processed in System_B, it is processing an unwanted image containing the finger and the leg together. We want the finger to be processed first and then the leg next in System_B. This corrupts the image, because per clock cycle you are getting aabb, ccdd, and these are being processed in System_B.

Regards
 
