The bit rate is 1.0 megabit per second (1 bit per μs). The combined accuracy and long-term stability of the bit rate is only specified to be within ±0.1%; the short-term clock stability must be within ±0.01%.
I am trying to work out how to handle this accuracy requirement in RTL, but I don't fully understand it.
Here is how I see it:
1 Mb/s ± 0.1% (worst case) => 1 bit every 1 µs ± 1 ns
Here is what I did:
I chose to sample the data with an 8 MHz clock and tolerated 1 clock cycle (125 ns) of error.
This means I will consider a bit period of 1 µs ± 125 ns as valid.
For example, 1 µs − 125 ns = 875 ns, but this is outside the ±0.1% (±1 ns) required by the standard.
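To make the numbers concrete, here is a small Python sketch (assuming a nominal 1 Mb/s bit rate and the 8 MHz sampling clock described above) comparing the transmitter tolerance from the standard with the measurement granularity of the oversampling receiver:

```python
# Nominal 1553 bit timing versus an 8 MHz oversampling receiver.
# Assumption: the +/-0.1% figure constrains the *transmitter* clock;
# the receiver's 125 ns sampling granularity is a separate design choice.

BIT_RATE_HZ = 1_000_000      # 1 Mb/s nominal bit rate
SAMPLE_RATE_HZ = 8_000_000   # 8x oversampling clock
TX_TOLERANCE = 0.001         # +/-0.1% combined accuracy / long-term stability

bit_period_ns = 1e9 / BIT_RATE_HZ             # 1000 ns nominal
tx_window_ns = bit_period_ns * TX_TOLERANCE   # +/-1 ns allowed on the wire
sample_period_ns = 1e9 / SAMPLE_RATE_HZ       # 125 ns per sample

print(f"bit period on the wire: {bit_period_ns - tx_window_ns:.0f}"
      f" .. {bit_period_ns + tx_window_ns:.0f} ns")
print(f"receiver granularity:   {sample_period_ns:.0f} ns"
      f" ({bit_period_ns / sample_period_ns:.0f} samples per bit)")
```

The point of the sketch: a compliant transmitter will always place a bit edge within ±1 ns of nominal, so a receiver that accepts anything within ±125 ns is not violating the standard; it is simply measuring with coarser granularity than the transmitter is allowed to wander.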
Forgive me if this really concerns the analog part of the system; I am only designing the RTL code.
Does that requirement have an impact on the digital decoder?
Another difficulty I am having is finding a good paper on 1553 transceivers (to see how the analog signal is converted to digital form and what tolerance is applied to it).
I would interpret that as the accuracy required when transmitting a signal. I haven't studied the standard closely enough to determine whether there is a required phase relationship between the transmit clock and the recovered clock (in your case, an oversampling clock).
A software-based RTL decoder using only 8x oversampling could compromise the error rate in noisy conditions. Transmission-line group-delay distortion can also add phase noise, degrading the eye pattern.
Sorry, but I can't help you with the RTL code itself, except to say that you would be better off with an analog discriminator and a FIFO if you cannot run synchronously with the incoming synchronous data.
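The concern about 8x oversampling can be roughly quantified. The sketch below (my own assumed numbers, not from the standard: a 20-bit-time 1553 word, re-sync only at the word sync, and worst-case ±0.1% mismatch at each end) adds the sampling-phase uncertainty to the clock drift accumulated over one word:

```python
# Worst-case timing uncertainty for an 8x oversampling decoder.
# Assumptions (hypothetical worst case, not taken from MIL-STD-1553):
# the decoder aligns its phase only on the word sync, then counts
# samples across the remaining bits of a 20-bit-time word.

SAMPLE_NS = 125.0        # 8 MHz sampling period
WORD_BITS = 20           # 3-bit-time sync + 16 data + parity
CLOCK_MISMATCH = 0.002   # +/-0.1% at TX plus +/-0.1% at RX

phase_uncertainty_ns = SAMPLE_NS / 2             # edge lands anywhere within a sample
drift_ns = WORD_BITS * 1000.0 * CLOCK_MISMATCH   # accumulated over a 20 us word

print(f"sampling phase uncertainty: +/-{phase_uncertainty_ns:.1f} ns")
print(f"drift over one word:        +/-{drift_ns:.1f} ns")
print(f"total worst case:           +/-{phase_uncertainty_ns + drift_ns:.1f} ns"
      f" of a 500 ns Manchester half-bit")
```

Even in this pessimistic case the error stays around ±100 ns of the 500 ns half-bit, so the timing budget closes on paper; it is noise and group-delay distortion eating into that remaining margin that motivates either more oversampling or an analog discriminator.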