#### What's all this stuff about ISI and RLL? Anyhow?

29th October 2014 at 12:10 (5478 Views)
Line-code spacing is generally defined by the range between data transitions, and the code is chosen to limit the DC content of the data. Bi-phase coding, which is self-clocking and reliable, was used in early 1.544 Mb/s telephony repeaters; tri-level coding came later. Similar choices appeared in early Ethernet line coding.
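A minimal sketch of bi-phase (Manchester) encoding illustrates the self-clocking property: every bit cell contains a mid-cell transition, so a receiver can always recover the clock from the data. The polarity convention below (1 → low-high) is one common choice; conventions vary between standards.

```python
def manchester_encode(bits):
    """Bi-phase (Manchester) encoding: 1 -> low-high, 0 -> high-low.
    Every bit cell contains a mid-cell transition, so the clock can
    always be recovered from the data -- the self-clocking property.
    The two half-cells of each bit always differ, so balanced data
    carries no DC content."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

encoded = manchester_encode([1, 0, 0, 1])  # -> [0,1, 1,0, 1,0, 0,1]
```

Note that the price of self-clocking is bandwidth: Manchester uses two line symbols per data bit, which is exactly what the RLL codes below try to avoid.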

For magnetic media, AC coding with the maximum bit density and the lowest transition rate was, in the early days, a binary transition coding method called MFM, which put transitions on ones ("1") and between consecutive zeroes ("0"). Later, in the mid-80s, this was expanded to longer patterns called Run-Length Limited, or RLL, coding.
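The MFM rule described above can be sketched as follows: each data bit maps to a (clock, data) slot pair, with a transition in the data slot for a "1" and a transition in the clock slot only between two consecutive "0"s.

```python
def mfm_encode(bits):
    """Modified FM (MFM): each data bit becomes a (clock, data) slot pair.
    The data slot carries a transition (1) for a '1' bit; the clock slot
    carries a transition only between two consecutive '0' bits. Runs of
    zeros between transitions are thus bounded -- MFM is a (1,3) RLL code."""
    out, prev = [], 0
    for b in bits:
        out.append(1 if (prev == 0 and b == 0) else 0)  # clock slot
        out.append(b)                                   # data slot
        prev = b
    return out

# mfm_encode([1, 0, 1]) -> [0,1, 0,0, 0,1]
# mfm_encode([0, 0])    -> [1,0, 1,0]  (clock transition between zeroes)
```

The worst case of three zero slots between transitions is why MFM is classified as (1,3) in the RLL notation introduced below.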

There are many RLL variants, specified by (min, max) run-length constraints, that stretch the symbol interval or lower the bandwidth needed for a given bit rate, so that data edges are less affected by the channel's pattern-dependent delay or phase shift.

ISI is always pattern-dependent: worst-case patterns range from 001001... and 0101... to 00110011... to 000000010000000..., and each RLL code has several different worst-case patterns.
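A quick way to see how different these worst-case patterns are is to measure their run lengths, since the run of identical symbols sets the lowest frequency content a pattern produces:

```python
def run_lengths(pattern):
    """Return the (min, max) run of identical symbols in a binary
    pattern string -- a rough proxy for its spectral extremes."""
    runs, count = [], 1
    for a, b in zip(pattern, pattern[1:]):
        if a == b:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return min(runs), max(runs)

# The worst-case patterns above span very different run lengths:
results = {p: run_lengths(p) for p in
           ("0101", "00110011", "000000010000000")}
```

Patterns mixing the shortest and longest runs exercise the widest spread of channel delay, which is why they make good worst-case tests.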

We get a maximum symbol rate for a given SNR and BER from Shannon's theorem. If this does not meet the bit-rate requirement, then some coding compression is used, such as n-phase and/or n-level amplitude signalling and/or n-pattern sequences using RLL.
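The Shannon-Hartley limit referred to here is C = B·log₂(1 + S/N); a tiny sketch makes the trade-off concrete:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits/second.
    If the required bit rate exceeds C, no coding trick can help;
    if it merely exceeds the symbol rate, multi-level/multi-phase
    signalling or RLL can close the gap."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# e.g. a 3.1 kHz channel at 30 dB SNR (linear SNR = 1000):
c = shannon_capacity(3100, 1000)   # roughly 31 kb/s
```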

RLL uses digital sequential logic to convert three or more sequential bits into symbols, lowering the bandwidth and eliminating DC. For now, just imagine that some sequential logic encodes/decodes the conversion from bits to symbols to reduce the range of transition spacings.

E.g. unconstrained NRZ would be (0,∞), and RLL could be (1,3) or (2,7) to increase the minimum symbol transition interval and reduce ISI.
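As a sketch of how a (2,7) code works in practice, here is an encoder using the classic variable-length table attributed to Franaszek and used in IBM disk drives (reproduced from memory, so treat the table as an assumption). The output is NRZI: a "1" means a flux transition, and every concatenation of code words keeps at least 2 and at most 7 zeros between transitions.

```python
# Franaszek-style (2,7) RLL code table: variable-length data words
# map to fixed-rate (2 code bits per data bit) NRZI words.
TABLE_2_7 = {
    "10":   "0100",
    "11":   "1000",
    "000":  "000100",
    "010":  "100100",
    "011":  "001000",
    "0010": "00100100",
    "0011": "00001000",
}

def rll_2_7_encode(bits):
    """Greedy prefix match over the data words (the set is prefix-free).
    Raises if the input does not split into whole code words."""
    out, i = [], 0
    while i < len(bits):
        for length in (2, 3, 4):
            word = bits[i:i + length]
            if word in TABLE_2_7:
                out.append(TABLE_2_7[word])
                i += length
                break
        else:
            raise ValueError("input is not a whole number of code words")
    return "".join(out)

# rll_2_7_encode("1011000") -> "01001000000100"
# (zeros between the 1s: runs of 2 and 7 -- the d=2, k=7 constraint)
```

Compared with MFM's (1,3), the larger d=2 spacing lets the drive pack more code bits into the same flux-transition spacing, which is where the real density gain of RLL came from.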

The wider the bandwidth of the channel, the more risk of filter phase shift and data-edge jitter across different frequency patterns, such as the binary patterns 0101..., 0011..., 000111..., and 000000100000.

When filter jitter or ISI is combined with noise, the bit error rate degrades drastically.
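One simplified way to model that combination (my own illustration, not from the original post) is to treat ISI and jitter as closing the eye diagram to a fraction of its full amplitude, which is equivalent to an SNR penalty:

```python
import math

def ber_bpsk(ebn0_db):
    """BER of antipodal (NRZ/BPSK) signalling in Gaussian noise:
    Pb = Q(sqrt(2*Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def ber_with_eye_closure(ebn0_db, eye_fraction):
    """Hypothetical eye-closure model: ISI/jitter that shrinks the eye
    opening to eye_fraction of full amplitude costs 20*log10(eye_fraction)
    dB of effective SNR, so the error rate rises sharply."""
    penalty_db = 20 * math.log10(eye_fraction)
    return ber_bpsk(ebn0_db + penalty_db)

clean = ber_bpsk(10)                    # around 4e-6
degraded = ber_with_eye_closure(10, 0.7)  # ~3 dB penalty, much worse BER
```

Even a 30% eye closure costs about 3 dB, which on the steep part of the BER curve can mean orders of magnitude more errors.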

Another source of error is comparator asymmetry, which offsets the decision threshold and distorts pulse widths.

Worst-case data patterns proved useful for mapping media defects at the factory and in the field. The same technology is now used in multi-level flash RAM to pack more bits per cell and reduce the noise from pattern-dependent adjacent-cell cross-talk, which requires more patterns to validate as error-free.