
Error checking/correction - Small packs

Status
Not open for further replies.

atferrari

I am linking two micros via IR, one way only, so retransmission cannot be requested from the transmitter.

I am using Sony's SIRC protocol. The speed is the same as that used by remote controls for domestic appliances.

Whatever is received corrupted should be discarded or corrected (?).

I've been reading a lot, starting with parity checks, checksums, Hamming codes, FEC, SECDED and more. :cry:

My concern: what should the balance be between the small size of the message (one, possibly two packets of commands) and robust but long, complex routines that could delay the process considerably?

While I want to keep everything simple, I also want it robust enough.

It's just a robot but I want it running properly. Any rule of thumb?

Is repeating the packets three or five times for majority voting an option?

Changing the order and quantity of packets sent can be done at any time, but my problem is not there.

Sorry if I sound vague, but I have no previous experience.

Comments / suggestions anyone? Muchas gracias

Agustín Tomás
 

Hello again atferrari,

A while ago I experimented with a wireless digital audio system. Seeing as it was RF, I needed some error correction (it was one-way comms as well) and some detection. I used (13,8) Hamming codes, implemented in pure logic (sequentially, not in parallel; the XOR tree was massive). It can correct one error in a 13-bit codeword and detect two. However, the constraints of the system were latency < 1 ms, and it all had to be done in pure logic (CPLD).
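BuriedCode's actual design was in CPLD logic, but the same (13,8) idea can be sketched in C for a micro. This is a minimal illustration, assuming the standard Hamming(12,8) parity positions plus one overall parity bit; it is not the original implementation:

```c
#include <stdint.h>

/* (13,8) SECDED sketch: Hamming(12,8) with parity bits at codeword
 * positions 1, 2, 4, 8, plus an overall parity bit at position 0.
 * Corrects any single-bit error, detects double-bit errors. */

uint16_t secded_encode(uint8_t data)
{
    uint16_t cw = 0;
    int d = 0;

    /* Data bits go in the non-power-of-two positions 3,5,6,7,9..12. */
    for (int pos = 3; pos <= 12; pos++) {
        if ((pos & (pos - 1)) == 0)
            continue;                       /* skip parity positions */
        if (data & (1u << d++))
            cw |= 1u << pos;
    }

    /* Parity bit p covers every position whose index has bit p set. */
    for (int p = 1; p <= 8; p <<= 1) {
        int parity = 0;
        for (int pos = 3; pos <= 12; pos++)
            if (pos & p)
                parity ^= (cw >> pos) & 1;
        cw |= (uint16_t)parity << p;
    }

    /* Overall parity over bits 1..12 makes the whole word even parity. */
    int overall = 0;
    for (int pos = 1; pos <= 12; pos++)
        overall ^= (cw >> pos) & 1;
    cw |= (uint16_t)overall;

    return cw;
}

/* Returns 0 = clean, 1 = single error corrected, 2 = double error
 * detected (data unreliable). Decoded byte is written to *data. */
int secded_decode(uint16_t cw, uint8_t *data)
{
    int syndrome = 0;
    for (int p = 1; p <= 8; p <<= 1) {
        int parity = 0;
        for (int pos = 1; pos <= 12; pos++)
            if (pos & p)
                parity ^= (cw >> pos) & 1;
        if (parity)
            syndrome |= p;
    }

    int overall = 0;
    for (int pos = 0; pos <= 12; pos++)
        overall ^= (cw >> pos) & 1;

    int status = 0;
    if (syndrome && overall) {         /* single error: flip it back */
        cw ^= 1u << syndrome;
        status = 1;
    } else if (!syndrome && overall) { /* error hit the overall parity bit */
        status = 1;
    } else if (syndrome && !overall) { /* two errors: detect only */
        status = 2;
    }

    uint8_t d = 0;
    int i = 0;
    for (int pos = 3; pos <= 12; pos++) {
        if ((pos & (pos - 1)) == 0)
            continue;
        if ((cw >> pos) & 1)
            d |= (uint8_t)(1u << i);
        i++;
    }
    *data = d;
    return status;
}
```

The syndrome is simply the index of the flipped bit, which is what makes Hamming codes so cheap to decode on a small micro.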

The fact that you have a PIC micro to play with, with all its memory and logic functions, means you could easily go for a more complicated scheme (compared to a 64-macrocell CPLD, the PIC micro is far superior).

The 'Hamming codes' are not fantastic in their correction capability, but the fact that they have error detection as well means you can monitor your link. For example, with the (8,4) extended Hamming code you can correct 1 error per 8-bit codeword (4 data bits + 4 check bits) and detect 2. Although it doubles the amount of data you send (and with it the exposure to errors, since you are sending more bits), it can be very useful in experimenting with your link. Just set up a test circuit at the maximum range of your link, with a wire going from Rx back to Tx: the Tx sends out its data via IR, the Rx collects it and sends whatever it received (along with any errors) back to the transmitter. Then you can just XOR the two bytes to compare them. Actually, have a counter that sends 0-255, and repeat this 255 times.
So the number of packets you send will be roughly 65,000. Increment a counter for every bit error you receive. Once it's finished, see how many errors you've got and work out the 'bit error ratio' (BER; strictly a ratio, not a 'rate'). Then try this with a crude FEC algorithm: the Rx decodes it before it's sent back to the Tx, and the Tx again compares this to what it sent. If the errors increase, then the FEC algorithm is doing more harm than good. But it should significantly reduce the errors, unless you've got a really bad IR link.
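The loop-back test above can be sketched roughly like this. The `link_send_and_echo()` function is a hypothetical stand-in for the real Tx-to-IR-to-Rx-to-wire round trip; here it is stubbed as a perfect link so the sketch is self-contained:

```c
#include <stdint.h>

/* Stand-in for the real round trip: Tx -> IR -> Rx -> wire -> Tx.
 * Stubbed as a perfect link; replace with the real hardware I/O. */
static uint8_t link_send_and_echo(uint8_t b)
{
    return b;
}

/* Number of 1-bits in a byte, i.e. bit errors when fed (sent ^ echoed). */
static int popcount8(uint8_t x)
{
    int n = 0;
    while (x) {
        n += x & 1;
        x >>= 1;
    }
    return n;
}

/* Sweep all 256 byte values 255 times, XOR sent vs. echoed bytes,
 * and return the total number of bit errors. *bits_sent receives the
 * total bits transmitted, so BER = errors / *bits_sent. */
long ber_test(long *bits_sent)
{
    long errors = 0;
    long bits = 0;

    for (int pass = 0; pass < 255; pass++) {
        for (int value = 0; value < 256; value++) {
            uint8_t echo = link_send_and_echo((uint8_t)value);
            errors += popcount8((uint8_t)(value ^ echo));
            bits += 8;
        }
    }
    *bits_sent = bits;
    return errors;
}
```

With the stub in place the error count is of course zero; over a real link, dividing the returned error count by `*bits_sent` gives the bit error ratio directly.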

Anyway, that's how I tested my link, and I got about 1 bit in 1000 in error; I don't think your link will be as bad as that, though. I also left the test running for 2 hours (about 140 million packets sent), but the error counter overflowed.

Whatever is received corrupted should be discarded or corrected (?).
This really depends on what the data represents. After all, most error-correcting algorithms have a limit: once there are more errors than they can handle, they 'correct' the wrong bits and actually introduce more errors. Sometimes discarding a packet is the only option. Having a two-way link over IR isn't all that difficult, and a 'request re-send' is all the Rx would need to send back to the Tx.

My concern: what should the balance be between the small size of the message (one, possibly two packets of commands) and robust but long, complex routines that could delay the process considerably?
This, again, is another trade-off. You can speed up many FEC algorithms by using look-up tables, but these can take up loads of memory; seeing as your Rx is portable (a robot), you probably don't want to add more memory and use up more power. It's a balance between time (uP instructions), memory, and the amount of data you send. One example of something that causes great delay is 'interleaving'. It's brilliant at spreading 'burst' errors so they can be easily corrected, but say you interleave 8 packets: all 8 must be received before any of them can be decoded, so you have a delay of 8 packet periods.
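As an illustration of the interleaving idea (a hypothetical 8-byte block, not any particular standard): write 8 bytes in as rows of a bit matrix and read them out as columns. A burst of up to 8 consecutive bad bits on the channel then lands as at most one bad bit per de-interleaved byte, which a single-error-correcting code can fix. The transpose is its own inverse, so the same routine de-interleaves:

```c
#include <stdint.h>

/* 8x8 bit-matrix transpose: out[bit] collects bit number 'bit' of
 * every input byte. Applying it twice returns the original block,
 * so the same function serves as the de-interleaver. */
void interleave8(const uint8_t in[8], uint8_t out[8])
{
    for (int bit = 0; bit < 8; bit++) {
        uint8_t column = 0;
        for (int byte = 0; byte < 8; byte++)
            if ((in[byte] >> bit) & 1)
                column |= (uint8_t)(1u << byte);
        out[bit] = column;
    }
}
```

Note the latency cost described above is visible here: nothing can come out until all 8 input bytes are in.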

One simple but surprisingly effective method I've used is 'substitution'. It's not used much for error correction (it's mainly used for synchronisation and to maintain DC balance), but it has worked for me.

At both the Tx and Rx you have a look-up table. I would start with a 4-bit to 8-bit substitution. That way the table is only 16 entries, and you can encode bytes nibble by nibble, giving 2 bytes out (I have used this for Manchester encoding as well). So you can send 16 possible bytes; pick your 16 with the greatest mutual Hamming distance. So, 00011100, 11100001, 01100010 etc...
When your receiver gets a byte, it checks it against its own look-up table. If there is no exact match, it searches for the closest one and returns the 4-bit nibble along with a number telling you how many bit errors occurred. With 16 codewords in 8 bits the minimum distance can be at best 4, so nearest-match decoding reliably corrects a single-bit error per byte and flags doubles; beyond that, the closest codeword may be the wrong one.
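A minimal sketch of that nearest-match decode in C. The 16 codewords below are one possible distance-4 set (the self-complementary, first-order Reed-Muller style bytes), chosen for illustration; they are not necessarily the values BuriedCode used:

```c
#include <stdint.h>

/* One possible 4-bit -> 8-bit substitution table with pairwise
 * Hamming distance >= 4 (assumed values, for illustration only). */
static const uint8_t kCodewords[16] = {
    0x00, 0x0F, 0x33, 0x3C, 0x55, 0x5A, 0x66, 0x69,
    0x96, 0x99, 0xA5, 0xAA, 0xC3, 0xCC, 0xF0, 0xFF
};

/* Number of 1-bits in a byte. */
static int popcount8(uint8_t x)
{
    int n = 0;
    while (x) {
        n += x & 1;
        x >>= 1;
    }
    return n;
}

/* Nearest-match decode: returns the nibble whose codeword is closest
 * to the received byte; *errs gets the distance (0 = exact match). */
uint8_t sub_decode(uint8_t rx, int *errs)
{
    int best = 0;
    int best_dist = 9;          /* larger than any 8-bit distance */

    for (int i = 0; i < 16; i++) {
        int d = popcount8((uint8_t)(rx ^ kCodewords[i]));
        if (d < best_dist) {
            best_dist = d;
            best = i;
        }
    }
    *errs = best_dist;
    return (uint8_t)best;
}
```

The returned distance doubles as the link monitor the reply describes: log it, and you get a running picture of how dirty the channel is.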

Even though this has the same number of input and output bits as the (8,4) Hamming code, it seemed to me to be far more effective at both detecting and correcting errors.

Is repeating the packets three or five times for majority voting an option?
This is an option, although you'll be sending 3x/5x the amount of data, and therefore increasing the chance that something in the total 'superpacket' is corrupted. Say a packet has a 1-in-10 chance of error; that doesn't sound bad, but it really is for important data. Send one packet and you have a 10% chance of error; send 5 and the chance that at least one copy is corrupted rises to about 41% (1 - 0.9^5). But as long as a majority of the packets agree, you can take that as correct: with independent errors, a 3-of-5 vote only fails when three or more copies are corrupted, which in this example is below 1%. It really is a careful balance; you're sending 4 extra packets, and if errors arrive in bursts long enough to hit several copies, the vote buys you much less.
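If you do go the repetition route, the vote itself is cheap. A minimal sketch, assuming three received copies of the same byte: a bitwise 2-of-3 majority needs only a few AND/OR operations, so it costs almost nothing on a PIC:

```c
#include <stdint.h>

/* Bitwise 2-of-3 majority vote: each output bit takes the value that
 * appears in at least two of the three received copies, so a bit
 * error in any single copy is voted out. */
uint8_t majority3(uint8_t a, uint8_t b, uint8_t c)
{
    return (uint8_t)((a & b) | (a & c) | (b & c));
}
```

For five copies the same idea applies per bit (count the ones, take >= 3), at the cost of a small loop instead of three logic operations.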

Anyway, just to recap, my advice would be to do a test first, to see how many errors you get on average and what kind of errors they are (bursts, or random single errors). This will tell you roughly what you need; no point in going overkill if only 1 in 3 million bits is in error.
And you could always try increasing the power of your Tx IR emitter :D
As for algorithms, it's tricky: most FEC ideas are either not very good but simple to do, or very good but incredibly complex (the Viterbi algorithm is horrible, especially on a PIC :()

Good luck, and I'm always full of ideas, most of them useless, but just ask.

BuriedCode.
 
