Can anyone please explain the following code step by step? I cannot understand it.
Code:
unsigned short myBcd2Dec(unsigned short bcd){
    /* high nibble holds the tens digit, low nibble the ones digit */
    return ((bcd >> 4)*10 + (bcd & 0x0F));
}
The above routine takes a packed BCD and returns the equivalent value as an integral type.
Note that the C standard guarantees an "unsigned short" is at least 16 bits wide (two bytes on most compilers); since a packed BCD byte occupies only the low 8 bits, the exact width does not matter here.
Code:
unsigned short BCD1 = 0x73;
unsigned short Result;
Result = myBcd2Dec(BCD1); // Result = 0x49 or 73
The BCD1 variable contains a Packed Binary Coded Decimal value of 73:
The most significant four bits (nibble) contains the integer 7 and the least significant four bits (nibble) contains the integer 3.
It can be easily seen that the Packed BCD value represents the integer 73.
Let's consider the return statement:
Code:
((bcd >> 4)*10 + (bcd & 0x0F))
First:
bcd & 0x0F masks off the upper nibble, leaving the lower nibble 0000 0011, or 3.
Second:
bcd >> 4 shifts the upper nibble down into the lower nibble position, giving 0000 0111, or 7, which is then multiplied by 10, giving 70 (0x46, or 0100 0110).
Finally:
Code:
((bcd >> 4)*10 + (bcd & 0x0F))
The two results 70 and 3 are added together resulting in 73, 0x49 or 0100 1001 which is the value returned by the routine.
Does the above discussion answer your question?
What are the requirements of your projects task? Are you dealing with Packed BCD or Unpacked BCD? What type format do you need the BCD converted?
BigDog