
[General] 16 Bit Binary To BCD Conversion Math!


PRATIK.PATI

Convert an input 16-bit binary number to BCD and display it on an LCD (8051).

I have assembly code from the internet that works fine, but I do not understand the math behind it. I also found nothing on the internet explaining the math behind 16-bit binary to BCD conversion.

I found a link on how to convert 8-bit binary to BCD, and it says the 16-bit and 32-bit conversions follow the same pattern. Although it's easy to understand the 8-bit case, I found it quite difficult to work out the pattern for 16-bit and 32-bit numbers.

Link: https://www.eng.utah.edu/~nmcdonal/Tutorials/BCDTutorial/BCDConversion.html

Kindly help me understand the math behind the conversion.
Thank you!
 

You can find a more verbose explanation of the so-called "double dabble" algorithm, including a C implementation, here: https://en.wikipedia.org/wiki/Double_dabble
It works for any word width.
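
The math is simpler than it looks. Shifting the whole result left one bit doubles it. If the result is kept as BCD digits, a digit of 5 or more would double to 10 or more, which has to carry into the next decimal digit; adding 3 before the shift (i.e. 6 after doubling) produces exactly that carry. Repeat once per input bit, feeding the bits in from the top, and the BCD digits build up the decimal value. Here is a minimal C sketch for the 16-bit case (my own illustration of the algorithm, not the Wikipedia code; the function name and the test value in main() are arbitrary):

Code:
#include <stdint.h>
#include <stdio.h>

/* Double dabble: convert a 16-bit binary value to five BCD digits.
   bcd[0] is the ten-thousands digit, bcd[4] the ones digit. */
void bin16_to_bcd(uint16_t bin, uint8_t bcd[5])
{
    int i, j;

    for (j = 0; j < 5; j++)
        bcd[j] = 0;

    for (i = 15; i >= 0; i--) {
        /* Correction step: any digit >= 5 gets +3 so that the
           coming doubling carries into the next decimal digit. */
        for (j = 0; j < 5; j++)
            if (bcd[j] >= 5)
                bcd[j] += 3;

        /* Shift the whole digit array left one bit; bit 3 of each
           digit carries into the digit above, and the next input
           bit enters the ones digit. */
        for (j = 0; j < 4; j++)
            bcd[j] = ((bcd[j] << 1) | (bcd[j + 1] >> 3)) & 0x0F;
        bcd[4] = ((bcd[4] << 1) | ((bin >> i) & 1)) & 0x0F;
    }
}

int main(void)
{
    uint8_t d[5];
    bin16_to_bcd(12345, d);
    printf("%u%u%u%u%u\n", d[0], d[1], d[2], d[3], d[4]); /* prints 12345 */
    return 0;
}

For 32 bits you only loop over 32 bits and keep ten digits; the correction and shift steps are unchanged, which is why the 8-, 16- and 32-bit versions "follow the same pattern". Your 8051 assembly most likely does the same thing, typically shifting the input through the carry flag and doubling each packed-BCD byte with something like ADDC/DA A instead of an explicit compare-and-add-3.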

"Although it's easy to understand the 8-bit case, I found it quite difficult to work out the pattern for 16-bit and 32-bit numbers."
I guess you didn't actually understand it. Otherwise you wouldn't ask.
 