matrixofdynamism

BCD takes more bits to represent a quantity than plain binary does. However, I have come to know that BCD has certain advantages that cause it to be used in most calculators and finance applications. It has something to do with BCD representing numbers in a way that mirrors radix 10 (decimal).
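To make the bit-count point concrete, here is a minimal sketch (the `to_bcd` helper is just an illustration, not a standard library function). It shows that packed BCD spends 4 bits per decimal digit, so it needs more bits than plain binary for the same value, and it uses Python's `decimal` module to hint at why digit-based arithmetic appeals to finance applications: decimal fractions like 0.1 are exact there, but not in binary floating point.

```python
from decimal import Decimal

def to_bcd(n):
    """Pack each decimal digit of n into its own 4-bit nibble (packed BCD)."""
    return "".join(format(int(d), "04b") for d in str(n))

# Bit-count comparison for the same value.
n = 255
print(format(n, "b"))   # plain binary: 11111111 (8 bits)
print(to_bcd(n))        # packed BCD:   001001010101 (12 bits, one nibble per digit)

# Why radix-10 representation matters for money: binary fractions cannot
# represent most decimal fractions exactly, but decimal arithmetic can.
print(0.1 + 0.2 == 0.3)                                    # False (binary float)
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```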
Is BCD really superior to normal binary? Why? If so, then why do we have normal binary arithmetic units in hardware rather than binary-coded decimal arithmetic units?