I have a simple subsystem consisting of four-digit pushwheel switches, an FPGA, and a 16-bit DAC. The DAC's maximum output is 5 V, corresponding to 65535 counts.
When I set 5000 on the pushwheel switches, the DAC output must read 5.000 V. I have to use an FPGA for this, not a microcontroller. The FPGA must convert the BCD digits 5000 (20480 when the bit pattern is read as plain binary) to 65535 for the proper output. Please suggest a method to do so.
To implement this in an FPGA, the values should be converted to fixed-point binary first: convert the BCD input into a binary value, then multiply it by the fixed-point binary scale factor.
Hi Barry,
How do I scale a value by 3.2 in an FPGA? Anyway, I came up with the following roundabout solution:
1. Convert BCD 5000 to binary 5000
2. The scaling factor is now 65535/5000 = 13.107. So I multiply 5000 * 1307 = 65535000
3. Convert 65535000 to BCD and then knock off the last 12 bits (three BCD digits) to get back 65535.
Didn't you note that your logic above simply returns the original value: 65535 = (65535/5000) * 5000?
Didn't you note that you dropped the digit "1" along the way (1307 instead of 13107)?
Didn't you note that you dropped the decimal point (the factor is 13.107, not 13107)?
Whenever you scale by a floating-point value on a platform with no native floating-point support, consider the multiply-shift approach. It has the drawback of adding an intrinsic error, so check that it meets your accuracy requirement. For example (in pseudo-code):
However, since we should use only integers, 819.2 has to be replaced by 819, adding an overall math error of 0.02% ((819.2 - 819)/819.2), which IMO is acceptable for most applications.
Your proposed method does not work for other values. For example, the BCD pattern for 2500 reads as 9472 in binary. Now 9472 * 819 / 256 yields 30303, so the DAC outputs 2.311 V. That is a huge error.
You completely missed the point; review the proposed method step by step and check what the quoted question actually refers to.
The key point of this approach is the replacement of a division operation by a shift-right operation.
BTW, by your last reasoning a scale factor is effectively applied twice: 30303 ≈ 2500 * (9472/2500) * 819/256, where the factor 9472/2500 ≈ 3.79 appears because the BCD pattern of 2500 was read as the binary number 9472.
Keep in mind that a power-of-two number in the denominator (Y/2^n) returns the same value that a right shift (Y >> n) would.
Your mistake lies in the BCD-to-binary conversion (edit: corrected values):
"BCD" value of "5000" = 0101 0000 0000 0000 (bin) should be formed into 5000 decimal = 1388 (hex) = 0001 0011 1000 1000 (bin)
"BCD" value of "2500" = 0010 0101 0000 0000 (bin) should be formed into 2500 decimal = 09C4 (hex) = 0000 1001 1100 0100 (bin)
BCD is already a "decimal" representation, so it looks identical to the decimal number: "1234" in BCD reads as 1234 decimal. But the binary (hex) representation looks different.