Thanks for your great help, FvM.
Now I am able to shift the bits. I'd like to ask some follow-up questions about fractional arithmetic on binary numbers, and I hope you can guide me again.
Currently I am working on a prediction circuit design for an ADC, which involves some binary arithmetic.
Consider this example. Take A negative: A = 1100 1010. To divide by 16 we do a right shift again (but, as you said, we also have to replicate the sign bit on the left), so A/16 = 1111 1100. Based on my design, the next operation is a subtraction: A - A/16 = 1100 1010 - 1111 1100. Note that we are throwing away the 4 least significant bits here, which truncates the quotient (for a negative A this rounds toward minus infinity, not to the nearest integer). If we don't want that, we can also keep the 4 bits with weights 2^{-1}, 2^{-2}, 2^{-3} and 2^{-4}, which turns the operation into a 12-bit subtraction. In the examples given above, that means the following (a small C sketch checking these numbers comes right after the examples):
Example 1: 0110 1101.0000 - 0000 0110.1101
Example 2: 1100 1010.0000 - 1111 1100.1010
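
To check the truncating 8-bit version in software first, here is a minimal C sketch (assuming two's complement; note that >> on signed types is implementation-defined in C, but essentially every compiler makes it an arithmetic, sign-replicating shift):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int8_t a = (int8_t)0xCA;     /* 1100 1010 = -54 */

        /* Divide by 16 with an arithmetic right shift: the sign bit
           is replicated into the four vacated positions. */
        int8_t a_div16 = a >> 4;     /* 1111 1100 = -4 (truncated toward -inf) */

        int8_t diff = a - a_div16;   /* 1100 1010 - 1111 1100 = 1100 1110 = -50 */

        printf("a        = %4d\n", a);
        printf("a/16     = %4d\n", a_div16);
        printf("a - a/16 = %4d\n", diff);
        return 0;
    }

This prints a = -54, a/16 = -4 and a - a/16 = -50 (1100 1110), matching the hand calculation above.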
Thanks for your patience in reading my long, rambling description. How is this kind of floating point implemented in hardware in practice? I really have no idea how to deal with binary numbers that have a binary point…
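
From what I understand, what I am describing above is actually fixed-point rather than true floating point: the binary point stays at a fixed position (8 integer bits, 4 fractional bits, often written Q8.4), so the point itself is pure bookkeeping and the hardware only ever performs a plain 12-bit two's-complement subtraction. A minimal C sketch of that view, again assuming two's complement (the Q8.4 name and the print_q84 helper are just for illustration):

    #include <stdio.h>
    #include <stdint.h>

    /* Print the low 12 bits of q as "iiii iiii.ffff" plus its Q8.4 value. */
    static void print_q84(int16_t q) {
        uint16_t u = (uint16_t)q & 0x0FFF;
        for (int i = 11; i >= 0; --i) {
            putchar(((u >> i) & 1) ? '1' : '0');
            if (i == 8) putchar(' ');
            if (i == 4) putchar('.');
        }
        printf("  (= %.4f)\n", q / 16.0);
    }

    int main(void) {
        int8_t a = (int8_t)0xCA;            /* 1100 1010 = -54 */

        /* In Q8.4 a stored integer q represents the real value q/16,
           so converting the 8-bit integer a to Q8.4 just appends four
           fractional zero bits (i.e. multiplies by 16).  A/16 in Q8.4
           is then simply a sign-extended to 12 bits. */
        int16_t a_q    = (int16_t)(a * 16); /* 1100 1010.0000 = -54.0   */
        int16_t a16_q  = (int16_t)a;        /* 1111 1100.1010 = -3.375  */
        int16_t diff_q = a_q - a16_q;       /* 1100 1101.0110 = -50.625 */

        print_q84(a_q);
        print_q84(a16_q);
        print_q84(diff_q);
        return 0;
    }

The point to notice is that the subtractor never sees the binary point: the 12-bit patterns 1100 1010.0000 and 1111 1100.1010 are subtracted exactly as if they were plain integers, and only the interpretation of the result (divide the integer value by 16) changes. True floating point, with a variable exponent, mantissa alignment and normalization, is far more expensive in hardware and, as far as I can tell, unnecessary for a circuit like this.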