
Fixed point Math for FFT, DSP in FPGA


GhostInABox

Hi all,

I have done quite a bit of searching on the net and a bit of VHDL coding to model and learn how to use fixed-point numbers. I understand the basic concepts (or do I?). But at a recent interview I got my ass handed to me by the interviewer, so I am thinking that I really need to understand fixed-point numbers from start to finish, and maybe how they are used in a few places.


Basically, what I need to understand (intuitively, with practice) is how, in such a system, to extract the correct bit sequence that makes sense after all the math operations are done on the input. I need to know the Q format, sums, addition, multiplication, division, precision, rounding issues, etc. Basically, everything that is needed to successfully design a fixed-point algorithm.

Is there some definitive book or resource that anyone has found useful and would like to share with me? I do have Matlab at my disposal; could anyone point to any material that I can use?


Thanks in advance
 

Fixed point numbers are commonly called integers (whole numbers). They are called fixed point because the decimal point is implicitly always at the rightmost position.

Fixed point numbers are most useful in counting. What was the exact question asked in the interview?
 

Fixed point numbers are commonly called integers (whole numbers). They are called fixed point because the decimal point is implicitly always at the rightmost position.

Fixed point numbers are most useful in counting. What was the exact question asked in the interview?

I have a working understanding of it. There were a lot of questions, which are difficult to reiterate here, but they go to the heart of fixed-point math implementation.
 

Fixed point numbers are commonly called integers (whole numbers). They are called fixed point because the decimal point is implicitly always at the rightmost position.
Not particularly. An integer (0 fractional bits) is only a special case of fixed point (scaling factor = 1). General fixed-point numbers have a specific number of fraction bits to the right of the decimal point and an associated implicit scaling factor. See https://en.wikipedia.org/wiki/Fixed-point_arithmetic
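To make that concrete, here is a minimal Python sketch (my own illustration, not from the linked article) of a value stored in Qm.n format together with its implicit scaling factor 2^n:

```python
# Illustration: a real value stored as a Qm.n fixed-point integer.
# The stored integer is round(value * 2**n); the scaling factor 2**n is implicit.

def to_fixed(value, n_frac):
    """Quantize a real value to an integer with n_frac fractional bits."""
    return round(value * (1 << n_frac))

def to_real(stored, n_frac):
    """Recover the real value by applying the implicit scaling factor."""
    return stored / (1 << n_frac)

x = to_fixed(3.14159, 12)    # Q4.12: 4 integer bits, 12 fractional bits -> 12868
print(x, to_real(x, 12))     # 12868 3.1416015625

# A plain integer is the special case with 0 fractional bits (scaling factor 1).
print(to_fixed(7, 0))        # 7
```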
 
" Practical Considerations in Fixed-Point FIR Filter Implementations " by Randy Yates is a good introductory material.

In that material, numbers are scaled by a power of 2, like 0.7 * 2^(N-1).
But I do see posts online saying the scaling factor is (2^(N-1) - 1). I find it hard to keep track of the fractional point after scaling by (2^(N-1) - 1).
It seems to me that with a scaling factor of 2^(N-1) - 1, the scaled number no longer follows the Q format (QM.N).
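To show numerically what I mean, here is a small sketch (my own example, N = 16):

```python
# Compare scaling 0.7 by 2**(N-1) versus (2**(N-1) - 1), with N = 16 (my example).
N = 16
x = 0.7

q1_15 = round(x * 2**(N - 1))          # 22938, plain Q1.15 (scaling factor 32768)
other = round(x * (2**(N - 1) - 1))    # 22937, scaled by 32767 instead

print(q1_15, q1_15 / 2**(N - 1))       # 22938 0.699951171875
print(other, other / (2**(N - 1) - 1)) # 22937 0.69997...
# Only the power-of-two factor puts the binary point at a bit boundary,
# which is why the 32767-scaled value does not fit the QM.N picture for me.
```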
 
It is described here: http://www.xilinx.com/support/answers/5366.html

and in one of my posts in which you also engaged...
https://www.edaboard.com/threads/354312/#post1515959

Take the binary sequence 011, for example; the imaginary point is to the right of the MSB.
The actual value is 0.75; after scaling, the integer value is 3. Thus the scaling factor is 2^2 = 4.
It's convenient to view the sequence as 0.11.

But if the actual value (be it 0.75, or the binary form 0.11) is scaled by 2^(N-1) - 1, where does the binary point fall?
 

Look sharp. The link isn't talking about fixed-point scaling factors; it's calculating the maximum (positive) value in a specific fixed-point format. Then it normalizes the FIR coefficients and scales them with an arbitrary factor so that they utilize the number range completely.

The method is problem specific because the author assumes that FIR coefficients can be treated as relative magnitudes; it has nothing to do with general properties of fixed-point numbers.

By the way, if the normalized FIR coefficients have a maximum value of -1 but no +1, the arbitrary factor could be 2^n as well.
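A rough sketch of what I understand that normalization step to be (hypothetical coefficients, not the ones from the Xilinx answer record):

```python
# Sketch of the normalization described above: scale a coefficient set so its
# largest magnitude maps to the largest positive 16-bit value.
# Hypothetical coefficients, not taken from the Xilinx answer record.
coeffs = [-0.031, 0.109, 0.297, 0.422, 0.297, 0.109, -0.031]

N = 16
max_pos = 2**(N - 1) - 1                        # 32767
scale = max_pos / max(abs(c) for c in coeffs)   # arbitrary, problem-specific factor

quantized = [round(c * scale) for c in coeffs]
print(quantized)   # largest tap becomes 32767; only the coefficient ratios are preserved
```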
 

Hi,

Many years back, when I started getting some DSP experience, I was told that all numbers represent values in the range -1 (inclusive) ... 0 ... +1 (exclusive).
This is independent of bit count.
I think this is what (2^(N-1) - 1) means.
(Is it called "0.N" ... with sign?)

This has one advantage: multiplications don't change the scaling factor.
And the software can be uniform.

Otherwise, for example, if you have a 4.12 representation and you multiply it with a 4.12 value, then the result is 8.24.
The position of the decimal point is not constant, neither from the right side nor from the left side.

0.16 x 0.16 gives a 0.32. It has a constant zero on the left side. You may truncate the right 16 bits (= use only the upper two bytes of the result) to get 0.16 again. You have to consider what resolution you need.
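A small sketch of the Q4.12 case in plain integer arithmetic (my own numbers; truncation is a simple shift, no rounding or saturation):

```python
# Q4.12 * Q4.12 -> Q8.24, then drop 12 fractional bits to return to Q4.12.

def q(value, n_frac):
    return round(value * (1 << n_frac))

a = q(1.5, 12)                   # Q4.12 -> 6144
b = q(2.25, 12)                  # Q4.12 -> 9216

prod_q8_24 = a * b               # scaling factors multiply: 2**12 * 2**12 = 2**24
print(prod_q8_24 / 2**24)        # 3.375

prod_q4_12 = prod_q8_24 >> 12    # keep the upper bits, i.e. back to Q4.12
print(prod_q4_12 / 2**12)        # 3.375 (exact here; in general truncated)
```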

Klaus
 

Many years back, when I started getting some DSP experience, I was told that all numbers represent values in the range -1 (inclusive) ... 0 ... +1 (exclusive).
This is independent of bit count.
I think this is what (2^(N-1) - 1) means.
It does not. It's an arbitrary scaling of constants including +1, not intended to be compatible with general fixed-point numbers. E.g., if you have real constants in the range -1..+1 and want them to fit the Q0.9 format exactly, you don't care about the absolute value, just the constant ratios; you can multiply them by an arbitrary factor 511/512 and get a -0.998...+0.998 range.
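A tiny numerical sketch of that 511/512 scaling (my own example values):

```python
# Fit constants from the closed range [-1, +1] into signed Q0.9
# (9 fractional bits, stored integers -512..+511).
consts = [-1.0, -0.5, 0.25, 1.0]

factor = 511 / 512                   # arbitrary shrink so +1 becomes representable
stored = [round(c * factor * 2**9) for c in consts]

print(stored)                        # [-511, -256, 128, 511] -- all inside -512..511
print([s / 2**9 for s in stored])    # [-0.998046875, -0.5, 0.25, 0.998046875]
```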
 

It's an arbitrary scaling of constants including +1, not intended to be compatible with general fixed-point numbers. ... You don't care about the absolute value, just the constant ratios.
Thanks for pointing this out. It has confused me for a long time.

Using the QM.N format helps me decide which bits to throw away when bit widths are larger than wanted. I can truncate some fractional bits, and some bits that are just sign extension.

If using this 2^(n-1) - 1 method, it seems all the numbers are now INTEGERS and there's no way for me to ditch the lower bits.
So with this method, the problem is to avoid overflow, not resolution (because the lower bits are always retained)?
 

The "avoid overflow" idea behind the 2^n - 1 scaling is only one of several points to be considered when scaling FIR coefficients. An opposite consideration says that coefficient rounding involves inaccuracies anyway and it may be more relevant to optimize the scaling of the total coefficient set according to specific quality criteria. E.g. in case of a low pass filter, you are interested that the sum of coefficients is equal 1.0*2^n for zero DC error.
 
