ionp
Member level 3

I have to meet a poorly defined requirement for "12-bit resolution" on a power measurement, computed from one voltage sample and one current sample that will be multiplied together. I don't think I can get away with averaging to meet the requirement.
I would like to use the 10-bit A/D converters built into the micro I am using, but I am having a hard time coming up with an iron-clad rationale that they will meet the goal without resorting to specsmanship tactics. For example, if I defined resolution as the largest value that can be represented divided by the smallest, and then converted that ratio to binary, I could make a case that I have 20-bit resolution, but this seems deceptive because there will be missing codes in the result.
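To put a number on the missing-codes point, here is a quick brute-force sketch (plain Python on a PC, not the micro's firmware; it assumes both converters use their full 0-1023 code range) that counts how many distinct values the 10-bit x 10-bit product can actually take:

```python
# Sketch only: enumerate every possible product of two 10-bit ADC codes.
# The product spans 0 .. 1023*1023, which needs 20 bits to hold, but most
# of the 2**20 codes in that range can never occur -- the "missing codes".
distinct_products = {v * i for v in range(1024) for i in range(1024)}

print(f"bits needed to hold the product: {(1023 * 1023).bit_length()}")
print(f"distinct output codes: {len(distinct_products)} of {2**20} possible")
```

Running this shows the distinct-code count comes out far below 2^20, which is why calling the result "20-bit resolution" feels like specsmanship to me.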
Does anyone have guidance in this area, or perhaps have an internationally recognized standard that could help me make or break my case?
Thanks, all.