"value is -1 as signed" is an expected output, since 255 is -1 as considered sign number.
why i can’t see “255 as unsigned”? Why 4294967295 printed on the screen?
How can i achieve this?
Thanks
Because when you type cast it to unsigned int, the negative value wraps around to the top of the unsigned range.
The signed char range is -128 to +127, the unsigned char range is 0 to 255, and the unsigned int range is 0 to (2^32)-1. So the value -1 converts to the highest unsigned int value, i.e. (2^32)-1 = 4294967295.
Try this:
char c = 250;
Convert this to unsigned int and you will get (2^32)-6, because 250 stored in a signed char is -6.
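For example, a minimal sketch assuming plain char is signed and int is 32 bits (as on a typical PC compiler):
Code:
#include <stdio.h>

int main(void)
{
    char c = 250;                      /* stored as -6 when plain char is signed */
    unsigned int u = (unsigned int)c;  /* -6 wraps to (2^32)-6 */
    printf("%u\n", u);                 /* prints 4294967290 */
    return 0;
}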
I don't understand why the compiler on the PC would behave like this; in avr-gcc, for example, it works fine and the value is the same (100, i.e. 0x64 or 0x0064), and likewise for a negative number (-100, i.e. 0x9C or 0xFF9C).
Code:
volatile char c= 100;
volatile int cc;
cc = (unsigned int)c;  // or cc = c;
the result is also the same for
Code:
volatile signed char c= 100;
volatile signed int cc;
cc = (signed int)c;  // or cc = c;
or
Code:
volatile signed char c= -100;
volatile signed int cc;
cc = (signed int)c;  // or cc = c;
Hi yanamaddinaveen;
Actually it seems strange to me; on the PC it just pads the left of the number with 1s to convert it to unsigned.
As you said, it seems:
i.e. 250 as char (in bin) is 11111010;
250 as uint (in bin) becomes 11111111111111111111111111111010 after the type cast;
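Something like this shows that padding in hex (a sketch, again assuming a signed 8-bit char and a 32-bit int):
Code:
#include <stdio.h>

int main(void)
{
    char c = 250;                        /* bit pattern 11111010 = 0xFA, value -6 */
    printf("%02X\n", (unsigned char)c);  /* FA */
    printf("%08X\n", (unsigned int)c);   /* FFFFFFFA - upper bits filled with 1s */
    return 0;
}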
But it gave me a clue;
I first type cast the char to unsigned char, and then the unsigned char to unsigned int. Then it worked!
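A minimal sketch of that double cast (255 here just stands in for whatever the library returns):
Code:
#include <stdio.h>

int main(void)
{
    char c = 255;                                         /* -1 when plain char is signed */
    unsigned int wrong = (unsigned int)c;                 /* sign-extended: 4294967295 */
    unsigned int right = (unsigned int)(unsigned char)c;  /* 255, as intended */
    printf("%u %u\n", wrong, right);
    return 0;
}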
If this is the case, then the char variable "a" in your case is considered a signed char, and that is why you get that result.
My IDE has a setting so that char means unsigned char, but I suppose the default behavior of GCC is for char to mean signed char.
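If it helps, GCC has the -fsigned-char / -funsigned-char switches to force one behavior or the other, and you can check what your compiler is actually doing from C itself (a small sketch using limits.h):
Code:
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_MIN is 0 when plain char is unsigned, negative when it is signed */
    printf("plain char is %s here\n", (CHAR_MIN < 0) ? "signed" : "unsigned");
    return 0;
}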
Hi;
Yes alexan_e;
As i define "a" as uchar then no problem. I also tried like that later on.
But in may case i receive "char" return type from a library. That is way i want to try type cast char to uint.