Size of int in the 8051 microcontroller

jayanthyk192

Hi,

Can anyone please tell me the size of int on an 8-bit microcontroller (the 8051)? Can you please tell me its range as well?

Thank you.
 

If you can use printf() you could try
Code:
  printf("size of int %d\n", (int) sizeof(int));
 

hi

Normally the size of int is 16 bits.

ml
 

If the uC core is 8 bit (the case of the 8051), the int size is specified to that value.

+++
 

The C (and C++) standards only specify minimum numeric ranges, i.e.
short int - signed, minimum range -32768 to 32767
unsigned short int - unsigned, minimum range 0 to 65535
int - signed, minimum range -32768 to 32767
unsigned int - unsigned, minimum range 0 to 65535
long int - signed, minimum range -2147483648 to 2147483647
unsigned long int - unsigned, minimum range 0 to 4294967295


I have come across plenty of systems where int is 32 bits.
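
If you want to check what a particular compiler actually uses, a minimal test is to print the sizes directly. This sketch assumes printf() and some serial output are usable on your target, which is not always the case on an 8051:
Code:
  #include <stdio.h>

  int main(void)
  {
      /* sizeof gives the size in bytes for this compiler */
      printf("short: %d bytes\n", (int) sizeof(short));
      printf("int:   %d bytes\n", (int) sizeof(int));
      printf("long:  %d bytes\n", (int) sizeof(long));
      return 0;
  }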
 

hi
It also depends on which compiler you use; some compilers provide int8 and int16 types (see the fixed-width sketch below), but in the ANSI standard int is at least 16 bits.
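
For reference, this is roughly what the C99 fixed-width types look like; many 8051 compilers are C90-only, so whether <stdint.h> is available on your toolchain is an assumption you would have to check:
Code:
  #include <stdint.h>

  int8_t   a;  /* exactly 8 bits,  -128 to 127     */
  uint8_t  b;  /* exactly 8 bits,  0 to 255        */
  int16_t  c;  /* exactly 16 bits, -32768 to 32767 */
  uint16_t d;  /* exactly 16 bits, 0 to 65535      */
  int32_t  e;  /* exactly 32 bits                  */
  uint32_t f;  /* exactly 32 bits, 0 to 4294967295 */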

ml
 

See this:
C Language Keywords

"...Variables of type int are one machine-type word in length..."

It implies that the int size depends on which hardware platform (i.e. core processor) the compiler targets. In the case of the 8051 that is 8 bit.

+++
 

It can depend on the compiler. For example, on PCs the Borland Turbo C/C++ version 2 and 3 int size is 16 bits, whereas with gcc it is 32 bits. This can cause major problems when transmitting binary data between systems (e.g. using TCP or UDP packets): one has to do conversions if the sizes are different, and may also have to swap bytes if one system is little endian and the other big endian. Java gets over the problem by specifying int as 32 bits.
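
One common way around both the size and the byte-order problem is to serialise integers into a byte buffer explicitly instead of sending a raw int. A rough sketch (the function names here are only illustrative):
Code:
  #include <stdint.h>

  /* Pack a 16-bit value in big-endian ("network") byte order,
     independent of the host's int size or endianness. */
  void put_u16_be(uint8_t *buf, uint16_t value)
  {
      buf[0] = (uint8_t)(value >> 8);   /* high byte first */
      buf[1] = (uint8_t)(value & 0xFF); /* then low byte   */
  }

  uint16_t get_u16_be(const uint8_t *buf)
  {
      return (uint16_t)(((uint16_t)buf[0] << 8) | buf[1]);
  }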
 

As horace1 pointed out, this is [only] compiler dependent. There's nothing in the hardware tying a particular uC to a specific size of int.

That being said, all 8-bit uCs of any architecture and flavour, on all the compilers I have ever used (there are a few of each), have had a 16-bit int (-32768 to 32767).
I'd be prepared to say that the uncertainty around this issue is only theoretical and you can rely on int being 2 bytes on an 8-bit compiler, but it's good practice to define user types of known sizes and only use those types in embedded code (I only use U1, U2, U4, S1, S2, S4 for unsigned and signed integers of 1, 2 and 4 bytes, respectively).

In this case, if you ever port your code to a wider (bit-wise) platform, you'll only need to change the type definitions in one single place...
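
A sketch of what such type definitions might look like; the underlying types assume a compiler where char is 8 bits, int is 16 bits and long is 32 bits (true for the common 8051 compilers, but worth verifying on your own toolchain):
Code:
  /* project-wide integer types of known size */
  typedef unsigned char  U1;   /* 8-bit unsigned  */
  typedef signed   char  S1;   /* 8-bit signed    */
  typedef unsigned int   U2;   /* 16-bit unsigned */
  typedef signed   int   S2;   /* 16-bit signed   */
  typedef unsigned long  U4;   /* 32-bit unsigned */
  typedef signed   long  S4;   /* 32-bit signed   */
Porting to a 32-bit platform then only means changing these typedefs (e.g. U2 would become unsigned short).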

Arthur

PS: By the way, type ranges can be probed and tested for by including <limits.h> and using the macros defined there (such as INT_MIN and INT_MAX, for instance).
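
For example, assuming printf() is usable on the target, the actual range of int for the compiler in use can be printed like this:
Code:
  #include <stdio.h>
  #include <limits.h>

  int main(void)
  {
      /* These macros expand to the real limits for this compiler */
      printf("INT_MIN  = %d\n", INT_MIN);
      printf("INT_MAX  = %d\n", INT_MAX);
      printf("UINT_MAX = %u\n", UINT_MAX);
      return 0;
  }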
 

Thank you all for the replies.

I just tried a program in which I set the delay time to 30,000 as an int, and the delay works properly. I verified it by changing the delay from 200 to 1,000 to 10,000 to 30,000; it worked fine. Does this imply int is not 8 bits but more?
 

It must be a minimum of 16 bits (maximum 32767). Try changing it to 40000 and see what happens; if that fails, change it to unsigned int (maximum 65535 for 16 bits).
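
For illustration, assuming the delay is a simple busy-wait loop like the sketch below (not the original poster's actual code), the parameter type is what sets the usable range:
Code:
  /* Busy-wait delay - a sketch only.
     With a signed 16-bit int parameter, passing 40000 typically wraps
     to a negative value; unsigned int allows counts up to 65535. */
  void delay(unsigned int count)
  {
      unsigned int i;
      for (i = 0; i < count; i++)
      {
          ;  /* burn time (a real delay loop may also toggle a port
                pin or call a no-op so the optimiser keeps it) */
      }
  }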
 

I'm using an LED to observe the time delay. I set the delay to 35,000; the LED blinked, but at a constant rate beyond 35,000, so it must be the limit you specified, 32767. I'll use unsigned.
 
