
Why are microcontrollers 8, 16, or 32 bit and not 7, 15, or 20 bit?


chandu.kurapati

Full Member level 3
Full Member level 3
Joined
Oct 31, 2013
Messages
186
Helped
5
Reputation
10
Reaction score
5
Trophy points
1,298
Location
Bengaluru, India
Visit site
Activity points
2,778
Hi,

I am a little confused about why microcontrollers and processors are 8-bit, 16-bit, and 32-bit, and not 7-bit or 15-bit.

Which factors are considered when choosing a processor of a specific bit width? What are the major criteria for selecting a processor or controller?

Can anyone explain this in depth?

Thanks & Regards,

Chandu.Kurapati.
 

I am not exactly sure, but these are digital devices which have only two states, 0 or 1. Two states means everything is built on powers of two, so I think that factor of two is responsible:

2^2 = 4-bit processor
2^3 = 8-bit processor
2^4 = 16-bit processor
2^5 = 32-bit processor

I think it's because these are digital devices having only two states. This is my view; please correct me if I am wrong.
 
The first computers used many different bus widths, but the numbers we use today were chosen because they are the minimum (and multiples of the minimum) size needed to hold a single decimal digit. The numbers 0-9 require four bits, so that is what was chosen. The remaining bit combinations were used to extend 0-9 to 0-F, making hexadecimal. It was also convenient for the English alphabet, which needed 7 bits to cater for all lower- and upper-case characters; using a double 4 bits was economical, as only 1 bit was unused, and that spare bit was quickly adopted for graphics characters anyway.

As for why 8 bits became 16, became 32, became 64... it is the most sensible progression to make, given that parts were made in standard 8-bit widths and handling data in odd-numbered chunks is less code-efficient.

Brian.
 
Simply because if you already have 8-bit-wide memory, peripherals, and bus drivers, it would make little sense to make new, smaller ones just to build an unusually sized system. By using two 'adjacent' 8-bit buses you make a 16-bit one. The same applies each time you double the bus width.

From a software point of view, the instruction set of a processor is much easier to handle if the bus is a standard size. For example, you can split a 16-bit value into two 8-bit values in order to send it over a parallel or serial port as two bytes. If you were to use, say, a 13-bit bus, you would have to make provision to divide it into 5-bit and 8-bit parts.

This does not mean other bus sizes are not used; the 16F PIC microcontrollers, for example, have a 14-bit instruction bus, but they use a different architecture that still allows 8-bit data to be handled efficiently.

Brian.
 

Another view is that, early on, the decimal/binary number representation in calculators/computers was the 4-bit nibble, and the text representation, with all characters and special signs, was 8 bits: a byte. Thus number crunching and text processing (the favoured tasks of the early computer age) were best done in multiples of 4 or 8 bits. The program instruction length could be, and often was, different; however, if the data memory chips were already organized in multiples of 8, it made sense to use the same organization as a standard.

Enjoy your design work!
 

I think that it's more efficient to use the whole memory cell and not waste it. In an 8-bit cell almost every character from most languages can be stored. Most alphabets contain around 30 characters (5 bits would be OK), but then you need numbers, symbols, spaces, null characters, and every alphabet should be compatible.
 

Hello,
It's a good question; the older among us may remember that in the 1960s there were 15- or 30-bit computers, such as the Sperry Univac 1230.
At that time, engineers used octal notation to simplify the manipulation of binary numbers, and therefore grouped bits in threes (instead of fours as in hexadecimal). ASCII was not used; instead there was a 6-bit alphanumeric code (2 octal characters).
A very long time ago… :)
 

There must have been some method in the madness; even the latest incarnations of C++ still allow octal as a radix!

If only we had 16 fingers - life for programmers would be so much easier. :smile:

Brian.
 
