Question

What do 16-bit, 32-bit and 64-bit architectures mean in case of Microprocessors and/or Operating Systems?

In case of Microprocessors, does it mean maximum size of General Purpose Registers or size of Integer or number of Address-lines or number of Data Bus lines or what?

What do we mean by saying "DOS is a 16-bit OS", "Windows in a 32-bit OS", etc...?


Solution

The difference comes down to the width of the data a general-purpose register can operate on in a single instruction: a 16-bit CPU operates on 2 bytes at a time, a 64-bit CPU on 8 bytes. Wider registers let each instruction operate on more data, which often increases a processor's throughput per clock cycle.
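As a rough illustration (a hypothetical C sketch; the function names and buffer size are made up for this example), the same 16 bytes of work takes four times as many register-wide operations at 16 bits as at 64:

#include <stdint.h>
#include <stddef.h>

/* XOR two 16-byte buffers. A 16-bit machine needs 8 register-wide
   operations (2 bytes each); a 64-bit machine needs only 2 (8 bytes each). */
void xor_as_16bit(uint16_t *dst, const uint16_t *src) {
    for (size_t i = 0; i < 8; i++)   /* 8 ops x 2 bytes = 16 bytes */
        dst[i] ^= src[i];
}

void xor_as_64bit(uint64_t *dst, const uint64_t *src) {
    for (size_t i = 0; i < 2; i++)   /* 2 ops x 8 bytes = 16 bytes */
        dst[i] ^= src[i];
}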

Other answers

My original answer is below, if you want to understand the comments.

New Answer

As you say, there are a variety of measures. Luckily, for many CPUs a lot of the measures are the same, so there is no confusion. Let's look at some data.

[Table image from the original answer: width measures for several common CPUs, with general-purpose register size highlighted in green and instruction bus width in yellow]

As you can see, many columns are good candidates. However, I would argue that the size of the general purpose registers (green) is the most commonly understood answer.

When a processor's registers vary widely in size, it is often described in more detail, e.g. the Motorola 68k being described as a 16/32-bit chip.

Others have argued it is the instruction bus width (yellow), which also matches in the table. However, in today's world of pipelining I would argue this is a much less relevant measure for most applications than the size of the general purpose registers.


Original answer

Different people can mean different things, because as you say there are several measures. So for example someone talking about memory addressing might mean something different to someone talking about integer arithmetic. However, I'll try to define what I think is the common understanding.

My take is that for a CPU it means "The size of the typical register used for standard operations" or "the size of the data bus" (the two are normally equivalent).

I justify this with the following logic. The Z80 has an 8-bit accumulator and an 8-bit data bus, while having 16-bit memory-addressing registers (IX, IY, SP, PC) and a 16-bit memory address bus. And the Z80 is called an 8-bit microprocessor. This means people must normally mean the main integer arithmetic size, or the data bus size, not the memory addressing size.
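To make that split concrete, here is a toy C model of such a CPU (purely illustrative; every name here is invented, and real Z80 behaviour is far richer):

#include <stdint.h>

/* A toy model of a Z80-style CPU: 8-bit data, 16-bit addresses. */
typedef struct {
    uint8_t  a;              /* 8-bit accumulator: the "8-bit" in "8-bit CPU" */
    uint16_t pc, sp;         /* 16-bit address registers */
    uint8_t  mem[1u << 16];  /* 16-bit address bus -> 64 KiB of memory */
} ToyZ80;

/* Each memory access still moves only one byte at a time. */
uint8_t fetch(ToyZ80 *cpu) {
    return cpu->mem[cpu->pc++];
}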

It is not the size of instructions, as the Z80 (again) has 1-, 2-, and 3-byte instructions, though of course the multi-byte ones are read in multiple reads. In the other direction, the 8086 is a 16-bit microprocessor and can read 8-bit or 16-bit instructions. So I would have to disagree with the answers that say it is instruction size.

For Operating systems, I would define it as "the code is compiled to run on a CPU of that size", so a 32bit OS has code compiled to run on a 32 bit CPU (as per the definition above).

How many bits a CPU "is" refers to its instruction word length. On a 32-bit CPU the instruction word is 32 bits, meaning that this is the width the CPU can handle as instructions or data, often resulting in buses of that width. For the same reason, registers are usually the size of the CPU's word length, though you often have larger registers for special purposes.

Take the PDP-8 computer as an example. It was a 12-bit computer: each instruction was 12 bits long, and to handle data of the same width the accumulator was also 12 bits. But what made the PDP-8 a 12-bit machine was its instruction word length. It had twelve switches on the front panel with which it could be programmed, instruction by instruction.

This is a good example to break out of the 8/16/32 bit focus.

The bit count is also typically the size of the address bus. It therefore usually indicates the maximum addressable memory.
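A quick back-of-the-envelope check of that claim, as a C one-liner per width:

#include <stdio.h>

int main(void) {
    /* An n-bit address bus can select 2^n distinct byte addresses. */
    printf("16-bit addresses: %llu bytes (64 KiB)\n", 1ULL << 16);
    printf("32-bit addresses: %llu bytes (4 GiB)\n", 1ULL << 32);
    /* 2^64 does not fit in a 64-bit counter itself; 64-bit addressing
       gives 18,446,744,073,709,551,616 bytes (16 EiB) in principle. */
    return 0;
}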

There's a good explanation of this at Wikipedia:

In computer architecture, 32-bit integers, memory addresses, or other data units are those that are at most 32 bits (4 octets) wide. Also, 32-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size. 32-bit is also a term given to a generation of computers in which 32-bit processors were the norm.

Now let's talk about OS.

With OSes, this is far less bound to the actual "bitness" of the CPU; it usually reflects how opcodes are assembled (for which word length of the CPU), how registers are addressed (you can't load a 32-bit value into a 16-bit register), and how memory is addressed. Think of the OS as a completed, compiled program: it is stored as binary instructions and therefore has to fit the CPU's word length. Task-wise, it has to be able to address the whole memory, otherwise it couldn't do proper memory management.

But what it comes down to is this: whether a program is 32-bit or 64-bit (an OS is essentially a program here) is a matter of how its binary instructions are stored and how registers and memory are addressed. All in all, this applies to all kinds of programs, not just OSes. That's why you have programs compiled for 32-bit or 64-bit.
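(For example, on an x86-64 GCC toolchain the same C source can usually be built either way: the -m32 and -m64 flags select the target width, assuming the 32-bit support libraries are installed.)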

The definitions are marketing terms more than precise technical terms.

In fuzzy technical terms, they are more related to architecturally visible widths than to any real implementation register or bus width. For instance, the 68008 was classed as a 32-bit CPU, but had 16-bit registers in the silicon, only an 8-bit data bus, and 20-odd address bits.

See http://en.wikipedia.org/wiki/64-bit#64-bit_data_models: the data models define what bitness means at the language level.
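For reference, the two common 64-bit data models described there differ as follows (sizes in bytes):

Model                       int   long   long long   pointer
LP64  (most 64-bit Unix)     4      8        8          8
LLP64 (64-bit Windows)       4      4        8          8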

The "OS is x-bit" phrase usually means that the OS was written for x-bit cpu mode, that is, 64-bit Windows uses long mode on x86-64, where registers are 64 bits and address space is 64-bits large and there are other distinct differences from 32-bits mode, where typically registers are 32-bits wide and address space is 32-bits large. On x86 a major difference between 32 and 64 bits modes is presence of segmentation in 32-bits for historical compatibility.

Usually the OS is written with CPU bitness in mind, x86-64 being a notable example of decades of backwards compatibility - you can have everything from 16-bit real-mode programs through 32-bit protected-mode programs to 64-bit long-mode programs.

Plus there are different ways to virtualise, so your program may run as if in 32-bit mode while in reality it is not being executed by an x86 core at all.

When we talk about 2^n-bit architectures in computer science, we are basically talking about register size, address bus width, or data bus width. The basic concept behind the term is that the processor can use 2^n bits to address or transport data 2^n bits at a time.

As far as I know, technically, it's the width of the integer pathways. I've heard of 16-bit chips that have 32-bit addressing. However, in reality, it is the address width: sizeof(void*) gives a 16-bit pointer on a 16-bit chip, a 32-bit pointer on a 32-bit chip, and a 64-bit pointer on a 64-bit chip.
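This is easy to check for yourself; a minimal sketch (the exact results depend on your platform's data model):

#include <stdio.h>

int main(void) {
    /* On a typical LP64 64-bit platform this prints 4 and 8;
       on a typical 32-bit platform it prints 4 and 4. */
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(void*) = %zu\n", sizeof(void *));
    return 0;
}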

This leads to problems because C and C++ allow conversions between void* and integral types, and it's only safe if the integral type is large enough (the same size as the pointer). This led to all sorts of unsafe code such as:

void* p = something;  /* some pointer value */
int i = (int)p;       /* truncates p if int is narrower than void* */

Which will horrifically crash and burn in 64-bit code (it happens to work on 32-bit) because void* is now twice as big as int.
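For what it's worth, the portable fix is the integer type the standard provides for exactly this purpose (a sketch; intptr_t is formally optional, but present on mainstream platforms):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int x = 42;
    void *p = &x;

    /* intptr_t (from <stdint.h>) is wide enough to round-trip a
       pointer on both 32-bit and 64-bit targets. */
    intptr_t i = (intptr_t)p;
    void *q = (void *)i;

    printf("%d\n", *(int *)q);  /* prints 42 */
    return 0;
}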

In most languages, you have to work quite hard before you need to care about the width of the system you're working on.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow