What is the "word" of a CPU?

12

In my Operating Systems class the teacher used a term that left me a bit confused: the word of a CPU (Central Processing Unit). He did not explain it, saying only that it may have different sizes in bits.

Question

I would like to know: what is a word, and what relationship does it have with the CPU?

    
asked by anonymous 21.02.2017 / 00:09

2 answers

11

Initial definition

A word is the natural unit of data of an architecture (processor).

Just as in natural human language the letter is the smallest unit, the syllable is the first grouping of that smallest unit, and the word comes next in size, in the computer the bit is the smallest unit, the byte is the smallest grouping (okay, this is not entirely precise), and then we have the word. But while in language words vary in size, in current computer architectures all words have the same number of syllables (bytes), and since the syllables are also fixed in size, the same number of letters (bits).

When we speak of a word we are talking about a piece of data with the fixed size / length / bit width that the architecture works with best.

In general we are talking about the size of the processor's registers, at least the main ones. There may be other secondary registers for specific tasks, such as floating-point calculation, vectors, encryption, etc.

Sizes

It can range from 1 bit (rare) to 512 (also rare; it may go higher in the future). The most common today is 64 bits; 32 is also quite common. On small devices 16 or 8 bits still have their place. Nothing prevents odd sizes; a word does not have to be a power of 2, although that is the most common.

It is common, but not mandatory, for the word to also determine the theoretical maximum memory address size. If the largest possible address has 32 bits, it is better for the processor to have a 32-bit register so that a pointer fits into a single register and can be accessed simply and quickly. Older architectures and some very simple ones (embedded devices) may need more than one register to handle an address. An architecture that needs precise calculations may have registers larger than the largest possible address (e.g. a 64-bit word and 32-bit addresses).

In general this is the size with which the processor handles numbers best. A smaller number may occasionally be just as efficient, but in some cases there is extra cost for alignment. A larger number requires more than one register, and it is harder for the processor to deal with: it is slower and the operation generally loses atomicity.

There are architectures that use the word as the unit for data transfers, but again this is no coincidence, since it can simplify some operations.

Another point is that the instruction size tends to be the word size, at least in RISC architectures. This was more true in the past; today instructions tend to be smaller, at least in architectures with large words.

Memory allocations are often multiples of the size of a word.

There are architectures whose word size has varied over time. Intel x86, which started at 16 bits, then moved to 32 bits and is now 64 bits, can handle these 3 word sizes, respectively called WORD, DWORD, and QWORD.

In the past the word tended to match the size of a character, but that no longer makes sense.

Table of word sizes for various known architectures.

    
21.02.2017 / 00:11
0

Processors respond to program commands (or, by extension, the programmer's) through machine language, in the form of binary numbers, which represent, for example, 0 = 0 volts and 1 = 5 volts. This language is nothing more than the interpretation of a "table" of instructions, where each instruction ("opcode") has a task to execute inside the processor.

These "opcodes" or instructions are stored in program memory (ROM or RAM) and the processor will read, decode and execute sequentially one by one.

The entire sequence of events within the microprocessor chip, from the moment the system is powered on, is controlled by the clock, which sends pulses to the electronic components arranged to form a complex state machine. Each 0 and 1 stored electronically in program memory initializes and drives this state machine, setting up the next state.

It usually takes several clock cycles for the system to completely settle (or stabilize), depending on the type of "instruction" it has been fed.

The number of instructions the system designer wants determines the minimum number of bits (zeros and ones) required to encode the instruction set. With 1 bit we have only 2 possible states or instructions. With 2 bits, 4 instructions (00, 01, 10, 11). With 4 bits, 16 instructions, and so on.

This number of bits is the word of the processor.

But does that mean that with 64 bits more than 18,000,000,000,000,000,000 instructions are possible?

Yes, but to better understand why this word is so large, let's continue...

The handling of each instruction is usually done in two steps: fetch, where the instruction is transferred from memory to the instruction decoder circuit, and execution itself. See Instruction Cycle.

Taking the 8-bit 8085 microprocessor as an example: the fastest instructions, usually one byte, execute in four clock cycles; the slowest, in which the processor needs to fetch two more bytes of data from memory, take up to 16 cycles. In all, this processor has 74 instructions and its clock reaches a maximum of 5 MHz.

As we can see, the old processors were not very efficient in terms of instruction processing time. Higher performance can be achieved by: increasing the clock frequency, which runs into physical (electrical and magnetic) limitations of the buses (interconnections); increasing the number of external bits, which also faces physical space limitations; reducing the number of cycles to execute each instruction, currently done by overlapping instruction fetch cycles with decoding and/or by using cache memory; executing instructions in parallel, or multiprocessing; or, finally, increasing the number of internally processed bits, i.e. giving the ALU (Arithmetic and Logic Unit) and the accumulator(s) larger capacity: 16 bits, 32 bits, 64 bits...

Reviewing the history of microprocessors, the first, Intel's 4004, had a 4-bit word. Instructions were divided into two nibbles (a nibble being 4 bits, or half a byte): the first was the opcode, the second the modifier. Two more nibbles could compose the address or additional instruction data. See the PDF manual for this chip, the 4004 datasheet. Although it had instructions equivalent to an 8-bit processor, it could only perform calculations directly with no more than 4 bits (it was designed for a calculator!).

Nowadays, processors no longer decode instructions only through hard-wired logic, but through microprograms, and they use far more advanced and complex architectures.

Each opcode now embeds much more information than those old instructions did. In addition, each of the various processor cores is able to manipulate and perform calculations on numbers with many more digits and decimal places, for greater efficiency.

    
06.05.2018 / 17:01