Central Processing Unit

The Central Processing Unit (CPU) is the component of a computer system that carries out the instructions of a program. It is the primary element performing the computer’s functions and the unit that reads and executes basic program instructions. The form, design, and implementation of CPUs have changed radically since the earliest examples, but their fundamental operation remains much the same.

Some of the first CPUs were custom-designed as part of a larger computer. However, the expense of designing custom CPUs for particular applications led to the development of mass-produced processors made for many purposes. The efficiency and consistency of these mass-produced CPUs have spread digital devices through modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from cars to cell phones and children’s toys.

The idea of a stored-program computer was already present before one was actually built. In 1945, John von Neumann distributed the “First Draft of a Report on the EDVAC,” which outlined the design of a stored-program computer able to perform a certain number of instructions of various kinds. These instructions could be combined to create useful programs for the EDVAC to run. Because the programs written for EDVAC were stored in high-speed computer memory rather than fixed by the physical wiring of the machine, the limits of earlier designs were overcome: the program, or software, that EDVAC ran could be altered simply by changing the contents of the computer’s memory. Today, most CPUs mirror von Neumann’s design, although elements of the competing Harvard architecture are commonly seen as well.

A CPU is digitally limited to a set of discrete states and requires switching elements to distinguish between and change those states. Early on, electrical relays and vacuum tubes were commonly used as switching elements. Each had its advantages and disadvantages: tubes switched far faster than relays but failed far more often. As a result, early tube-based computers were generally faster, but less reliable, than relay-based machines.

CPUs became more complex as technology evolved, allowing the construction of smaller and more reliable electronic devices, beginning with the transistor. Transistorized CPUs no longer had to be built out of bulky and unreliable switching elements such as relays and vacuum tubes. Instead, they were built onto printed circuit boards containing individual components. Another device, the integrated circuit, soon allowed many transistors to be manufactured on a single chip. Over time, transistors and chips were used to build entire CPUs. Even though these designs still required thousands of individual pieces, they were significantly smaller and more reliable than their predecessors. Transistor-based computers had many advantages, such as increased reliability and lower power consumption. They also allowed CPUs to operate at significantly faster clock speeds because of the shorter switching time of a transistor. The introduction of the microprocessor in the 1970s also greatly affected the design and performance of CPUs. Since the introduction of the first commercially available microprocessor (the Intel 4004, in 1971) and the first widely used microprocessor (the Intel 8080, in 1974), this class of CPUs has become predominant over nearly all other implementation methods.

The fundamental operation of most CPUs is to execute a sequence of stored instructions called a program. Nearly all CPUs follow four steps in their operation: fetch, decode, execute, and writeback. The first step, fetch, involves retrieving an instruction from program memory. After an instruction is fetched, the program counter is incremented by the length of the instruction word in terms of memory units, so that it points to the next instruction. Sometimes the instruction must be fetched from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned; caches and pipelined architectures have been introduced to ease this problem. In the decode step, the instruction is broken up into parts that are significant to other portions of the CPU. One group of bits, the opcode, indicates which operation to perform. The remaining parts of the instruction usually provide information required for that operation, such as the operands for an addition. Operands may be given as a constant value or as a place from which to fetch a value. In many designs, a microprogram helps translate instructions into the various configuration signals for the CPU. After the fetch and decode steps, the execute step is performed. During this step, various portions of the CPU are connected so they can carry out the desired operation. The final step, writeback, simply writes the result of the execute step to some form of memory, usually an internal CPU register. After the execution of the instruction and the writeback of its result, the entire process repeats, with the incremented program counter selecting the next instruction in sequence.
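
To make this cycle concrete, here is a minimal sketch in C of a hypothetical 8-bit accumulator machine. The instruction set (LOAD, ADD, STORE, HALT), its encoding, and the sample program are invented purely for illustration and do not correspond to any real CPU.

    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

    int main(void) {
        /* Program memory: load mem[10], add mem[11], store to mem[12], halt. */
        uint8_t memory[256] = {
            OP_LOAD, 10, OP_ADD, 11, OP_STORE, 12, OP_HALT,
            [10] = 20, [11] = 22
        };
        uint8_t pc  = 0;  /* program counter */
        uint8_t acc = 0;  /* accumulator register */

        for (;;) {
            /* Fetch: read the opcode, then increment the program counter. */
            uint8_t opcode = memory[pc++];
            if (opcode == OP_HALT)
                break;
            /* Decode: each remaining instruction carries one operand,
             * a memory address stored in the byte after the opcode. */
            uint8_t addr = memory[pc++];
            /* Execute and writeback: perform the operation and write the
             * result to a register or to memory. */
            switch (opcode) {
            case OP_LOAD:  acc = memory[addr];       break; /* writeback to register */
            case OP_ADD:   acc = acc + memory[addr]; break; /* writeback to register */
            case OP_STORE: memory[addr] = acc;       break; /* writeback to memory   */
            }
        }
        printf("mem[12] = %d\n", memory[12]); /* prints: mem[12] = 42 */
        return 0;
    }

Running the sketch fetches each instruction in turn, loads 20, adds 22, and writes the sum back to memory, printing mem[12] = 42.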

The way a CPU represents numbers is a design choice that affects how the device functions. Some early digital computers used an electrical model of the familiar base-ten numeral system, and a few others have used numeral systems such as base three. Almost all modern CPUs represent numbers in binary form, with each digit represented by a two-valued physical quantity such as a high or low voltage. The size and precision of the numbers a CPU can represent are also important. In a binary CPU, a bit refers to one significant place in the numbers the CPU deals with, and the number of bits used to represent numbers is referred to as the word size. Word size may differ between architectures, and often within different parts of the very same CPU. Word size also affects the number of memory locations the CPU can directly address. Larger word sizes require more structures to handle the additional digits, and therefore more complexity, size, power usage, and expense.
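
As a brief illustration of word size, the following C snippet prints how many distinct values words of a few common widths can represent; the widths shown (8, 16, and 32 bits) are simply familiar examples.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* An n-bit binary word can represent 2^n distinct values. */
        printf("8-bit word:  %u values (unsigned range 0..%u)\n",
               1u << 8, (unsigned)UINT8_MAX);
        printf("16-bit word: %u values (unsigned range 0..%u)\n",
               1u << 16, (unsigned)UINT16_MAX);
        /* A 32-bit word allows 2^32 = 4,294,967,296 values, which is also
         * the number of locations a 32-bit address can select. */
        printf("32-bit word: %llu values\n", 1ull << 32);
        return 0;
    }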

Most CPUs are synchronous in nature: they are designed around, and operate on assumptions about, a synchronization signal known as the clock. This clock signal usually takes the form of a periodic square wave. An appropriate period for the clock signal can be chosen by calculating the maximum time that electrical signals need to propagate through the various branches of the CPU’s circuits; the period must be longer than this worst-case propagation delay. In setting the clock period above the worst-case delay, it is possible to design the entire CPU so that it moves data around the rising and falling edges of the clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it forces the CPU to wait on its slowest elements, even when its other elements are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism.
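
For a rough sense of the arithmetic, assume an illustrative worst-case propagation delay of 10 nanoseconds (a made-up figure, not any real processor’s specification); the clock period must then be at least 10 ns, which caps the clock rate at about 100 MHz, as the short sketch below computes.

    #include <stdio.h>

    int main(void) {
        /* The clock period must exceed the worst-case propagation delay,
         * so the maximum clock rate is the delay's reciprocal.
         * The 10 ns delay below is an assumed example value. */
        double worst_case_delay_s = 10e-9;        /* 10 ns */
        double max_clock_hz = 1.0 / worst_case_delay_s;
        printf("Maximum clock rate: %.0f MHz\n", max_clock_hz / 1e6);
        return 0;
    }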

The performance and speed of a processor depend on its clock rate and its Instructions Per Clock (IPC), which together determine the Instructions Per Second (IPS) the CPU can perform. Many reported IPS values have represented peak execution rates on artificial instruction sequences with few branches, so they overstate performance on typical workloads. The performance of the memory hierarchy also greatly affects processor performance. Throughput can be raised further with multi-core processors, which essentially place two or more individual processor cores on a single integrated circuit.
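
In symbols, IPS = clock rate × IPC. The sketch below evaluates this for assumed example figures (a 3 GHz clock and an IPC of 2); the numbers are illustrative, not measurements of any particular processor.

    #include <stdio.h>

    int main(void) {
        /* IPS = clock rate (cycles/second) * IPC (instructions/cycle).
         * Both figures below are assumed example values. */
        double clock_hz = 3.0e9; /* 3 GHz clock */
        double ipc      = 2.0;   /* 2 instructions completed per cycle */
        double ips      = clock_hz * ipc;
        printf("Throughput: %.1f billion instructions per second\n", ips / 1e9);
        return 0;
    }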
