Computer Architecture

History Lesson

Charles Babbage

A computer is a machine that can operate automatically, without any human help, by following a series of stored instructions called a program.

The first person to attempt this was a rather obsessive, notoriously grumpy English mathematician named Charles Babbage.

Many regard Babbage as the “father of the computer” because his machines had an input (a way of feeding in numbers), a memory (something to store these numbers while complex calculations were taking place), a processor (the number-cruncher that carried out the calculations), and an output (a printing mechanism)—the same basic components shared by all modern computers.

The basic model of a modern digital computer hasn’t changed much since Charles Babbage proposed his Analytical Engine in the 1830s.

John von Neumann

Von Neumann Architecture c.1945


In the early days of computing, computers were built to carry out a very specific task – breaking secret wartime codes, for example. But if the computer then had to do another job, it literally had to be completely re-wired by hand. This could take weeks. There was no such thing as a ‘software update’ in those days!

In 1945 the scientist John von Neumann proposed a design for a computer that was far easier to change. This design is known as the Von Neumann Architecture.

In Von Neumann architecture, both the data and the program being run are stored in memory.

It is also known as a ‘stored program’ computer.

With this architecture, if a computer needs to run a different program, it can do so simply by loading the new program into memory.
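The stored-program idea can be sketched in a few lines of Python. The "memory" here is just a list, and the instruction format is invented purely for illustration; the point is that because the program sits in ordinary memory alongside the data, running a different task is simply a matter of writing a different program into that memory, with no rewiring.

```python
# A toy memory: one flat list holds BOTH instructions (as tuples) and data.
# The instruction format ("ADD"/"SUB", addr1, addr2, dest) is invented here.
memory = [None] * 12

def load_program(memory, program):
    """'Re-programming' the machine is just overwriting memory - no rewiring."""
    for i, instruction in enumerate(program):
        memory[i] = instruction

def run(memory):
    pc = 0  # program counter
    while memory[pc] is not None:
        op, a, b, dest = memory[pc]
        if op == "ADD":
            memory[dest] = memory[a] + memory[b]
        elif op == "SUB":
            memory[dest] = memory[a] - memory[b]
        pc += 1

memory[8], memory[9] = 7, 5                # data lives at addresses 8 and 9

load_program(memory, [("ADD", 8, 9, 10)])
run(memory)
print(memory[10])                          # 12

load_program(memory, [("SUB", 8, 9, 10)])  # a "software update": same hardware
run(memory)
print(memory[10])                          # 2
```

Notice that loading the second program only overwrites memory; the hardware that runs it is untouched.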


The motherboard is a large printed circuit board, which has lots of chips, connectors and other electronics mounted on it.

The whole computer is built up around a motherboard, and it is the most important component in the PC.

Inside the computer, data is constantly being exchanged between the various devices. Most of the data exchange takes place on the motherboard itself, where all the components are connected to each other.

Data Bus, within the Motherboard

The image above illustrates how the CPU and Main Memory are connected by data buses. These are the high speed transfer routes through the motherboard.

In a computer, a bus is a transmission path on which signals are dropped off or picked up at every device attached to the line. Only devices addressed by the signals pay attention to them; the others discard the signals.
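This drop-off-and-discard behaviour can be modelled in a short sketch. The device names and addresses below are invented for illustration: every attached device sees every signal, but only the one whose address matches acts on it.

```python
# A minimal model of address-based bus filtering (illustration only).

class Device:
    def __init__(self, name, address):
        self.name = name
        self.address = address
        self.received = []

    def on_bus_signal(self, address, data):
        if address == self.address:   # addressed: pay attention
            self.received.append(data)
        # otherwise: silently discard the signal

class Bus:
    def __init__(self):
        self.devices = []

    def attach(self, device):
        self.devices.append(device)

    def send(self, address, data):
        # the signal is dropped off at EVERY attached device
        for device in self.devices:
            device.on_bus_signal(address, data)

bus = Bus()
ram = Device("RAM", address=0x10)
disk = Device("Disk", address=0x20)
bus.attach(ram)
bus.attach(disk)

bus.send(0x10, "write 0xFF")   # both devices see this, only RAM keeps it
print(ram.received)            # ['write 0xFF']
print(disk.received)           # []
```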

The motherboard contains the BIOS chip. This EEPROM chip contains the Basic Input Output System software, which holds information about your hardware and allows the computer to detect and interact with its own components. Once the BIOS has loaded and checked the hardware, it passes control to the next stage of the boot process.

Motherboard Layout and Design
Motherboard Example


The Von Neumann Architecture was fundamental to the design of the first CPUs.

It is the piece of hardware within a computer that carries out all the instructions and operations of a computer program by performing the basic arithmetical, logical, and input/output operations of the system.

The CPU uses the Fetch > Decode > Execute cycle to process and execute machine code: the low-level program code that high-level code is translated into so that the computer can understand it at a binary level.

  1. The CPU Fetches an instruction from Memory
  2. The CPU Decodes or interprets the instruction
  3. The CPU Executes the instruction
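The three steps above can be sketched as a loop. The two-field instruction set below is hypothetical, not real machine code; it just shows how the program counter drives repeated fetch, decode and execute steps.

```python
# Hypothetical 2-field instruction set, purely for illustration.
program = [
    ("LOAD", 7),    # put the value 7 in the accumulator
    ("ADD", 3),     # add 3 to the accumulator
    ("HALT", None),
]

accumulator = 0
pc = 0                        # program counter: address of the next instruction

while True:
    instruction = program[pc]      # 1. FETCH the instruction from memory
    opcode, operand = instruction  # 2. DECODE it into opcode + operand
    pc += 1
    if opcode == "LOAD":           # 3. EXECUTE
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)  # 10
```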

The speed at which the CPU performs this process is determined by the clock speed. This is just what it sounds like: it acts like a ticking clock, and the faster the clock ticks, the faster instructions are fetched, decoded and executed. Clock speed is measured in MHz or GHz, and a typical current CPU runs at around 3400MHz, or 3.4GHz.

The Intel Core i7 6950X is a 3.0GHz CPU, which means it can execute 3 × 10⁹ (3 billion) clock cycles per second, i.e. up to around 3 billion instructions processed per second.
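To put a 3.0GHz clock in perspective, a quick back-of-the-envelope calculation (simplistically assuming one instruction completes per clock cycle):

```python
clock_speed_hz = 3.0e9            # 3.0 GHz = 3.0 x 10^9 cycles per second

# Simplifying assumption: one instruction per clock cycle.
time_per_instruction = 1 / clock_speed_hz
print(time_per_instruction)       # about 3.3e-10 s, a third of a nanosecond

instructions_per_ms = clock_speed_hz / 1000
print(f"{instructions_per_ms:,.0f}")  # 3,000,000 instructions per millisecond
```

Real CPUs can retire more or fewer than one instruction per cycle, so this is only a rough illustration of scale.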

That same i7 6950X also has 10 cores. A core is basically the main part of the CPU: the section of the chip that does the processing of instructions. Multiple cores basically means multiple CPUs on one chip. Due to the overall size of the chip, the architecture has had to be shrunk down massively to fit them all on; this CPU uses a 14 nanometer process. To put it into scale, 1 nanometer is one billionth of a meter.

Having more cores allows the CPU to execute more instructions simultaneously, speeding up compute time massively.

The fastest memory available in a computer is the registers and ‘level 1 cache’ memory built into each CPU core, but even with modern processors the amount of data that can be stored here is very limited. Elsewhere on the CPU chip, and connected to the rest of the computer via the bus is the level 2 cache – larger than the level 1 cache, and somewhat slower, but still much faster to access than the main memory.

The Registers and Cache memory give the CPU a temporary storage area where it can park data that needs to be processed. This memory sits as close to the CPU cores as it can get, meaning it can be addressed incredibly fast.

Central Processing Unit (CPU)
Intel's 4 Core i5 CPU
Intel's new 18 Core i9-7980XE CPU

Main Memory

For a computer to run, its programs and the data they process need to be stored somewhere while being executed by the CPU. In modern computers the same memory area is used to store both.

Connected to the motherboard and running on the internal bus is the main memory. This is fast, high-capacity memory called RAM (Random Access Memory). It is called 'random access' because any location in it can be read or written directly, in any order. Data is written to the next available slot and removed once it is no longer required.

RAM is ‘volatile’. This means it requires a constant power supply in order to function and retain data. If the computer loses power, the contents of the memory are erased and cannot be recovered.

Because of this, it is only used for temporary storage of data and program instructions while the CPU requires them.

Another type of Memory is called ROM (Read Only Memory). This type of memory is designed to have data or instructions/programs hard coded onto it. This type of memory is non-volatile which means it does not need a constant power supply to retain its information. Data can however only be read from the chip, and requires specialist tools to be able to write information to it.

A common use for a ROM chip is to store the computer's initial boot program, called a bootstrapper. This loads first, then tells the computer to search for a storage medium or network to boot the Operating System from.

ROM chips have evolved over time from non-erasable, hard-coded chips to the newer type more commonly found in computers nowadays: EEPROM (Electrically Erasable Programmable Read Only Memory). This means the chip can now be erased and reprogrammed. An example of this is the BIOS chip on the motherboard.

RAM Modules
Fast Gaming DDR4 RAM
Examples of EEPROM Chips

Secondary Storage

So when programs require quick, efficient access to data, the computer uses the main memory; however, it is volatile, so data is lost when the computer is switched off.

To overcome this we use a Secondary Storage method to hold all of our data. Images, music, documents etc can all be stored permanently on different types of secondary storage.

The most common is the magnetic disk or HDD (Hard Disk Drive). This stores data on a spinning platter containing millions of tiny magnetised regions, each with a + and – pole; the polarity of each region can represent a binary 1 or 0.

The platters spin at up to 15,000RPM, and the data is read by a magnetic head, which moves backwards and forwards across the surface of the platter on an arm driven by a motor.
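The mapping from data to magnetic polarity can be illustrated with a short snippet: each bit of a byte becomes one region's polarity. The '+' and '-' symbols here stand for the two poles described above; this is an illustration of the idea, not how a real drive encodes bits.

```python
# Illustration only: map each bit of a byte to a magnetic polarity.

def byte_to_polarities(value):
    bits = format(value, "08b")   # e.g. 65 -> '01000001'
    return "".join("+" if b == "1" else "-" for b in bits)

print(byte_to_polarities(ord("A")))   # -+-----+  (the letter 'A', ASCII 65)
```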

Compared to RAM, which averages about 8GB in most computers, secondary storage methods tend to have a much higher capacity.

Historically, HDD sizes were incredibly small, averaging 10-20MB in 1985. Commonly these days it would be usual to find a desktop computer with at least 1TB.

It took 51 years before hard disk drives reached the size of 1 TB (terabyte, i.e. 1,000 GB). This happened in 2007. In 2009, the first hard drive with 2 TB of storage arrived. So while it took 51 years to reach the first terabyte, it took just two years to reach the second.
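The growth described above is easy to quantify (using decimal units, where 1TB = 1,000GB = 1,000,000MB):

```python
mb_1985 = 20                 # a typical 1985 drive, in megabytes
tb_2007_in_mb = 1_000_000    # 1 TB expressed in megabytes (decimal units)

growth_factor = tb_2007_in_mb / mb_1985
print(f"{growth_factor:,.0f}x")   # 50,000x more capacity in about 22 years
```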

The video below explains some of the types of Secondary Storage I have learnt about. This was created on my SKE course.

Internal View of a HDD
Magnetic reader head from a HDD
Layout of a HDD Platter