Semiconductors are responsible for the computer revolution, which began in the mid-20th century and is still continuing. Miniaturization of computer components, the transistors and diodes made of semiconductors such as silicon and germanium, has made computers smaller and faster and given them far larger memory capacities, following Moore’s Law, the empirical observation that the number of transistors that fit on a chip, and with it computing speed and memory capacity, doubles approximately every 18 to 24 months. As a result, computers today are over a billion times faster and have over a billion times the memory capacity of ENIAC, the first general-purpose electronic computer, completed in 1946.
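
As a rough consistency check on that billion-fold figure (a back-of-the-envelope estimate, assuming one doubling every 24 months, the slower end of the stated range, over the roughly sixty years separating ENIAC from modern machines):

\[
n = \frac{60\ \text{years}}{2\ \text{years per doubling}} = 30,
\qquad
2^{30} \approx 1.07 \times 10^{9}.
\]

That is just over a billion; at the faster 18-month pace the accumulated factor would be closer to a trillion.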

A semiconductor is a material whose electrical conductivity is intermediate between that of a conductor and that of an insulator. Doped semiconductors come in two types, N-type and P-type. An N-type semiconductor has a small excess of electrons, contributed by a small amount of impurity added to it; a P-type semiconductor has a small excess of “holes”, i.e. missing electrons. Electrons can flow across the junction from the N-type side to the P-type side, but not the other way around, so current can flow in only one direction across a P-N junction. This one-way behavior is what makes semiconductors so useful and has allowed them to take the place of vacuum tubes: transistors and diodes both rely on it.
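
The one-way behavior can be made quantitative with the Shockley ideal diode equation, a standard textbook relation for an ideal P-N junction:

\[
I = I_S \left( e^{\,V/(n V_T)} - 1 \right),
\]

where I_S is the tiny reverse saturation current, V_T = kT/q is about 26 millivolts at room temperature, and n is an ideality factor typically between 1 and 2. Under forward bias (V > 0) the current grows exponentially with voltage, while under reverse bias it saturates at the negligible value −I_S, which is why the junction conducts in effectively only one direction.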

What has kept computers on Moore’s Law trajectory of becoming faster, smaller, and more capacious has been the ability to miniaturize their components. ENIAC used large, clumsy vacuum tubes the size of one’s fist. Since the invention of the transistor in 1947, the transistors and diodes that replaced vacuum tubes have been made smaller and smaller, and today these components are etched onto chips at sizes on the order of 100 nanometers. Eventually, though, Moore’s Law must break down, once transistor dimensions reach the atomic scale. This has been estimated to occur by around the year 2019.
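
To see why the atomic scale is a hard floor, consider a rough count of atoms across a feature (assuming silicon’s lattice constant of about 0.54 nanometers):

\[
\frac{100\ \text{nm}}{0.54\ \text{nm}} \approx 185.
\]

A 100-nanometer feature therefore spans only a couple of hundred atoms, and fewer than eight further halvings of that dimension would leave a single atom; beyond that point there is nothing left to shrink.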