If you use a computer and software every day, you owe much of their operational logic to Claude Elwood Shannon. Shannon was an American mathematician, electrical engineer, and cryptographer known as “the father of information theory.” He graduated from the University of Michigan in 1936 with degrees in electrical engineering and mathematics and went on to MIT, where he worked under computer pioneer Vannevar Bush on an analog computer called the “differential analyzer.”

The Fundamental Concept that Underlies Modern Computers

While studying the complicated ad hoc circuits of the differential analyzer, Shannon designed switching circuits based on the work of the 19th-century mathematician George Boole. His master’s thesis in electrical engineering has been called the most important master’s thesis of the 20th century. In it, Shannon showed how Boole’s logical algebra could be implemented using electrical circuits of relays and switches. The most fundamental feature of digital computer design, the representation of “true” and “false” as “1” and “0” by closed and open switches, and the use of electronic logic gates to make decisions and to carry out arithmetic, can be traced back to the insights in Shannon’s thesis, published in 1938.
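To make the correspondence concrete, here is a minimal sketch in Python (an anachronism, of course; Shannon worked with relay diagrams, not code) in which 1 stands for a closed switch, 0 for an open one, and Boole’s operations arise from series and parallel connections:

```python
# A minimal sketch, not Shannon's notation: 1 models a closed switch,
# 0 an open one, and Boolean operations model switch arrangements.

def AND(a, b):
    """Two switches in series: current flows only if both are closed."""
    return a & b

def OR(a, b):
    """Two switches in parallel: current flows if either is closed."""
    return a | b

def NOT(a):
    """A relay wired to invert its input."""
    return 1 - a

# Any Boolean expression can be composed from these primitives.
# For example, exclusive-or (XOR), the core of binary addition:
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {XOR(a, b)}")
```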

Digital Circuit Design

In his thesis, Shannon proved that his switching circuits could be used to simplify the arrangement of the electromechanical relays then used in telephone call-routing switches. He then expanded the concept, proving that these circuits could solve any problem that Boolean algebra could solve. In the last chapter of his thesis, he presented diagrams of several circuits, including a 4-bit full adder.
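A circuit of that kind can be sketched in the same toy style (this illustrates the ripple-carry idea, not Shannon’s actual diagram):

```python
# A toy 4-bit ripple-carry adder: Python operators stand in for the
# relay contacts of 1938. Bits are little-endian (least significant first).

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in                  # two XOR gates
    carry_out = (a & b) | (carry_in & (a ^ b))  # AND and OR gates
    return sum_bit, carry_out

def add4(x, y):
    """Add two 4-bit numbers given as little-endian bit lists."""
    carry, out = 0, []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 5 + 3 = 8: 0101 + 0011 = 1000
print(add4([1, 0, 1, 0], [1, 1, 0, 0]))  # ([0, 0, 0, 1], 0)
```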

Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Shannon’s work became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II.

Shannon received his Ph.D. in mathematics from MIT in 1940, and in that same year became a National Research Fellow at the Institute for Advanced Study in Princeton, New Jersey. In Princeton, Shannon had the opportunity to discuss his ideas with influential scientists and mathematicians and had occasional encounters with Albert Einstein and Kurt Gödel. Shannon worked freely across disciplines, and this ability may have contributed to his later development of mathematical information theory.

In 1941, Shannon joined Bell Labs, where he worked on war-related matters, including cryptography: the study of techniques for secure communication in the presence of adversaries, and of protocols that prevent third parties from reading private messages. Unknown to those around him, he was also working on the theory behind information and communications.

Unbreakable Cryptography

A year after he founded information theory with his 1948 paper, Shannon published a paper proving that unbreakable cryptography was possible. (He had done this work in 1945, but at that time it was classified.) The scheme is called the one-time pad, or the Vernam cipher, after Gilbert Vernam, who had invented it near the end of World War I. The idea is to encode the message with a random series of digits, the key, so that the encoded message is itself completely random. The catch is that the key must be as long as the message to be encoded, and no key may ever be used twice.
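As a toy illustration of the scheme, using modern bytes and the XOR operation rather than Vernam’s teletype hardware:

```python
# A sketch of the one-time pad. Security depends on the key being truly
# random, as long as the message, kept secret, and never reused.

import secrets

def otp(message: bytes, key: bytes) -> bytes:
    """XOR message with key; applying it twice recovers the message."""
    assert len(key) == len(message), "key must be as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # fresh random key, used once
ciphertext = otp(message, key)
print(otp(ciphertext, key))              # b'ATTACK AT DAWN'
```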

Shannon’s contribution was to prove rigorously that this code was unbreakable. To this day, the one-time pad remains the only encryption scheme proven to be unbreakable.

The Information Age

As the war was coming to an end in 1945, the National Defense Research Committee (NDRC) was issuing a summary of its technical reports as a final step before closing down. Inside the volume on fire control, a special essay titled “Data Smoothing and Prediction in Fire-Control Systems,” coauthored by Shannon, Ralph Beebe Blackman, and Hendrik Wade Bode, formally treated the problem of smoothing fire-control data by analogy with “the problem of separating a signal from interfering noise in communications systems.”

In other words, it modeled the problem in terms of data and signal processing and thus heralded the coming of the Information Age.

Quantifying Information

Shannon defined the quantity of information produced by a source—for example, the quantity in a message—by a formula similar to the equation that defines thermodynamic entropy in physics. In its most basic terms, Shannon’s informational entropy is the number of binary digits required to encode a message. Today that sounds like a simple, even obvious way to define how much information is in a message.
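Concretely, for a source that emits symbol i with probability p(i), Shannon’s entropy is H = -Σ p(i) log₂ p(i) bits per symbol. A minimal sketch of the computation:

```python
# Shannon's entropy H = -sum(p * log2(p)): the average number of bits
# needed per symbol from a source emitting symbols with probabilities p.

from math import log2

def entropy(probabilities):
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))    # fair coin: 1.0 bit per toss
print(entropy([0.25] * 4))    # four equally likely symbols: 2.0 bits
print(entropy([0.9, 0.1]))    # biased coin: ~0.47 bits (more predictable,
                              # so each toss carries less information)
```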

In 1948, at the very dawn of the Information Age, this digitizing of information of any sort was a revolutionary step. In fact, his paper contained the first published use of the word “bit,” short for “binary digit” (a coinage Shannon credited to his Bell Labs colleague John Tukey).

The Discovery of Our Modern “Bandwidth”

As well as defining information, Shannon analyzed the ability to send information through a communications channel. He found that every channel has a maximum transmission rate that cannot be exceeded. Shannon called this the channel capacity; today it is popularly known as the “bandwidth” of the channel.
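For the classic case of a channel with bandwidth B hertz and signal-to-noise power ratio S/N, his 1948 paper gives the capacity as C = B log₂(1 + S/N) bits per second, a result now known as the Shannon-Hartley theorem:

```python
# The Shannon-Hartley capacity of an analog channel with additive
# white Gaussian noise: C = B * log2(1 + S/N) bits per second.

from math import log2

def channel_capacity(bandwidth_hz: float, snr: float) -> float:
    return bandwidth_hz * log2(1 + snr)

# A 3 kHz telephone line with ~30 dB SNR (a power ratio of about 1000)
# can carry at most about 30 kbit/s, no matter how clever the modem.
print(channel_capacity(3000, 1000))  # ~29,900 bits per second
```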

Shannon demonstrated mathematically that even over a noisy channel, essentially perfect, error-free communication could be achieved by keeping the transmission rate within the channel’s capacity and by using error-correcting schemes: the transmission of additional bits that would enable the data to be extracted from the noise-ridden signal.
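Shannon’s proof was famously non-constructive: it showed that good error-correcting codes must exist without exhibiting them. As a crude illustration of the idea of redundant bits (not Shannon’s construction), here is a 3-fold repetition code with majority-vote decoding:

```python
# Repetition coding: send every bit three times and decode by majority
# vote, so any single corrupted copy per triple is corrected.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    triples = (received[i:i + 3] for i in range(0, len(received), 3))
    return [1 if sum(t) >= 2 else 0 for t in triples]

sent = encode([1, 0, 1])  # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[4] = 1               # noise flips one transmitted bit
print(decode(sent))       # [1, 0, 1] -- the message survives intact
```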

Shannon’s Mouse

“Theseus,” created by Shannon in 1950, was a mechanical mouse controlled by an electromechanical relay circuit that enabled it to move around a labyrinth of 25 squares. The mouse was designed to search through the corridors until it found the target. Having traveled through the maze, the mouse could then be placed anywhere it had been before and, because of its prior experience, go directly to the target. If placed in unfamiliar territory, it was programmed to search until it reached a known location, then proceed to the target, adding the new knowledge to its memory and learning new behavior. Shannon’s mouse appears to have been the first artificial learning device of its kind.
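Theseus stored what it learned in relays rather than software, but the strategy just described can be sketched in code; the wall-free 5-by-5 grid, the random search, and all names below are simplifying assumptions:

```python
# A loose software sketch of the strategy described above. Theseus
# itself stored its memory in relays, not code, and this wall-free
# 5-by-5 grid and random search are simplifying assumptions.

import random

SIZE = 5  # a labyrinth of 25 squares
NEIGHBORS = {
    (r, c): [(r + dr, c + dc)
             for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]
    for r in range(SIZE) for c in range(SIZE)
}

def explore(square, target):
    """First trip: wander at random, remembering the last exit taken
    from each square; those exits end up pointing toward the target."""
    memory = {}
    while square != target:
        step = random.choice(NEIGHBORS[square])
        memory[square] = step
        square = step
    return memory

def run(square, target, memory):
    """Later trips: search through unfamiliar squares, learning as it
    goes, then replay memory once a familiar square is reached."""
    known, path = set(memory), [square]
    while square not in known and square != target:  # searching mode
        step = random.choice(NEIGHBORS[square])
        memory[square] = step                        # add to memory
        square = step
        path.append(square)
    while square != target:                          # replay mode
        square = memory[square]
        path.append(square)
    return path

memory = explore((0, 0), (4, 4))    # first, trial-and-error search
print(run((2, 2), (4, 4), memory))  # replays memory if (2, 2) was visited
```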

Shannon’s End-of-Life Loss of Information

In the 1990s, in one of life’s tragic ironies, Shannon came down with Alzheimer’s disease, which is characterized by the insidious loss of information in the brain. One’s memories and personality progressively degrade until there is zero recallable information left. Shannon succumbed to the depredations of the affliction in February 2001. But some of the signal generated by Shannon lives on through the information technology that powers our lives in the 21st century.