CS191 2019-02-13
Zhu, Justin

Wed, Feb 13, 2019


Shannon 1938

Shannon was inspired by Boole. His work carries Boole's logical calculus over to relay and switching circuits: any circuit is represented by a set of equations, the terms of the equations corresponding to the various relays and switches in the circuit.

It’s interesting to note that, in Shannon’s case, we are looking specifically at formalizing circuits using mathematical notation. This was a time when the physics and electrical engineering of circuits were well developed, but their mathematical treatment was not.

De Morgan’s Theorem can be proved using the method of perfect induction.
The reductions can be made using $W$, then $X$ and $Y$, giving $X_{ab} = W + X + Y + ZZ' + ZS'V = W + X + Y + ZS'V$: the term $ZZ'$ drops out because a variable combined with its complement gives $ZZ' = 0$.
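
Perfect induction is easy to mechanize: enumerate every assignment of the variables and confirm both sides agree. A minimal Python sketch (the function names are mine, not Shannon's):

```python
from itertools import product

def complement(x):
    return 1 - x

# Perfect induction: check De Morgan's theorem (X + Y)' = X'Y'
# by enumerating every assignment of the variables.
for x, y in product([0, 1], repeat=2):
    lhs = complement(x | y)              # (X + Y)'
    rhs = complement(x) & complement(y)  # X' * Y'
    assert lhs == rhs, (x, y)
print("De Morgan's theorem verified by perfect induction.")
```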

How does Shannon expand upon Boole’s work? Where in the notation is this causal linkage between Shannon and Boole demonstrated?

Addition acts as an OR and corresponds to a series connection; multiplication acts as an AND and corresponds to a parallel connection. (Shannon uses the hindrance convention, where 0 means a closed circuit and 1 means an open one.)
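
A minimal sketch of the hindrance convention, assuming 0 = closed and 1 = open as in Shannon 1938:

```python
# Hindrance convention: 0 = closed circuit, 1 = open circuit.
def series(x, y):
    # A series connection is open if either element is open: X + Y.
    return x | y

def parallel(x, y):
    # A parallel connection is open only if both elements are open: X * Y.
    return x & y

assert series(0, 0) == 0      # a chain of closed switches is closed
assert series(0, 1) == 1      # one open switch breaks the series chain
assert parallel(1, 0) == 0    # one closed switch short-circuits the pair
assert parallel(1, 1) == 1    # open only if both branches are open
```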

Shannon 1948

Shannon goes on to build on his earlier work, adding many levels of complexity and application to his formalized bit system. He introduces the unit “bit” (binary digit) and aptly attributes the term to John Tukey.

Doubling the duration of a transmission roughly squares the number of possible messages. Shannon’s model of communication is ultimately made up of an information source, a transmitter, a channel (with a possible noise source), a receiver, and a destination.

An important concept framed by Shannon is that the logarithm of the number of possible signals in a discrete channel increases linearly with time; this is what defines the capacity of a discrete noiseless channel, $C = \lim_{T \to \infty} \frac{\log N(T)}{T}$, where $N(T)$ is the number of allowed signals of duration $T$.
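
A quick numerical illustration, assuming a toy channel with $s$ equally long symbols so that $N(T) = s^T$ (the alphabet size 2 is an arbitrary choice):

```python
import math

s = 2                         # assumed alphabet size (binary symbols of unit duration)
for T in [10, 20, 40]:
    N = s ** T                # number of distinct messages of duration T
    print(T, N, math.log2(N)) # log2 N(T) = T * log2(s): linear in T

# Doubling the duration squares the number of possible messages:
assert (s ** 10) ** 2 == s ** 20
```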

The entropy equation in Shannon’s paper was derived from three criteria: $H$ should be continuous in the $p_i$; if all the $p_i$ are equal ($p_i = 1/n$), $H$ should be a monotonically increasing function of $n$; and if a choice is broken down into two successive choices, the original $H$ should be the weighted sum of the individual values of $H$. The only form satisfying all three is $H = -K \sum_i p_i \log p_i$.
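
A short sketch of the resulting formula, using base-2 logs so $H$ comes out in bits:

```python
import math

def entropy(ps):
    """Shannon entropy H = -sum(p_i * log2 p_i), in bits."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))    # ~0.47 bits: a biased coin is more predictable
print(entropy([0.25] * 4))    # 2.0 bits: four equally likely symbols
```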

$H$ is generally interpreted as the logarithm of the reciprocal probability of a typical long sequence, divided by the number of symbols in the sequence: for a typical sequence of $N$ symbols with probability $p$, $H \approx \frac{1}{N} \log \frac{1}{p}$.
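
This typical-sequence interpretation can be checked empirically. A sketch, assuming a Bernoulli source whose symbol probability $p = 0.9$ is an arbitrary choice:

```python
import math
import random

random.seed(0)
p = 0.9                                   # assumed probability of emitting a 1
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

N = 100_000
seq = [1 if random.random() < p else 0 for _ in range(N)]
log_prob = sum(math.log2(p if sym == 1 else 1 - p) for sym in seq)

# (1/N) * log2(1 / Pr[sequence]) should be close to H for a typical sequence.
print(-log_prob / N, "vs H =", H)
```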

Hamming Paper

Hamming’s work introduces what is now called the Hamming distance, and takes up the problem of finding $n$-bit codes that can detect and correct errors while using the minimum amount of redundancy.

Systematic codes are codes in which each code symbol has exactly $n$ binary digits, $m$ of them carrying information and the remaining $k = n - m$ serving as redundant error checks; this redundancy is what makes reliable communication of data possible.

These codes come in three types: single error detecting codes, single error correcting codes, and single error correcting plus double error detecting codes.

Hamming’s geometric model treats each $n$-bit code word as a vertex of an $n$-dimensional cube, with the metric $D(x, y)$ defined as the number of coordinates in which $x$ and $y$ differ; a minimum distance of 2 between code words gives single error detection, 3 gives single error correction, and 4 gives single error correction plus double error detection, as in the sketch below.
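
A sketch of a single-error correcting code in this spirit. The specific layout below is the standard textbook Hamming(7,4) arrangement (parity bits at positions 1, 2, and 4), which I am assuming here rather than following the paper's exact notation:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into a 7-bit code word.
    Parity bits sit at positions 1, 2, 4 (1-indexed), data at 3, 5, 6, 7."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # checks positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; the syndrome is the 1-indexed
    position of a single flipped bit (0 means no error detected)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:
        c[syndrome - 1] ^= 1     # flip the erroneous bit back
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[5] ^= 1                # flip one bit in transmission
assert hamming74_correct(corrupted) == word
```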

The discussion of Moore’s circuitry is about errors in real semiconductors: there are no perfect components. The reason anything works is that error-handling codes exist. If you store 3.17, you get 3.17 back out, because the system immediately detects and corrects whatever went wrong.

The Hamming distance between 01110010 and 11100000 is 3 (the words differ in three bit positions). If we want a single-error correcting code, the trick is to use only a subset of code words whose pairwise distance is at least 3, so that any single bit flip leaves the received word closest to its original code word. This turns into a sphere-packing problem. That’s the background, more or less.
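
The distance itself is trivial to compute; a sketch verifying the example above:

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("01110010", "11100000"))   # -> 3
```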

We can attain near certainty: the probability of error on any one “coin flip” (a single bit) may be modest, but with coding we can bound the overall error so small that we will essentially never get a wrong, bit-flipped answer.

Computers are completely driven by getting the probability of error down to a very small number, though never to 0.
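
A toy illustration of driving the error probability down without ever reaching 0: a repetition code with majority voting, where the per-bit flip probability $p = 0.1$ is an arbitrary assumption:

```python
import random

random.seed(1)
p = 0.1          # assumed per-bit flip probability

def wrong_rate(bit, n, trials=100_000):
    """Estimate how often majority voting over n noisy copies decodes wrongly."""
    wrong = 0
    for _ in range(trials):
        copies = [bit ^ (random.random() < p) for _ in range(n)]
        decoded = 1 if sum(copies) > n // 2 else 0
        wrong += (decoded != bit)
    return wrong / trials

for n in [1, 3, 7, 15]:
    print(n, wrong_rate(1, n))   # the error rate shrinks fast but never hits 0
```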