Algorithms and Calculi

(c) 2011 Fabian M. Suchanek

http://www.suchanek.name

version: 2012-05-27

This is an overview of the concepts of algorithms and calculi. It serves as a supplement to my essay Thoughts on Rationality. By reading this essay, you acknowledge that the author does not accept any responsibility for the completeness or correctness of this essay. If you have comments, please send a mail to firstName@lastName.name .

Algorithms

Algorithms are lists of instructions. This section will define algorithms formally, and describe the properties that algorithms can have.

Algorithms

An algorithm is a finite list of well-defined instructions [Wikipedia / Algorithm]. For example, a cooking recipe is an algorithm. Here is an example of such an algorithm:
  1. Heat butter in a pan
  2. Open an egg
  3. Pour the egg into the pan
  4. Wait until the egg is solid
  5. Get the egg out of the pan
This recipe for a fried egg is a finite list of well-defined instructions — it is an algorithm.

In computer science, one is mostly concerned with algorithms that can be executed by a computer. Such algorithms receive a certain input. The input is, in the most general case, a string of characters. This can be a natural language sentence, a number ("3.14159265"), or a logical formula. Then the computer executes the instructions of the algorithm. In the end, the computer returns an output, which is again a string of characters.

Here is an example of such an algorithm:

  1. Let X be the input number
  2. Set Y to the number zero
  3. If X is smaller than 2, jump to Line 7
  4. Divide X by two
  5. Increase Y by one
  6. Jump to Line 3
  7. Return Y as output
This algorithm computes how often a number can be divided by two (i.e., it computes the integer logarithm of X for the base two). For example, assume that we start the algorithm with X=12. Line 2 sets Y=0. Line 3 does not have any effect, as X>2. Line 4 will divide X by two, yielding 6. Line 5 will increase Y from zero to one. Line 6 will jump back to Line 3. This process repeats itself until X has the value 1.5 and Y has the value 3. Then Line 3 jumps to Line 7, which returns Y=3 as the output, telling us that we can divide 12 by two 3 times in a row.

This is a very simple algorithm. Real algorithms can have thousands of lines of instructions. They form the basis of any computer program. The process of executing an algorithm is called computing.
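
For illustration, here is a minimal sketch of the same algorithm in Python (the function name integer_log2 is chosen just for this example):

  def integer_log2(x):
      # Count how often x can be divided by two before it drops below 2
      y = 0
      while x >= 2:     # Line 3: "If X is smaller than 2, jump to Line 7"
          x = x / 2     # Line 4: divide X by two
          y = y + 1     # Line 5: increase Y by one
      return y          # Line 7: return Y as output

  print(integer_log2(12))   # prints 3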

Formalisms

An algorithm receives a certain input and computes a certain output (>Algorithms). Algorithms can contain instructions of varying complexity. Consider for example the following two algorithms:
Algorithm 1:
  1. Let X be the input number
  2. Compute Y = 2*X
  3. Output Y

Algorithm 2:
  1. Let X be the input number
  2. Compute the integral of f(x) = 3x + 42*cos(x²) between 0 and X
  3. Output the result

The first algorithm is rather simple. The second algorithm, however, contains an instruction that is very complicated: Computing the integral of a function. If we allow more complex instructions, then our algorithms become much shorter. At the same time, we need to master much more complicated instructions. The type of instructions that we allow in an algorithm is called a formalism.
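
To give a flavour of this difference, here is a rough sketch of both algorithms in Python; the second one assumes that the SciPy library is available and delegates the complicated instruction to its numerical integration routine:

  import math
  from scipy.integrate import quad   # numerical integration (assumes SciPy is installed)

  def algorithm_1(x):
      return 2 * x                   # the simple instruction of Algorithm 1

  def algorithm_2(x):
      # The complicated instruction of Algorithm 2:
      # integrate f(t) = 3t + 42*cos(t^2) between 0 and x
      value, error = quad(lambda t: 3 * t + 42 * math.cos(t ** 2), 0, x)
      return value

In a formalism that offers such powerful instructions, Algorithm 2 is as short as Algorithm 1; in a formalism with only basic instructions, the integration step alone would take many lines.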

Turing complete formalisms

A formalism (>Formalisms) defines the type of instructions that we will allow in an algorithm (>Algorithms). We will now present some popular formalisms. It is a common assumption [Wikipedia / Church-Turing thesis] that anything that any formalism can compute can also be computed by the formalisms given here.
Lambda calculus
The Lambda calculus is a formalism that has only 3 basic rules [Wikipedia / Lambda calculus]. The "algorithm" in the lambda calculus is a function expressed by a lambda-term. The input is, likewise, a lambda term. Running the algorithm means applying the function to the input.
Combinatory logic
Combinatory logic can be seen as a reduced version of the lambda calculus with only 3, 2, or even just 1 operator [Wikipedia / Combinatory logic].
Turing machines
Turing machines are simplistic small computers [Wikipedia / Turing machine]. The input is a sequence of characters. The algorithm consists of a state-transition function, which moves an imaginary read/write head over the sequence of characters.
The P'' system
The P'' system is a simplistic computer that only understands 4 instructions [Wikipedia / P'']. The corresponding programming language is (aptly) called brainfuck [Wikipedia / Brainfuck]. The input is, as for Turing machines, a sequence of characters. The algorithm is a long sequence of the 4 basic instructions, which move an imaginary read/write head over the input.
Unrestricted grammars
Unrestricted grammars are sets of search/replacement rules that work on sequences of characters [Wikipedia / Unrestricted grammar]. Variants or synonyms include Semi-Thue-systems, Post-systems, Type-0 grammars, Markov algorithms, string rewriting systems, and production systems. The input is a sequence of characters. The algorithm is a set of replacement rules. Running the algorithm on the input basically means applying the replacement rules until no more rule can be applied (a small sketch of this idea follows after this list).
Programming languages
A programming language is a formal language in which one can instruct a (real) computer [Wikipedia / Programming language]. The input can be any sequence of characters, or, indeed, any computer-processable object. The algorithm is a list of instructions in the programming language.
The Universal Replacement System
The Universal Replacement System (URS) is a particularly simple formalism that I developed myself. It knows only one single instruction. See here for a description.
These systems are all equally powerful. That is: If we know that one formalism can compute something for a certain input, then every other formalism can do the same. Therefore, all of these systems are called Turing complete.
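
As a small illustration of the string-rewriting idea behind unrestricted grammars, here is a sketch in Python (the rules for unary addition are chosen just for this example):

  def run_rewriting_system(rules, text):
      # rules is a list of (pattern, replacement) pairs of plain strings.
      # Keep applying the first applicable rule until no rule applies anymore.
      changed = True
      while changed:
          changed = False
          for pattern, replacement in rules:
              if pattern in text:
                  text = text.replace(pattern, replacement, 1)   # apply the rule once
                  changed = True
                  break                                          # restart with the first rule
      return text

  # Unary addition: "111+11" (3 plus 2) should become "11111" (5)
  rules = [("+1", "1+"),   # move a 1 across the plus sign
           ("+", "")]      # drop the plus sign once nothing is left to move
  print(run_rewriting_system(rules, "111+11"))   # prints "11111"

Note that such a system only terminates if, at some point, no rule applies anymore.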

Interpreters

An interpreter is an algorithm (>Algorithms) that takes as input another algorithm. Then, the interpreter algorithm executes the input algorithm.

Let us look at an example. The following algorithm is an interpreter algorithm for cooking recipes. That means that it receives as input a cooking recipe. Then, the interpreter executes the input recipe, as if it were the cook.

  1. Let A be the input recipe
  2. Look at the first instruction in A
  3. If the instruction says "heat butter", then heat butter
  4. If the instruction says "stir", then stir
  5. ...
  6. If you have reached the last instruction in A, then halt
  7. Otherwise, look at the next instruction and jump to Line 3

This algorithm "interprets" a recipe that is given by A. If A is a recipe for pancakes, then the above algorithm will produce pancakes. If A is an algorithm for Spaghetti Bolognese, then the algorithm will produce Spaghetti Bolognese. That is: the algorithm acts like a cook.

It might seem of little interest to duplicate an algorithm this way. Yet, note that the interpreter algorithm will always be the same — no matter how complex the input algorithm is. Even if the input recipe contains "heat butter" in three different places, the interpreter algorithm will contain the instruction only once. In this way, the complexity of the recipe has shifted completely from the interpreter algorithm to the input algorithm. This is of theoretical interest, as we shall see.
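
As a sketch of the same idea in the computer world, here is a toy interpreter in Python for a small made-up instruction language (the instructions "say" and "add" are invented just for this illustration):

  def interpret(program):
      # program is a list of instructions; each instruction is a tuple
      variables = {}
      for instruction in program:
          if instruction[0] == "say":        # ("say", text): print the text
              print(instruction[1])
          elif instruction[0] == "add":      # ("add", name, amount): increase a variable
              name, amount = instruction[1], instruction[2]
              variables[name] = variables.get(name, 0) + amount
      return variables

  # The interpreter stays the same, no matter which program it receives as input:
  interpret([("say", "hello"), ("add", "x", 3), ("add", "x", 4), ("say", "done")])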

In the world of computer-processable algorithms (>Algorithms), an interpreter algorithm takes as input another algorithm and interprets it just like a cooking recipe. Interpreter algorithms exist for all Turing complete formalisms (>Turing):

Lambda calculus
The Lambda calculus has an interpreter that is a function that interprets another function [Wikipedia / Binary Lambda calculus].
Combinatory logic
Combinatory logic is equivalent to the lambda calculus and, hence, has a corresponding interpreter.
Turing machines
The interpreter for Turing machines is called a "Universal Turing Machine" [Wikipedia / Universal Turing machine].
The P'' system
An interpreter exists for the P'' system because of its Turing completeness. It's just that people have not bothered finding and publishing it. Some inspiration can be found here.
Unrestricted grammars
Unrestricted grammars have an interpreter because of their Turing-completeness. Such an interpreter is not obvious, but in the worst case one could use the known transformation between Turing Machines and unrestricted grammars. An alternative idea is to introduce additional symbols into the input string. These act as markers and can "carry" symbols from the rule that is being applied to the place where it is being applied.
Programming languages
Some programming languages (such as LISP and C) have interpreters that are used in practice.

Formalism Complexity

As we have seen, formalisms can allow instructions of different complexity (>Formalisms). Some formalisms allow only very basic instructions, while other formalisms allow very complex instructions. Yet, since all Turing-complete formalisms have the same expressive power, the complexity always hides somewhere.

The complexity of a formalism can hide in 3 places:

  1. The complexity can hide in the instructions of the formalism itself. The syntax can allow only very basic instructions, or it can also allow very complex instructions.
  2. The complexity can hide in the algorithm (>Algorithms). If the formalism allows only very basic instructions, then the algorithms are longer, because it takes more "small" instructions to describe the same procedure.
  3. The complexity can hide in the input. As we have seen (>Interpreters), we can always shift complexity from the algorithm to the input.

To compare the complexity of the formalisms, one could construct a Turing machine (>Turing) that can execute an algorithm in that formalism. For example, we can devise a Turing machine that computes a Lambda function. We can also devise a Turing machine that executes a P'' algorithm. By comparing the sizes of the two Turing machines, we can get an indication of the complexity of the formalisms.

Here, we only note that the Lambda calculus is at least as complex as combinatory logic, because the Lambda calculus has more operators and also variables. We further note that the lambda calculus requires the replacement of terms by other terms. Thereby, it is at least as complex as unrestricted grammars. In fact, the very definition of the lambda calculus already uses substitutions. We further note that an unrestricted grammar requires insertion.

The search for the least complex formalism is the focus of ongoing research. A Turing-complete formalism that is as minimalistic as possible is called a Turing tarpit [Wikipedia / Turing tarpit]. Turing tarpits are interesting mainly for theoretical reasons (or out of idealism). These formalisms are typically completely unintelligible to humans. Due to their Turing-completeness, all other formalisms can be reduced to a Turing tarpit. In all of the following, we make observations that are independent of the particular formalism.

Termination

An algorithm allows computing an output from an input in a systematic fashion (>Algorithms). Certain algorithms do not terminate. These algorithms run forever. This means that when we follow the instructions precisely, we can never stop. Here is an example:
  1. say "hello"
  2. jump to Line 1
This algorithm does not terminate. Non-terminating algorithms are of importance for the concept of computability (>Computability).
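
The same non-terminating algorithm, sketched in Python:

  # This loop corresponds to the two-line algorithm above: it never stops.
  while True:
      print("hello")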

Computability

An algorithm allows computing an output from an input in a systematic fashion (>Algorithms). There are certain things that an algorithm cannot compute on principle — no matter how much knowledge it has or how long we let it compute.

To see this, assume that someone gives you an algorithm on a sheet of paper and asks you whether it terminates (>Termination). If the algorithm contains a loop (such as the one in >Termination), then the answer is simple: This algorithm will obviously not terminate. However, real algorithms can have millions of instructions, and then it is less clear whether the algorithm terminates. Also, we cannot simply run the algorithm and see whether it terminates. This is because we would not know when we can be sure that the algorithm will really run forever. Rather, we want to analyze the algorithm without running it. More precisely, we want to list the steps that are necessary to find out whether a given algorithm A terminates. These steps could look, for example, as follows:

  1. If A contains no "jump to" instructions, then A terminates
  2. If A contains a "jump to" instruction that induces a loop, then A does not terminate
  3. ...
This list of steps is by itself an algorithm. It is an algorithm that decides whether another algorithm terminates. The input is another algorithm and the output is "true" if the other algorithm terminates and "false" otherwise. The problem is that such an algorithm cannot exist. There can be no algorithm that decides whether another algorithm terminates or not. Why is this? Assume that there was such an algorithm T, i.e. an algorithm which, given another algorithm on a sheet of paper, decides whether this given algorithm terminates or not. Then, we can construct the following algorithm (call it A*):
  1. Run T on A*
  2. If T says that A* terminates, then jump to line 2
  3. Else terminate
This algorithm asks T whether A* (that is: this very algorithm above) terminates. If T says that A* terminates, then A* does not terminate. If T says that A* does not terminate, then A* terminates. Hence, T is wrong about A* in either case. An algorithm like T, which decides whether another algorithm terminates, cannot exist.
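
Here is a sketch of this construction in Python, assuming a hypothetical function terminates() that plays the role of T (as argued above, no correct implementation of such a function can exist):

  def terminates(algorithm):
      # Hypothetical decider T: would return True if algorithm() eventually halts.
      raise NotImplementedError   # no correct implementation can exist

  def a_star():
      if terminates(a_star):   # Lines 1/2: ask T about this very algorithm
          while True:          # ... and if T says "terminates", run forever
              pass
      # otherwise (Line 3): terminate immediately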

This implies that there are certain questions that an algorithm can never answer — no matter how much knowledge it has or how long we let it compute. Such problems are called undecidable.

Calculi

Calculi are formal systems for mathematical proofs. This section will introduce the general concepts of proofs and calculi.

Statements

Statements, in the sense of calculi, are declarative sentences only, i.e., sentences that can be true or false (as opposed to, e.g., questions or commands). For example, the following sentence is a statement:
The Earth is flat.
Obviously, this is a false statement. However, it is still a statement.

Formal languages

For proofs (>Proofs), natural language is often not precise enough. Therefore, one often encodes the statements (>Statements) in a more precise language — a formal language.

By far the most popular formal language is First Order Logic (FOL) [Wikipedia / First order logic]. To say that there exists a man whom all women love, FOL would write

∃ x: ∀ y: loves(y,x)
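
Strictly speaking, this formula only says that there is someone whom everyone loves. With explicit predicates for "man" and "woman" (the predicate names are chosen here just for illustration), the statement would read:

∃ x: (man(x) ∧ ∀ y: (woman(y) => loves(y,x)))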

There exist other formal languages as well. For the present text, we stay with natural language.

Rules of Inference

A rule of inference is a rule that says that certain statements logically entail other statements (see [Wikipedia / Rules of Inference]). This means that if we know that the first statements are true, then we know that the second statements are also true. For example, an inference rule could tell us that if we have a statement X and the implication X => Y, then we may deduce Y. More concretely, assume that we have the following statements:
Whenever it rains, the street gets wet
It rains now.
Then the inference rule tells us that we can deduce:
The street gets wet now.

The difference between a rule (>Rules) and a rule of inference is very subtle: A rule is a statement. In the example, the rule is "Whenever it rains, the street gets wet". A rule of inference is a principle that allows us to actually apply the rule and to accept the conclusion of the rule as a new fact. In the example, the rule of inference allows us to accept "The street gets wet now" as a new fact. This process is called deduction.

The Instantiation Principle

The instantiation principle is a rule of inference (>Inference) that says that if we have a rule, then we may deduce all instantiations of that rule.

For example, assume that we have the following rule:

If X is a dog, then X is a mammal.
Then the inference rule tells us that we can deduce the following facts:
If Bello is a dog, then Bello is a mammal.
If Snoopy is a dog, then Snoopy is a mammal.
If Alice is a dog, then Alice is a mammal.
...
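
A minimal sketch of this instantiation step in Python, with the rule represented simply as a text template:

  # The rule, with a variable X
  rule = "If {X} is a dog, then {X} is a mammal."

  # The instantiation principle: deduce one instantiation per known individual
  for individual in ["Bello", "Snoopy", "Alice"]:
      print(rule.format(X=individual))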

Modus Ponens

The Modus Ponens is a rule of inference (>Inference) that says:
If we have the statements A, B, C
and we have rule "A and B and C => D"
then we may deduce D
(with arbitrarily many premises in any order)
Modus Ponens also works if there is only one premise. As an example with two premises, assume that we have the following statements:
Bob loves Mary.
Bob is rich.
If Bob loves Mary and Bob is rich, then Bob will buy a horse for Mary.
Then, Modus Ponens allows us to deduce:
Bob will buy a horse for Mary.

This makes the Modus Ponens an almost trivial mechanism of inference. Yet, it is the basic mechanism of all proofs.
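
A minimal sketch of Modus Ponens in Python, with statements represented as plain strings:

  facts = {"Bob loves Mary", "Bob is rich"}
  premises = {"Bob loves Mary", "Bob is rich"}
  conclusion = "Bob will buy a horse for Mary"

  # Modus Ponens: if every premise of the rule is a known fact, deduce the conclusion
  if premises <= facts:         # <= on sets means "is a subset of"
      facts.add(conclusion)

  print(conclusion in facts)    # prints True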

Assumptions

An assumption is a statement that one takes to be true for the sake of the argument. Typically, one chooses the assumptions in such a way that everybody agrees on them.

One can also choose hypothetical assumptions. These serve for "what-if" analyses. For these analyses, one assumes that certain statements are true. Then, one deduces the consequences of these statements.

Hypotheses

A hypothesis is a statement of which we want to know whether it is true. We can think of a hypothesis as a question. We will see how we can use proofs (>Proofs) to establish the truth of a hypothesis.

Axiom Schemata

An axiom schema is a statement that contains variables. These variables stand for statements. Here is a simple example:
If X and Y, then we know that X

In this example, X and Y stand for statements. If we replace the variables by statements, we get an ordinary statement:

X=Bob eats
Y=Lisa drinks
If Bob eats and Lisa drinks, then we know that Bob eats

Proofs

Informally speaking, a proof is a process that establishes that a hypothesis (>Hypotheses) is true. A proof usually relies on a number of assumptions (>Assumptions). The proof establishes that, if the assumptions are true, then the hypothesis has to be true as well. Technically speaking, the proof establishes that the hypothesis necessarily follows from the assumptions.

In general, a proof can have three outcomes:

  1. The proof can find that the hypothesis follows from the assumptions (i.e., that the hypothesis is true)
  2. The proof can find that the negation of the hypothesis follows from the assumptions (i.e., that the hypothesis is false)
  3. The proof can find that neither the hypothesis nor its negation follow from the assumptions. That is, if the assumptions are true, the hypothesis can be either true or false.

Calculus

A calculus is the formal context in which one performs proofs (>Proofs). A calculus defines
  1. A formal language in which all statements are formulated (>FormalLanguages)
  2. A set of inference rules that the proof may use (>Inference)
  3. Axiom schemata that the proof can use as assumptions (>Axioms)

The most popular calculi are the following:

Q0
Q0 is a deduction system that resembles the Lambda calculus [Wikipedia / Q zero].
Natural deduction
Natural deduction is a deduction system that aims to be as close to human thinking as possible [Wikipedia / Natural deduction]. It has no axioms, but many rules of inference.
Hilbert systems
Hilbert systems are related to natural deduction systems [Wikipedia / Hilbert-style deduction systems]. Hilbert systems have more axioms and fewer rules of inference.
Sequent calculus
The sequent calculus is based on one single type of statement, a kind of generalized implication [Wikipedia / Sequent calculus].
The Tableaux method
The tableaux method treats proofs in a graphical form [Wikipedia / Tableaux method].
Resolution
Resolution assumes that all statements are of the same form. It has only one inference rule [Wikipedia / Resolution].
We will now go on to see that a calculus is basically a formalism for algorithms (>Formalisms).

Deduction systems

A deduction system can be seen as an algorithm (>Algorithms) that aims to prove a hypothesis (>Proofs). It works in the context of a calculus, i.e., in the context of given axioms (>Axioms), a formal language (>FormalLanguages), and rules of inference (>Inference). The deduction system receives as input the assumptions (>Assumptions) and the hypothesis (>Hypotheses). It returns as output "true" if the hypothesis follows from the assumptions. Otherwise, it returns "false".

The simplest deduction systems are algorithms that work bottom-up. They essentially produce a constructive proof for the hypothesis. The algorithm starts off with the assumptions. It uses rules of inference (>Inference) to deduce new statements that logically follow from the assumptions. It keeps deducing new statements until it finally succeeds in deducing the hypothesis. This means that the hypothesis is a logical conclusion from the assumptions. In this case, the algorithm returns "true". If the algorithm has derived everything that can be derived, and the hypothesis is not among the derived statements, the algorithm returns "false". In the simplest case, the algorithm just systematically enumerates everything that can be deduced from the assumptions, until it hits the hypothesis. Other, smarter deduction systems can be conceived.
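
A minimal sketch of such a bottom-up deduction system in Python, with statements represented as plain strings and rules as (premises, conclusion) pairs (this representation is chosen just for this illustration):

  def prove(assumptions, rules, hypothesis):
      # Repeatedly apply Modus Ponens until the hypothesis is derived
      # or nothing new can be derived anymore.
      known = set(assumptions)
      while hypothesis not in known:
          new_facts = {conclusion for premises, conclusion in rules
                       if premises <= known and conclusion not in known}
          if not new_facts:          # everything derivable has been derived
              return False
          known |= new_facts
      return True

  assumptions = ["It rains now", "Bob loves Mary", "Bob is rich"]
  rules = [({"It rains now"}, "The street gets wet now"),
           ({"Bob loves Mary", "Bob is rich"}, "Bob will buy a horse for Mary")]
  print(prove(assumptions, rules, "The street gets wet now"))   # prints True
  print(prove(assumptions, rules, "It snows now"))              # prints False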

Calculi as Formalisms

A deduction system is an algorithm that works on statements (>Deduction). Like every algorithm, the deduction system works in the frame of a formalism (>Formalisms). The formalism defines the type of actions that we may use. The actions of a deduction system are the rules of inference (>Inference) of the calculus (>Calculus). The algorithm can choose to apply these rules in different orders to different statements.

The deduction system receives as input the assumptions (>Assumptions). It adds to these assumptions the axioms (>Axioms) of the calculus. As with algorithms (>Complexity), we can always shift the complexity of the deduction system either to the input (more axioms) or to the formalism (more rules of inference).

This means that a calculus can be seen as a kind of programming language. A program in that programming language is an algorithm that tries to prove a hypothesis (>Proofs). In other words: A calculus is a formalism for algorithms.