Functional Programming For Dummies
You use three different operations to perform tasks in lambda calculus: using variables as placeholders; binding a variable in an expression to create a function (abstraction); and applying a function to an argument (application). The following sections describe all three operations, which functional programmers can view as rules that govern every aspect of working with lambda calculus.

Working with variables in lambda calculus

When considering variables in lambda calculus, a variable is a placeholder (in the mathematical sense), not a container for values (in the programming sense). Any variable (x, y, z, or whatever identifier you choose to use) is a lambda term. Variables provide the basis of the inductive definition of lambda terms (induction being the inference of general laws from specific instances). To put this in easier-to-understand terms: if you always leave for work at 7:00 a.m. and are always on time, inductive reasoning says that you will always be on time as long as you leave by 7:00 a.m.

Induction in math relies on two cases to prove a property. For example, a common proof shows that a property holds for all natural numbers. The base (or basis) case proves the property for a particular number, usually 0. The inductive case, also called the inductive step, proves that if the property holds for an arbitrary natural number n, it must also hold for the next natural number, n + 1.

Variables may be untyped or typed. Typing here isn't quite the same as in other programming paradigms; a type doesn't indicate a kind of data. Rather, it defines how to interpret the lambda calculus.

Untyped variables in lambda calculus

The original version of Church's lambda calculus went through a number of revisions as a result of input from other mathematicians. The first such revision came from Stephen Kleene and J. B. Rosser in 1935, in the form of the Kleene–Rosser paradox. A problem existed in the way that logic worked in the original version of lambda calculus, and Church fixed this problem in a succeeding version by removing restrictions on the kind of input that a function can receive. In other words, a function has no type requirement.

The advantage of untyped lambda calculus is its greater flexibility; you can do more with it. However, the lack of types also means that computations in untyped lambda calculus aren't guaranteed to terminate. In some cases, you must use typed lambda calculus to obtain a definitive answer to a problem.

Simply-typed variables in lambda calculus

Church created simply-typed lambda calculus in 1940 to address a number of issues in untyped lambda calculus, the most important of which are paradoxes and terms whose β-reduction never terminates. In addition, simple typing makes the calculus strongly normalizing: every well-typed term reduces to a normal form.

Using application in lambda calculus

The act of applying one thing to another seems simple enough. When you apply peanut butter to toast, you get a peanut butter sandwich. Application in lambda calculus is almost the same thing. If M and N are lambda terms, the combination MN is also a lambda term. In this case, M generally refers to a function and N generally refers to an input to that function, so you often see these terms written as (M)N. The input, N, is applied to the function, M. Because the purpose of the parentheses is to define how to apply terms, it’s correct to refer to the pair of parentheses as the apply operator.
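In Python, which has first-class anonymous functions, the application (M)N can be sketched like this (the names M and N are illustrative, and a plain number stands in for the input term):

```python
# M is a lambda term acting as the function; N stands in for the input term.
M = lambda x: x + 1   # the function being applied
N = 5                 # the input (in pure lambda calculus, this too would be a function)

result = M(N)         # (M)N: the input N is applied to the function M
print(result)         # 6
```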

Understanding that application implies nesting is essential. In addition, because lambda calculus uses only functions, inputs are themselves functions. Consequently, saying M2(M1N) is the same as saying that N is applied as input to M1, and the result is then applied as input to M2.

In some cases, you see lambda calculus written without the parentheses. For example, you might see EFG as three lambda terms. However, lambda calculus is left-associative by default, which means that when you see EFG, the statement is really telling you that F is applied to E and then G is applied to the result, or ((E)F)G. Using the parentheses tends to avoid confusion. Also, be aware that the associative rule from math doesn't apply in this case: ((E)F)G is not equivalent to E(F(G)).
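Left association can be sketched in Python. The names E, F, and G here are illustrative, with E chosen so that applying F to it, then applying G to the result, produces a number:

```python
# EFG parses as ((E)F)G: F is applied to E first, then G is applied to the result.
E = lambda f: lambda x: f(f(x))   # expects a function f, then a value x
F = lambda x: x * 2               # the function handed to E
G = 3                             # the final value

left = E(F)(G)    # ((E)F)G evaluates to F(F(3)) = 12
print(left)       # 12
```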

To understand the idea of application better, consider the following pseudocode:
inc(x) = x + 1
All this code means is that to increment x, you add 1 to its value. The lambda calculus form of the same pseudocode written as an anonymous function looks like this:
(x) -> x + 1
You read this statement as saying that the variable x is mapped to x + 1. However, say that you have a function that requires two inputs, like this:
square_sum(x, y) = (x<sup>2</sup> + y<sup>2</sup>)
The lambda calculus form of the same function written in anonymous form looks like this:
(x, y) -> x<sup>2</sup> + y<sup>2</sup>
This statement is read as saying that the tuple (x, y) is mapped to x<sup>2</sup> + y<sup>2</sup>. However, as previously mentioned, lambda calculus allows functions to have just one input, and this one has two. To properly apply the functions and inputs, the code would actually need to look like this:
x -> (y -> x<sup>2</sup> + y<sup>2</sup>)
At this point, x and y are mapped separately. Transforming the code so that each function has only one argument is called currying. This transformation isn't precisely how you see lambda calculus written, but it does help explain some of the underlying mechanisms.
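Currying can be sketched directly in Python (the function names here are illustrative):

```python
# Uncurried form: a single function that takes both arguments at once.
square_sum = lambda x, y: x**2 + y**2

# Curried form: a chain of one-argument functions, as lambda calculus requires.
square_sum_curried = lambda x: (lambda y: x**2 + y**2)

print(square_sum(3, 4))          # 25
print(square_sum_curried(3)(4))  # 25, applied one argument at a time
```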

Using abstraction in lambda calculus

The term abstraction derives from the creation of general rules and concepts based on the use and classification of specific examples. The creation of general rules tends to simplify a problem. For example, you know that a computer stores data in memory, but you don’t necessarily understand the underlying hardware processes that allow the management of data to take place. The abstraction provided by data storage rules hides the complexity of viewing this process each time it occurs. The following information describes how abstraction works for both untyped and typed lambda calculus.

Abstracting untyped lambda calculus

In lambda calculus, when E is a lambda term and x is a variable, λx.E is a lambda term. An abstraction is a definition of a function, but it doesn't invoke the function. To invoke the function, you must apply it. For example, consider this function:
<em>f</em>(x) = x + 1
The lambda abstraction for this function is
λx.x + 1
Remember that lambda calculus has no concept of a variable declaration. Consequently, when abstracting a function such as
<em>f</em>(x) = x<sup>2</sup> + y<sup>2</sup>
to read
λx.x<sup>2</sup> + y<sup>2</sup>
the variable y is considered a function that isn’t yet defined, not a variable declaration. To complete the abstraction, you would create the following:
λx.(λy.x<sup>2</sup> + y<sup>2</sup>)
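The difference between defining (abstracting) a function and invoking (applying) it can be sketched in Python (the names are illustrative):

```python
# Abstraction: λx.x + 1 defines the function but doesn't run it.
inc = lambda x: x + 1

# Abstraction containing a nested abstraction: λx.(λy.x² + y²).
square_sum = lambda x: (lambda y: x**2 + y**2)

# Application invokes the abstractions.
print(inc(41))           # 42
print(square_sum(3)(4))  # 25
```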

Abstracting simply-typed lambda calculus

The abstraction process for simply-typed lambda calculus follows the same pattern as described for untyped lambda calculus, except that you now need to add type. In this case, the term type doesn't refer to string, integer, or Boolean (the types used by other programming paradigms). Rather, type refers to the mathematical definition of the function's domain (the set of inputs that the function accepts) and range (the codomain or image: the set of outputs that the function can produce), which is represented by A -> B. All this talk about type really means is that the function can now accept only inputs of the correct type, and it can produce only outputs of a certain type as well.

Alonzo Church originally introduced simply-typed lambda calculus as a simplification of type theory that avoids the paradoxical uses of untyped lambda calculus. A number of lambda calculus extensions also rely on simple typing, including products, coproducts, natural numbers (System T), and some types of recursion (such as Programming Computable Functions, or PCF).

The important issue discussed here is how to represent a typed form of lambda calculus statement. For this task, you use the colon (:) to display the expression or variable on the left and the type on the right. For example, referring to the increment abstraction shown above, you include the type, as shown here:

λx:ν.x + 1
In this case, the parameter x has a type of ν (nu), which represents the natural numbers. This representation doesn't state the output type of the function, but because + 1 would produce a natural number as well, it's easy to make the required assumption. This is the Church style of notation. However, in many cases you need to define the type of the function as a whole, which calls for the Curry style of notation. Here is the alternative method:
(λx.x + 1):ν -> ν
Moving the type definition outside means that the example now defines type for the function as a whole, rather than for x. You infer that x is of type ν because the function parameters require it. When working with multiparameter inputs, you must curry the function as shown before. In this case, to assign natural numbers as the type for the sum square function, you might show it like this:
λx:ν.(λy:ν.x<sup>2</sup> + y<sup>2</sup>)
Note the placement of the type information after each parameter. You can also define the function as a whole, like this:
(λx.(λy.x<sup>2</sup> + y<sup>2</sup>)):ν -> ν -> ν
Each parameter appears separately, followed by the output type. A great deal more exists to discover about typing, but this discussion gives you what you need to get started without adding complexity. The article at Goodmath.org provides additional insights that you may find helpful.

When working with particular languages, you may see the type indicated directly, rather than indirectly using Greek letters. For example, when working with a language that supports the int data type, you may see int used directly, rather than the less direct form of ν that's shown in the previous examples. For example, the following code shows an int alternative to the λx:ν.x + 1 code shown earlier:

λx:int.x + 1
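Python's optional type hints can sketch the same idea, using int as a stand-in for ν (the function and variable names are illustrative):

```python
from typing import Callable

# λx:int.x + 1, or equivalently (λx.x + 1):int -> int
def inc(x: int) -> int:
    return x + 1

# The curried square-sum function, typed int -> int -> int:
# each parameter is typed separately, followed by the output type.
def square_sum(x: int) -> Callable[[int], int]:
    def inner(y: int) -> int:
        return x**2 + y**2
    return inner

print(inc(1))            # 2
print(square_sum(3)(4))  # 25
```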

About This Article

About the book author:

John Paul Mueller has written more than 100 books and 600+ articles on everything from networking and home security to database management and heads-down programming. His technical editing talents have helped more than 70 authors refine and polish their manuscripts. John's books address every level of skill from beginning to advanced.
