Introduction to Haskell, Part 3: Monads

by Adam Turoff
08/02/2007

In Part 1 of this series, I described how programming languages are grouped into two families, mechanical and mathematical. Programmers also fall into two general categories, those who think mechanically about what a computer does as it executes a program and those who think mathematically about what a program means. Some programmers are gifted and hop between these two worlds as needed.

In Part 2, I described how Haskell is a purely functional programming language and how it uses pure functions to express programs. Unfortunately, pure functions are limiting, because they prohibit side effects and basic I/O.

This article describes monads, the mechanism Haskell uses to structure computation, enable side effects, and communicate with the outside world.

The Language Divide Revisited

Haskell comes from a long and rich history of research in mathematics and functional programming. Naturally, many mathematically inclined programmers are quite comfortable with Haskell, while more mechanically minded programmers are generally befuddled by the language and the mathematical concepts it draws upon.

No concept is more befuddling than the monad.

Monads come from a branch of mathematics known as category theory (See more about category theory on Wikipedia and Haskell.org). Unfortunately, category theory is an obscure branch of mathematics that most people never come across, even when studying the mathematical foundations of computer science.

The news isn't all bad. For example, SQL is built on a mathematical foundation of relational algebra. A programmer may think of a SQL query in terms of what the relational operators mean or in terms of what the database engine does. Whether a programmer is mathematically or mechanically inclined, he can still formulate SQL queries to examine the data in a database.

Similarly, mechanically inclined programmers do not need to learn category theory, or grasp what monads mean in mathematical terms, in order to understand how they work. But before I can explain what monads do and how they work, I need to explain one of Haskell's most innovative features, its type system.

The Type System

Because Haskell is so deeply tied to mathematics, its designers and practitioners are concerned with understanding what a program means. Haskell compilers attempt to understand what your program means through type analysis and type inferencing.

Languages like C, Java, and C# offer a simple form of static typing. Static typing in these languages involves adding annotations to every variable, function, and method declaration to tell the compiler how to allocate memory and generate code. Unfortunately, this kind of typing is quite verbose and very error prone. In these languages, the type system can also be subverted by type casting and by ignoring type errors from the compiler. As a result, it's not difficult to write code with type mismatch errors that cause a program to crash.

Languages like Perl, Python, and Ruby offer some form of dynamic typing. In these languages, little to no type checking is done when a program is compiled. Instead, type issues are deferred until runtime, and a program is assumed to be meaningful and well typed. This means type annotations are unnecessary, and programs in these languages are generally more concise than their counterparts in C and C-like languages. Type errors, if they do occur, generate exceptions at runtime, and may cause a program to crash; or they may be ignored, leading to odd and unexpected runtime behavior.

Haskell takes a very different approach. It starts by performing type analysis with a type checker to understand your program at compile time. The type checker strictly prohibits type casting and does not allow type errors to be ignored. Because types are checked at compile time, and there is no escape from the type checker, Haskell is frequently described as both statically typed and strongly typed.

Any Haskell program with a type error simply does not compile. Haskell programs that do compile are free from type errors, and this guarantee eliminates a whole class of crashes and unexpected runtime behaviors. (Type errors are merely one kind of problem that can creep into a program. There are still many ways for a Haskell program to compile and misbehave at runtime.)
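
For instance, a definition that quietly mixes a string with a number, which a dynamically typed language would at best catch at runtime, is rejected before the program ever runs. Here is a small, hypothetical example:

-- Well typed: show converts the number 3 into the string "3",
-- and ++ joins the two strings.
greeting :: String
greeting = "count: " ++ show 3

-- Rejected by the type checker if uncommented: ++ expects another
-- String here, 3 is a number, and there is no cast to paper over
-- the mismatch.
-- mistake :: String
-- mistake = "count: " ++ 3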

Additionally, Haskell uses type inferencing to deduce the types of functions and values, eliminating the need for verbose type annotations in most cases. These features combine to provide more safety than C-like static typing and code at least as concise as that found in dynamic languages.

For example, here is the definition of length, a function that takes any list and returns the number of values contained within it:

length [] = 0
length (x:xs) = 1 + length xs
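
To see how these two equations cooperate, here is a sketch of the steps a call goes through for a three-element list, written out as comments:

-- length [7, 8, 9]
--   = 1 + length [8, 9]            -- second equation: x = 7, xs = [8, 9]
--   = 1 + (1 + length [9])         -- second equation: x = 8, xs = [9]
--   = 1 + (1 + (1 + length []))    -- second equation: x = 9, xs = []
--   = 1 + (1 + (1 + 0))            -- first equation: the empty list
--   = 3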

Here is a type annotation the compiler could infer for this function:

length :: [a] -> Int

This annotation can be read as follows:

  • The :: separates the symbol being defined on the left (length) from its type signature on the right ([a] -> Int).
  • [a] -> Int describes a function that accepts a value of type [a] and returns a value of type Int.
  • a is a type variable that can match any type.
  • [a] is a list of any kind of value.
  • Int is a native machine integer.

The compiler can infer this type because:

  • The value of the base case is 0, an Int.
  • The value of the recursive case is 1 + an Int, which must also be an Int.
  • length doesn't examine the contents of the list, so the type of the elements contained in the list doesn't matter.

Although type annotations aren't strictly required, it's a good idea to include them directly above a function definition. This helps the compiler check that your idea of a function's type matches the function's actual type. More importantly, the type annotation also acts as documentation for others who are reading your code. A better way to define length would be:

length :: [a] -> Int
length [] = 0
length (x:xs) = 1 + length xs
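
As a quick check, here is the same definition in a small, self-contained program. The name myLength is a hypothetical choice, used only so the definition doesn't collide with the length the Prelude already provides:

module Main where

myLength :: [a] -> Int
myLength [] = 0
myLength (x:xs) = 1 + myLength xs

-- If the annotation doesn't fit the definition, the module fails to
-- compile; claiming myLength :: [a] -> String here, for example, is
-- rejected, because 0 and 1 + ... are numbers, not Strings.

main :: IO ()
main = do
  print (myLength [10, 20, 30])   -- prints 3
  print (myLength "hello")        -- prints 5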

Here is a definition of the higher order function map, which applies a function to every element in a list to create a new list:

map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = (f x):(map f xs)

The type annotation here means the following:

  • map is a function that takes two values ((a -> b) and [a]), and returns a third value ([b]).
  • The first value is a function that takes a value of an arbitrary type a, and returns a value of an arbitrary type b.
  • The second value is a list of type a.
  • The result value is a list of type b.
  • The types a and b can be any two types, but the type a in the first argument must match the type a in the second argument, just as the type b in the first argument must match the type b in the result, as the short GHCi session below illustrates.
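
A couple of GHCi interactions, using the map the Prelude provides (which has exactly this type), make the roles of a and b concrete:

ghci> map (* 2) [1, 2, 3]
[2,4,6]
ghci> map show [1, 2, 3]
["1","2","3"]

In the first call, a and b are the same numeric type; in the second, a is a number while b is String, showing that the two type variables may stand for different types.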

Here is an example of a function using map:

f t = map length (lines t)

The compiler can infer the type of this function as:

f :: String -> [Int]

This is because:

  • lines accepts a string argument and returns a list of strings (i.e., String -> [String]).
  • length accepts a list of any type and returns an Int (i.e., [a] -> Int).
  • In this instance, map applies length to each string (itself a list of characters), converting the list of strings into a list of numbers.

Therefore, the type of f must be String -> [Int] (sometimes phrased as [Char] -> [Int]).
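
A short GHCi session confirms both the inferred type and the behavior (the exact formatting of GHCi's output varies a little between versions):

ghci> let f t = map length (lines t)
ghci> :type f
f :: String -> [Int]
ghci> f "one line\nand another"
[8,11]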

Type Classes

So far, type inferencing seems useful for deducing the type of a function when either specific types or generic types are used. However, this leaves a huge gap in the middle—functions where any one of a group of related types may be used.

This is where type classes come in. A type class unifies a set of interchangeable, related types. Here is a typical example, the Num type class that defines a series of operations that can be used with any kind of number:

class Num a where
  (+) :: a -> a -> a
  (-) :: a -> a -> a
  (*) :: a -> a -> a
  ...

This class declaration means that any type a that is a member of type class Num must implement the three operations +, -, and * (and a few others not listed here). Furthermore, the implementation of + for any type that implements this interface must take two values and return a value. All three values must also be of the same numeric type.

Type class declarations define the interface a type must implement to be a member of that type class. An instance declaration ties together this interface with a specific implementation for a single type. Here are portions of the instance declarations that specify that Int and Double are both members of type class Num:

-- low level multiplication of 2 integers
mulInt :: Int -> Int -> Int
mulInt a b = ... 

instance Num Int where
    (*) = mulInt
    ...

-- low level multiplication of 2 doubles
mulDouble :: Double -> Double -> Double
mulDouble a b = ...

instance Num Double where
    (*) = mulDouble
    ...
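
Because the Num code above is elided, here is a complete, self-contained sketch of the same class/instance machinery. The type class Describable is entirely made up for illustration; it is not part of the standard library:

-- a made-up type class with a single operation
class Describable a where
  describe :: a -> String

-- Bool is a member of Describable...
instance Describable Bool where
    describe True = "yes"
    describe False = "no"

-- ...and so is Int, with its own implementation
instance Describable Int where
    describe n = "the number " ++ show n

Calling describe True yields "yes", while describe (3 :: Int) yields "the number 3"; the compiler picks the right implementation from the type of the argument, just as it picks mulInt or mulDouble for *.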

The type checker knows how to infer when a type class is needed to describe a function's type. Here is the simple function from the previous article that squares numbers:

square x = x * x

The type inferencer deduces the following type for this function:

square :: (Num a) => a -> a

This type signature is a little different, because it includes an added constraint, (Num a), written to the left of the => symbol. It means that square takes any value of type a and returns a value of the same type a, so long as that type is also a member of the type class Num.
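
In GHCi, the constraint shows up in the reported type, and the same definition works at any numeric type (the exact formatting of the constraint varies a little between GHC versions):

ghci> let square x = x * x
ghci> :type square
square :: (Num a) => a -> a
ghci> square (5 :: Int)
25
ghci> square 2.5
6.25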
