How and why does the Haskell Cont monad work?

Haskell, Monads, Continuations

Haskell Problem Overview


This is how the Cont monad is defined:

newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Monad (Cont r) where
    return a = Cont ($ a)
    m >>= k  = Cont $ \c -> runCont m $ \a -> runCont (k a) c

Could you explain how and why this works? What is it doing?

Haskell Solutions


Solution 1 - Haskell

The first thing to realize about the continuation monad is that, fundamentally, it's not really doing anything at all. It's true!

The basic idea of a continuation in general is that it represents the rest of a computation. Say we have an expression like this: foo (bar x y) z. Now, extract just the parenthesized portion, bar x y--this is part of the total expression, but it's not just a function we can apply. Instead, it's something we need to apply a function to. So, we can talk about the "rest of the computation" in this case as being \a -> foo a z, which we can apply to bar x y to reconstruct the complete form.

Now, it happens that this concept of "the rest of the computation" is useful, but it's awkward to work with, since it's something outside of the subexpression we're considering. To make things work better, we can turn things inside-out: extract the subexpression we're interested in, then wrap it in a function that takes an argument representing the rest of the computation: \k -> k (bar x y).

This modified version gives us a lot of flexibility--not only does it extract a subexpression from its context, but it lets us manipulate that outer context within the subexpression itself. We can think of it as a sort of suspended computation, giving us explicit control over what happens next. Now, how could we generalize this? Well, the subexpression is pretty much unchanged, so let's just replace it with a parameter to the inside-out function, giving us \x k -> k x--in other words, nothing more than function application, reversed. We could just as easily write flip ($), or add a bit of an exotic foreign language flavor and define it as an operator |>.
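
Here is a small concretization of that idea (the operator definition follows the text; the concrete stand-ins for foo, bar and the numbers are my own, purely for illustration):

(|>) :: a -> (a -> r) -> r
x |> k = k x              -- reverse function application, i.e. flip ($)

-- the expression  foo (bar x y) z  turned inside-out:
example :: Int
example = bar 2 3 |> \a -> foo a 10
  where
    bar = (*)             -- the extracted subexpression's function
    foo = (+)             -- stands in for "the rest of the computation"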

Now, it would be simple, albeit tedious and horribly obfuscating, to translate every piece of an expression to this form. Fortunately, there's a better way. As Haskell programmers, when we think "building a computation within a background context", the next thing we think is: is this a monad? And in this case the answer is yes, yes it is.

To turn this into a monad, we start with two basic building blocks:

  • For a monad m, a value of type m a represents having access to a value of type a within the context of the monad.
  • The core of our "suspended computations" is flipped function application.

What does it mean to have access to something of type a within this context? It just means that, for some value x :: a, we've applied flip ($) to x, giving us a function that takes a function which takes an argument of type a, and applies that function to x. Let's say we have a suspended computation holding a value of type Bool. What type does this give us?

> :t flip ($) True
flip ($) True :: (Bool -> b) -> b

So for suspended computations, the type m a works out to (a -> b) -> b... which is perhaps an anticlimax, since we already knew the signature for Cont, but humor me for now.

An interesting thing to note here is that a sort of "reversal" also applies to the monad's type: Cont b a represents a function that takes a function a -> b and evaluates to b. As a continuation represents "the future" of a computation, so the type a in the signature represents in some sense "the past".

So, replacing (a -> b) -> b with Cont b a, what's the monadic type for our basic building block of reverse function application? a -> (a -> b) -> b translates to a -> Cont b a... the same type signature as return and, in fact, that's exactly what it is.
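
As a quick sketch (returnCont is my own name for it, to avoid clashing with the class method), here it is with an explicit lambda:

-- reverse function application, wrapped in the newtype; the same as the
-- definition return a = Cont ($ a) quoted at the top of the page
returnCont :: a -> Cont r a
returnCont x = Cont (\k -> k x)   -- equivalently Cont ($ x), i.e. Cont (flip ($) x)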

From here on out, everything pretty much falls directly out from the types: There's essentially no sensible way to implement >>= besides the actual implementation. But what is it actually doing?
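
For reference, here is that implementation again with the types spelled out (bindCont is my own name, to avoid clashing with the class method):

bindCont :: Cont r a -> (a -> Cont r b) -> Cont r b
bindCont m k = Cont $ \c ->   -- c :: b -> r is "the rest of the computation"
  runCont m $ \a ->           -- run m, which hands us its value a :: a
    runCont (k a) c           -- feed a to k, then carry on with c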

At this point we come back to what I said initially: the continuation monad isn't really doing much of anything. Something of type Cont r a is trivially equivalent to something of just type a, simply by supplying id as the argument to the suspended computation. This might lead one to ask: if Cont r a is a monad but the conversion is so trivial, shouldn't a alone also be a monad? Of course that doesn't work as is, since there's no type constructor to define as a Monad instance, but say we add a trivial wrapper, like data Id a = Id a. This is indeed a monad, namely the identity monad.

What does >>= do for the identity monad? The type signature is Id a -> (a -> Id b) -> Id b, which is equivalent to a -> (a -> b) -> b, which is just simple function application again. Having established that Cont r a is trivially equivalent to Id a, we can deduce that in this case as well, (>>=) is just function application.
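
For reference, here is that identity monad spelled out (the Functor and Applicative instances are my own additions, only because GHC now requires them as superclasses of Monad):

data Id a = Id a

instance Functor Id where
  fmap f (Id x) = Id (f x)

instance Applicative Id where
  pure = Id
  Id f <*> Id x = Id (f x)

instance Monad Id where
  Id x >>= f = f x      -- (>>=) is plain function application, as claimed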

Of course, Cont r a is a crazy inverted world where everyone has goatees, so what actually happens involves shuffling things around in confusing ways in order to chain two suspended computations together into a new suspended computation, but in essence, there isn't actually anything unusual going on! Applying functions to arguments, ho hum, another day in the life of a functional programmer.

Solution 2 - Haskell

Here's fibonacci:

fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

Imagine you have a machine without a call stack - it only allows tail recursion. How would you execute fib on that machine? You could easily rewrite the function to work in linear instead of exponential time, but that requires a tiny bit of insight and is not mechanical.

The obstacle to making it tail recursive is the third line, where there are two recursive calls. We can only make a single call, which also has to give the result. Here's where continuations enter.

We'll make fib (n-1) take an additional parameter, which will be a function specifying what should be done after computing its result, call it x. It will add fib (n-2) to it, of course. So: to compute fib n you compute fib (n-1); after that, if you call the result x, you compute fib (n-2); after that, if you call the result y, you return x+y.

In other words you have to tell:

How do you perform the following computation: "fib' n c = compute fib n and apply c to the result"?

The answer is that you do the following: "compute fib (n-1) and apply d to the result", where d x means "compute fib (n-2) and apply e to the result", where e y means c (x+y). In code:

fib' 0 c = c 0
fib' 1 c = c 1
fib' n c = fib' (n-1) d
           where d x = fib' (n-2) e
                         where e y = c (x+y)

Equivalently, we can use lambdas:

fib' 0 = \c -> c 0
fib' 1 = \c -> c 1
fib' n = \c -> fib' (n-1) $ \x ->
               fib' (n-2) $ \y ->
               c (x+y)

To get the actual Fibonacci number, use the identity function: fib' n id. You can think of the line fib' (n-1) $ ... as passing its result x to the next one.

The last three lines smell like a do block, and in fact

fib' 0 = return 0
fib' 1 = return 1
fib' n = do x <- fib' (n-1)
            y <- fib' (n-2)
            return (x+y)

is the same, up to newtypes, by the definition of the Cont monad. Note the differences: there's \c -> at the beginning; instead of x <- ... there's ... $ \x ->; and c instead of return.

Try writing factorial n = n * factorial (n-1) in a tail recursive style using CPS.
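
One possible solution, in case you want to check yours (my own sketch, adding the usual base case for 0):

factorial' :: Integer -> (Integer -> r) -> r
factorial' 0 c = c 1
factorial' n c = factorial' (n - 1) $ \x -> c (n * x)   -- the multiplication is deferred into the continuation

-- usage: factorial' 5 id == 120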

How does >>= work? m >>= k is equivalent to

do a <- m
   t <- k a
   return t

Making the translation back, in the same style as in fib', you get

\c -> m $ \a ->
      k a $ \t ->
      c t

Simplifying \t -> c t to c, we get

m >>= k = \c -> m $ \a -> k a c

Adding the newtype wrappers, you get

m >>= k  = Cont $ \c -> runCont m $ \a -> runCont (k a) c

which is the definition at the top of this page. It's complex, but if you know how to translate between do notation and direct use, you don't need to know the exact definition of >>=! The continuation monad is much clearer if you look at do-blocks.

Monads and continuations

If you look at this usage of the list monad, written first as a do-block, then desugared into >>=, and finally with each >>= partially applied to its first argument...

do x <- [10, 20]
   y <- [3,5]
   return (x+y)

[10,20] >>= \x ->
  [3,5] >>= \y ->
    return (x+y)

([10,20] >>=) $ \x ->
  ([3,5] >>=) $ \y ->
    return (x+y)

that looks like a continuation! In fact, (>>=) with its first argument applied has type (a -> m b) -> m b, which is Cont (m b) a (up to the newtype). See sigfpe's "Mother of all monads" for an explanation. I'd regard it as a good continuation monad tutorial, though it probably wasn't meant as one.
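
To make that observation concrete, here is a hedged sketch (toCont is my own name for it) showing that a partially applied (>>=) has exactly the shape Cont wraps:

-- any monadic value can be viewed as a continuation-style computation
-- whose answer type r is m b
toCont :: Monad m => m a -> Cont (m b) a
toCont m = Cont (m >>=)

-- e.g. runCont (toCont [10, 20]) (\x -> [x, x + 1])  ==  [10, 11, 20, 21]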

Since continuations and monads are so strongly related in both directions, I think what applies to monads applies to continuations: only hard work will teach you them, not reading some burrito metaphor or analogy.

Solution 3 - Haskell

EDIT: Article migrated to the link below.

I've written up a tutorial directly addressing this topic that I hope you will find useful. (It certainly helped cement my understanding!) It's a bit too long to fit comfortably in a Stack Overflow topic, so I've migrated it to the Haskell Wiki.

Please see: MonadCont under the hood

Solution 4 - Haskell

I think the easiest way to get a grip on the Cont monad is to understand how to use its constructor. I'm going to assume the following definition for now, although the realities of the transformers package are slightly different:

newtype Cont r a = Cont { runCont :: (a -> r) -> r }

This gives:

Cont :: ((a -> r) -> r) -> Cont r a

so to build a value of type Cont r a, we need to give a function to Cont:

value = Cont $ \k -> ...

Now, k itself has type a -> r, and the body of the lambda needs to have type r. An obvious thing to do would be to apply k to a value of type a, and get a value of type r. We can do that, yes, but that's really only one of many things we can do. Remember that value need not be polymorphic in r; it might be of type Cont String Integer or something else concrete. So:

  • We could apply k to several values of type a, and combine the results somehow.
  • We could apply k to a value of type a, observe the result, and then decide to apply k to something else based on that.
  • We could ignore k altogether and just produce a value of type r ourselves.

But what does all this mean? What does k end up being? Well, in a do-block, we might have something looking like this:

flip runCont id $ do
  v <- thing1
  thing2 v
  x <- Cont $ \k -> ...
  thing3 x
  thing4

Here's the fun part: we can, in our minds and somewhat informally, split the do-block in two at the occurrence of the Cont constructor, and think of the rest of the entire computation after it as a value in itself. But hold on, what it is depends on what x is, so it's really a function from a value x of type a to some result value:

restOfTheComputation x = do
  thing3 x
  thing4

In fact, this restOfTheComputation is roughly speaking what k ends up being. In other words, you call k with a value which becomes the result x of your Cont computation, the rest of the computation runs, and then the r produced winds its way back into your lambda as the result of the call to k. So:

  • if you called k multiple times, the rest of the computation will get run multiple times, and the results may be combined however you wish.
  • if you didn't call k at all, the rest of the entire computation will be skipped, and the enclosing runCont call will just give you back whatever value of type r you managed to synthesise. That is, unless some other part of the computation is calling you from their k, and messing about with the result...
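
As a tiny illustration of these two points (both and bail are my own examples, written against the Cont newtype above):

-- calling k twice: the rest of the computation runs once with True and once
-- with False, and we combine the two answers (here r is Int, so we add them)
both :: Cont Int Bool
both = Cont $ \k -> k True + k False

-- never calling k: the rest of the computation is skipped, and we just
-- synthesise an answer of type r ourselves
bail :: Cont Int Bool
bail = Cont $ \_ -> 0

demoBoth, demoBail :: Int
demoBoth = runCont both (\b -> if b then 10 else 1)   -- 10 + 1 == 11
demoBail = runCont bail (\b -> if b then 10 else 1)   -- continuation never runs; == 0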

If you're still with me at this point, it should be easy to see that this could be quite powerful. To make the point a little more concrete, let's implement some standard type classes.

instance Functor (Cont r) where
  fmap f (Cont c) = Cont $ \k -> ...

We're given a Cont value with bind result x of type a, and a function f :: a -> b, and we want to make a Cont value with bind result f x of type b. Well, to set the bind result, just call k...

  fmap f (Cont c) = Cont $ \k -> k (f ...

Wait, where do we get x from? Well, it's going to involve c, which we haven't used yet. Remember how c works: it gets given a function, and then calls that function with its bind result. We want to call our function with f applied to that bind result. So:

  fmap f (Cont c) = Cont $ \k -> c (\x -> k (f x))

Tada! Next up, Applicative:

instance Applicative (Cont r) where
  pure x = Cont $ \k -> ...

This one's simple. We want the bind result to be the x we get.

  pure x = Cont $ \k -> k x

Now, <*>:

  Cont cf <*> Cont cx = Cont $ \k -> ...

This is a little trickier, but it uses essentially the same ideas as fmap: first get the function from the first Cont, by making a lambda for it to call:

  Cont cf <*> Cont cx = Cont $ \k -> cf (\fn -> ...

Then get the value x from the second, and make fn x the bind result:

  Cont cf <*> Cont cx = Cont $ \k -> cf (\fn -> cx (\x -> k (fn x)))
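
A couple of quick checks of the two instances above (the example values are my own):

checkFmap, checkAp :: Int
checkFmap = runCont (fmap (+ 1) (Cont ($ 41))) id          -- 42
checkAp   = runCont (Cont ($ (* 2)) <*> Cont ($ 21)) id    -- 42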

Monad is much the same, although it requires runCont or a case or let to unpack the newtype.
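
For completeness, here is a sketch in the same style; it matches the definition quoted at the top of the page, just using runCont to unwrap the newtype:

instance Monad (Cont r) where
  return = pure
  Cont c >>= f = Cont $ \k -> c (\x -> runCont (f x) k)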

This answer is already quite long, so I won't go into ContT (in short: it is exactly the same as Cont! The only difference is in the kind of the type constructor, the implementations of everything are identical) or callCC (a useful combinator that provides a convenient way to ignore k, implementing early exit from a sub-block).
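
Even so, for the curious, here is a minimal sketch of what callCC for the Cont above can look like (the version in the transformers package is stated for ContT, but the idea is the same):

-- callCC hands the body an "escape" function: calling it throws away the
-- local continuation and jumps straight back out with the given value
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont $ \k -> runCont (f (\a -> Cont $ \_ -> k a)) k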

For a simple and plausible application, try Edward Z. Yang's blog post implementing labelled break and continue in for loops.

Solution 5 - Haskell

Trying to complement the other answers:

Nested lambdas are horrible for readability. This is exactly why let ... in ... and ... where ... exist: to get rid of nested lambdas by using intermediate variables. Using those, the bind implementation can be refactored into:

newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Monad (Cont r) where
    return a = Cont ($ a)
    m >>= k  = k a
            where a = runCont m id

Which hopefully makes what is happening clearer. The return implementation boxes the value with a partially applied ($). Applying runCont m to id then applies id to the boxed value, which gives back the original value.

For any monad where any boxed value can simply be unboxed, there is generally a trivial implementation of bind, which is to simply unbox the value and apply a monadic function to it.

To get the obfuscated implementation in the original question, first replace k a with Cont $ runCont (k a), which in turn can be replaced with Cont $ \c -> runCont (k a) c.

Now, we can move the where into a subexpression, so that we are left with

Cont $ \c -> ( runCont (k a) c where a = runCont m id )

The expression within parentheses can be desugared into (\a -> runCont (k a) c) $ runCont m id.

To finish, we use the property of runCont that f (runCont m g) = runCont m (f . g), and we are back to the original obfuscated expression.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type            Original Author    Original Content on Stackoverflow
Question                monb               View Question on Stackoverflow
Solution 1 - Haskell    C. A. McCann       View Answer on Stackoverflow
Solution 2 - Haskell    sdcvvc             View Answer on Stackoverflow
Solution 3 - Haskell    Owen S.            View Answer on Stackoverflow
Solution 4 - Haskell    Ben Millwood       View Answer on Stackoverflow
Solution 5 - Haskell    saolof             View Answer on Stackoverflow