Question

Given that all Agda programs terminate, the evaluation strategy doesn't matter for denotational semantics but it does matter for performance (in case you ever run your Agda programs).

So, what evaluation strategy does Agda use? Does using codata (♯, ♭) instead of data change evaluation strategy? Is there a way to force call-by-need aka lazy evaluation?


Solution

Type checking may require evaluating terms to normal form, so the strategy matters even if you never run your programs (though evaluation during type checking could itself be regarded as running the program). Agda evaluates expressions in normal order: a function is applied before its arguments are evaluated. Constructor arguments are likewise evaluated only on demand.

For example, suppose I have this definition of natural numbers and some operations on them:

data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

{-# BUILTIN NATURAL ℕ #-}
-- Older Agda versions might require you to specify
-- what is zero and what is suc.

infixl 4 _+_
infixl 5 _*_
infixr 6 _^_

_+_ : (m n : ℕ) → ℕ
zero  + n = n
suc m + n = suc (m + n)

_*_ : (m n : ℕ) → ℕ
zero  * n = zero
suc m * n = n + m * n

_^_ : (m n : ℕ) → ℕ
m ^ zero = suc zero
m ^ suc n = m * m ^ n

Since we are working with unary numbers, evaluating 2 ^ 16 will take a fair amount of time. However, if we try to evaluate const 1 (2 ^ 16), it will finish in almost no time at all.

const : ∀ {a b} {A : Set a} {B : Set b} →
  A → B → A
const x _ = x

The same thing goes for data types:

infixr 3 _∷_

data List {a} (A : Set a) : Set a where
  []  : List A
  _∷_ : A → List A → List A

record ⊤ {ℓ} : Set ℓ where

Head : ∀ {a} {A : Set a} → List A → Set _
Head         []      = ⊤
Head {A = A} (_ ∷ _) = A

head : ∀ {a} {A : Set a} (xs : List A) → Head xs
head []      = _
head (x ∷ _) = x

replicate : ∀ {a} {A : Set a} → ℕ → A → List A
replicate 0       _ = []
replicate (suc n) x = x ∷ replicate n x

Again, head (replicate 1000000 1) will evaluate almost instantly.

However, normal order is not call-by-need: computations are not shared.

open import Data.Product
open import Relation.Binary.PropositionalEquality

slow : 2 ^ 16 ≡ 65536
slow = refl

slower₁ : (λ x → x , x) (2 ^ 16) ≡ (65536 , 65536)
slower₁ = refl

slower₂ :
  let x : ℕ
      x = 2 ^ 16
  in _≡_ {A = ℕ × ℕ} (x , x) (65536 , 65536)
slower₂ = refl

In this case, type checking slower₁ and slower₂ will take roughly twice the time slow needs. In comparison, call-by-need would share the computation of x and compute 2 ^ 16 just once.
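For contrast, here is a small Haskell sketch of what sharing looks like under call-by-need (the names `expensive` and the trace message are mine, used only to make evaluation observable):

```haskell
import Debug.Trace (trace)

-- The trace message fires only when the thunk is actually evaluated.
expensive :: Integer
expensive = trace "evaluating 2 ^ 16" (2 ^ 16)

main :: IO ()
main = do
  let x = expensive  -- binds a thunk; nothing is evaluated yet
  print (x, x)       -- forces x once; the second use reuses the result
```

The trace message appears once, not twice: both components of the pair point at the same thunk, which is exactly the sharing that Agda's normal-order evaluation lacks in `slower₁` and `slower₂`.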


Note that during type checking, you have to evaluate expressions to normal form. If there are any binders around (λ or Π), you have to go under the binder and evaluate the inner expression as well.

λ n → 1 + n   ==>   λ n → suc n

How does codata change the picture? The interaction with reduction is actually fairly straightforward: ♯ x does not evaluate any further unless you apply ♭ to it.

This is also why ♯ is known as delay and ♭ as force.
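A rough Haskell analogy (hedged: in Haskell every expression is already delayed, so the language conflates the two roles): ♯ corresponds to building a suspended tail and ♭ to demanding it, which is what lets a corecursive definition be consumed one cell at a time:

```haskell
-- An infinite stream, analogous to `ones = 1 ∷ ♯ ones` with Agda's
-- musical coinduction: the tail stays suspended until it is demanded.
ones :: [Integer]
ones = 1 : ones

main :: IO ()
main = print (take 5 ones)  -- forces only the first five cells
```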


You can also compile Agda down to Haskell. There's also JavaScript, but I don't know how that is compiled, so I'll stick with compilation to Haskell.

The evaluation strategy is then mostly whatever the Haskell compiler (GHC, i.e. call-by-need) uses. As an example, here's what happens to the following definitions:

data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

_+_ : (m n : ℕ) → ℕ
zero  + n = n
suc m + n = suc (m + n)

data Vec {a} (A : Set a) : ℕ → Set a where
  []  : Vec A zero
  _∷_ : ∀ {n} → A → Vec A n → Vec A (suc n)

And after compilation:

-- ℕ
data T1 a0 = C2
           | C3 a0

-- Vec
data T12 a0 a1 a2 = C15
                  | C17 a0 a1 a2

-- _+_
d6 (C2) v0 = MAlonzo.RTE.mazCoerce v0
d6 v0 v1
  = MAlonzo.RTE.mazCoerce
      (d_1_6 (MAlonzo.RTE.mazCoerce v0) (MAlonzo.RTE.mazCoerce v1))
  where d_1_6 (C3 v0) v1
      = MAlonzo.RTE.mazCoerce
          (C3
             (MAlonzo.RTE.mazCoerce
                (d6 (MAlonzo.RTE.mazCoerce v0) (MAlonzo.RTE.mazCoerce v1))))

Yup, the last one is a bit crazy. But if you squint a bit, you can see that it is essentially:

d6 C2 v0 = v0
d6 (C3 v0) v1 = C3 (d6 v0 v1)
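Modulo the coercions, that is just the direct Haskell rendering of the Agda definitions. A runnable sketch with readable names (`Nat`, `plus`, and the conversion helper are mine):

```haskell
-- Direct Haskell counterparts of the Agda ℕ and _+_ definitions.
data Nat = Zero | Suc Nat

plus :: Nat -> Nat -> Nat
plus Zero    n = n
plus (Suc m) n = Suc (plus m n)

-- Helper to make results readable; not part of the compiled output.
toInt :: Nat -> Integer
toInt Zero    = 0
toInt (Suc n) = 1 + toInt n

main :: IO ()
main = print (toInt (plus (Suc (Suc Zero)) (Suc (Suc (Suc Zero)))))
-- prints 5
```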
Licensed under: CC-BY-SA with attribution