Is operator && strict in Haskell?
05-02-2021
Question
For example, I have an operation fnB :: a -> Bool that makes no sense until fnA :: Bool returns False. In C I may compose these two operations in one if block:

if( fnA && fnB(a) ){ doSomething; }

and C will guarantee that fnB will not execute if fnA returns false.
But Haskell is lazy and, generally, there is no guarantee which operation will execute first, unless we use seq, $!, or something else to make our code strict. Generally, this laziness is what we want. But with the && operator, I would expect that fnB will not be evaluated until fnA returns its result. Does Haskell provide such a guarantee with &&? And will Haskell evaluate fnB even when fnA returns False?
Solution
The function (&&) is strict in its second argument only if its first argument is True. It is always strict in its first argument. This strictness / laziness is what guarantees the order of evaluation.

So it behaves exactly like C. The difference is that in Haskell, (&&) is an ordinary function. In C, that would be impossible.
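To make the "ordinary function" point concrete, here is essentially the Prelude definition of (&&) (as given in the Haskell Report), renamed to myAnd to avoid clashing with the real one. Pattern matching on the first argument is what forces it; the second argument is only used in the True case:

```haskell
-- Essentially the Haskell Report definition of (&&), renamed:
myAnd :: Bool -> Bool -> Bool
myAnd True  b = b        -- pattern match forces the first argument
myAnd False _ = False    -- the second argument is never touched here

main :: IO ()
main = do
  print (myAnd False undefined)  -- False: undefined is never forced
  print (myAnd True  True)       -- True
```

No language magic is needed: laziness alone gives this user-defined function the same short-circuit behavior that C has to build into the language for &&.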
"But Haskell is lazy, and, generally, there is no guarantee which operation will execute first, unless we use seq, $!, or something else to make our code strict."

This is not correct. The truth is deeper.
Crash course in strictness:
We know (&&) is strict in its first argument because:

⊥ && x = ⊥

Here, ⊥ is something like undefined or an infinite loop (⊥ is pronounced "bottom"). We also know that (False &&) is non-strict in its second argument:

False && ⊥ = False

It can't possibly evaluate its second argument, because its second argument is ⊥, which can't be evaluated. However, the function (True &&) is strict in its second argument, because:

True && ⊥ = ⊥

So we say that (&&) is always strict in its first argument, and strict in its second argument only when the first argument is True.
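These three ⊥ equations can be checked empirically, using undefined to stand in for ⊥ and catching the exception it hides when (&&) actually demands it:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- False && ⊥ = False: the second argument is never forced.
lazyCase :: Bool
lazyCase = False && undefined

main :: IO ()
main = do
  print lazyCase
  -- True && ⊥ = ⊥: (True &&) is strict in its second argument,
  -- so forcing the result raises the exception inside undefined.
  r <- try (evaluate (True && undefined)) :: IO (Either SomeException Bool)
  putStrLn (either (const "bottom") show r)
```

Running this prints False and then "bottom", matching the equations above.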
Order of evaluation:
For (&&), its strictness properties are enough to guarantee the order of execution. That is not always the case. For example, (+) :: Int -> Int -> Int is always strict in both arguments, so either argument can be evaluated first. However, you can only tell the difference by catching exceptions in the IO monad, or by using an unsafe function.
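A small sketch of that last point: with a different bottom in each argument of (+), the only way to see which one the implementation forces first is to catch the resulting exception in IO. Which message you get is an implementation choice, not something pure code can observe:

```haskell
import Control.Exception (ErrorCall (..), evaluate, try)

main :: IO ()
main = do
  -- (+) is strict in both arguments; either error may surface first.
  r <- try (evaluate ((error "left" :: Int) + error "right"))
         :: IO (Either ErrorCall Int)
  case r of
    Left (ErrorCall msg) -> putStrLn ("forced " ++ msg ++ " first")
    Right n              -> print n
```

Whether this prints "forced left first" or "forced right first" is up to the compiler; both are correct, because the result of the pure expression is ⊥ either way.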
Other tips
As noted by others, (&&) is naturally strict in one of its arguments. By the standard definition it is strict in its first argument. You can use flip to flip the semantics.
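For instance, flip (&&) swaps the arguments, so the resulting function is strict in its second argument and can ignore a bottom in its first:

```haskell
-- flip f x y = f y x, so flip (&&) undefined False = False && undefined:
flipped :: Bool
flipped = flip (&&) undefined False  -- False: undefined is never forced

main :: IO ()
main = print flipped
```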
As an additional note: the arguments to (&&) cannot have side effects, so there are only two reasons why you would want to care whether x && y is strict in y:
- Performance: if y takes a long time to compute.
- Semantics: if you expect that y can be bottom.
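The performance case can be illustrated with a made-up expensive right-hand side (the `expensive` check below is an illustration, not from the original post): when the left side is False, the right side is simply never computed.

```haskell
-- A deliberately costly (but pure) check; hypothetical example.
expensive :: Bool
expensive = sum [1 .. 100000000 :: Int] > 0

-- Returns False immediately; `expensive` is skipped entirely.
cheap :: Bool
cheap = False && expensive

main :: IO ()
main = print cheap
```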
"Haskell is lazy, and, generally, there is no guarantee what operation will execute first"
Not quite. Haskell is pure (except for unsafePerformIO and the implementation of IO), and there is no way to observe which operation executes first (except with unsafePerformIO and the implementation of IO). The order of execution simply does not matter for the result.
(&&) has a 9-value truth table, including the cases where one or both arguments are undefined, and it defines the operation exactly:

a          b          a && b
True       True       True
True       False      False
True       undefined  undefined
False      True       False
False      False      False
False      undefined  False
undefined  True       undefined
undefined  False      undefined
undefined  undefined  undefined
As long as the implementation follows that table, it's allowed to execute things in any order it wants.
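The defined rows of that table can be spot-checked directly. The five rows where both arguments are defined behave as in any language; the interesting extra row is False && undefined, the one ⊥-row with a defined result:

```haskell
-- The rows of the truth table whose result is defined:
definedRows :: [Bool]
definedRows =
  [ True  && True       -- True
  , True  && False      -- False
  , False && True       -- False
  , False && False      -- False
  , False && undefined  -- False: the only ⊥-row with a defined result
  ]

main :: IO ()
main = mapM_ print definedRows
```

The remaining rows, where the result itself is undefined, can be probed with try and evaluate as shown earlier.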
(If you study the table, you'll notice that there's no way for a sequential implementation to follow it unless it executes a first, then b iff a is True. But Haskell implementations are not required to be sequential! An implementation is always allowed to kick off execution of b whenever it wants; your only guarantee is that, according to the table, the result of executing b can only impact your program when a is True.)
(Note that laziness is the only way to write a function with a truth table like the above at all; in a language like C or ML, all five of the rows with undefined in either argument would be forced to have undefined as the result, whereas in Haskell (and in C, because && is built into the C language) one of those rows can have False as the result instead.)
I believe it works the way you expect: the RHS is evaluated iff the LHS evaluates to True. However, assuming the RHS has no side effects, how would you know (or care)?

Edit: I guess the RHS could be undefined, and then you would care...