It is not hard to reason about big-O complexity -- it is just different in a lazy language. A classic reference on this is Okasaki's thesis (and later book), which describes the banker's method (see chapter 3) for accounting for the cost of suspended computations.
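As a concrete illustration of the kind of structure the banker's method analyzes, here is a sketch of the classic two-list queue (the names `Queue`, `snoc`, and `uncons` are my own, not from Okasaki's text): each `snoc` is charged a credit that pays for the eventual `reverse` of the rear list, giving amortized O(1) operations.

```haskell
-- Two-list queue: front list holds elements in order, rear list in
-- reverse order. Amortized O(1) per operation under the banker's
-- method: credits saved on each snoc pay for the occasional reverse.
data Queue a = Queue [a] [a]

empty :: Queue a
empty = Queue [] []

-- Invariant: the front list is empty only if the rear list is too.
check :: Queue a -> Queue a
check (Queue [] r) = Queue (reverse r) []
check q            = q

snoc :: Queue a -> a -> Queue a
snoc (Queue f r) x = check (Queue f (x : r))

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue []      _) = Nothing
uncons (Queue (x : f) r) = Just (x, check (Queue f r))
```

In a lazy language the `reverse` is a suspension; the banker's method accounts for when that suspension is forced and who paid for it.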
In general, one wants a mix of lazy and strict behavior. Some data structures we may want strict in their "spine" but lazy in their elements; other data structures we may want strict in their elements but lazy in their spines (potentially infinite structures of strict vectors, for example). Haskell offers laziness with selective strictness, including directly in data constructor definitions. Other languages offer strictness with selective laziness. Many find that the lazy-by-default behavior of Haskell has certain advantages, especially with regard to defining new control structures. See, for example, here (which is in fact written in response to Harper).
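To make the spine/element distinction concrete, here is a minimal sketch using strictness annotations (`!`) in data constructor definitions; the type names `SpineStrict` and `ElemStrict` are hypothetical, chosen just to label the two mixes described above:

```haskell
-- Strict spine, lazy elements: the whole list structure is forced,
-- but each element stays an unevaluated thunk until demanded.
data SpineStrict a = SNil | SCons a !(SpineStrict a)

-- Lazy spine, strict elements: the structure can be infinite,
-- but every element stored in it is fully evaluated on insertion.
data ElemStrict a = ENil | ECons !a (ElemStrict a)

-- The lazy spine permits a potentially infinite stream of strict values:
ones :: ElemStrict Int
ones = ECons 1 ones

-- Take a finite prefix of such a stream.
takeE :: Int -> ElemStrict a -> [a]
takeE 0 _            = []
takeE _ ENil         = []
takeE n (ECons x xs) = x : takeE (n - 1) xs
```

Note that `ones` would loop forever under `SpineStrict`, since constructing `SCons` would force the entire (infinite) tail; the laziness of the `ElemStrict` spine is what makes the infinite definition productive.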