As I am significantly more familiar with Riak than Cassandra, I will limit my discussion to how eventual consistency applies to Riak.
During normal operation, Riak supports tunable consistency, which allows you to tailor consistency behaviour to your application's requirements. The default settings are, however, very sensible and work for most scenarios: they require a majority of replicas to respond to a read or write before it is considered successful.
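To make the majority requirement concrete, here is a minimal sketch of the quorum arithmetic. The function names are illustrative only, not part of any Riak client API; the point is that with the default settings, a read quorum and a write quorum always overlap on at least one replica.

```python
# Toy sketch of Riak-style quorum arithmetic (illustrative names, not a real API).
# With n_val replicas, the default read (r) and write (w) quorums are a majority,
# so any read quorum overlaps any write quorum on at least one replica.

def majority(n_val: int) -> int:
    """Majority quorum for n_val replicas, e.g. 2 out of 3."""
    return n_val // 2 + 1

def read_sees_write(n_val: int, r: int, w: int) -> bool:
    """r + w > n_val guarantees at least one replica in the read quorum
    holds the latest successful write."""
    return r + w > n_val

n_val = 3                     # Riak's default replica count
r = w = majority(n_val)       # the default 'quorum' setting
print(majority(3))            # 2
print(read_sees_write(n_val, r, w))   # True: 2 + 2 > 3
```

Note that weaker settings such as r = w = 1 trade this overlap guarantee away for lower latency, which is exactly the tuning knob the defaults sidestep.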
Although all replicas may not be in exactly the same state at every point in time, these consistency settings ensure that you read what you write. Inconsistencies are traditionally corrected on reads through a process called read-repair, but can also be corrected periodically if active anti-entropy (a new feature in Riak 1.3) is enabled.
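The read-repair idea can be sketched as follows. This is a simplification I am making for illustration: real Riak compares vector clocks to decide which copy is newest, whereas this toy version uses plain integer versions.

```python
# Hedged sketch of read-repair (not Riak's actual implementation): on a read,
# pick the replica copy with the newest version, return it to the client, and
# push it back to any stale replicas. Integer versions stand in for the
# vector clocks Riak really uses.

def read_repair(replicas):
    """replicas: dict mapping node name -> (version, value).
    Returns the freshest value and repairs stale nodes in place."""
    latest_node = max(replicas, key=lambda n: replicas[n][0])
    latest = replicas[latest_node]
    for node, stored in replicas.items():
        if stored[0] < latest[0]:
            replicas[node] = latest   # write the fresh copy back to the stale node
    return latest[1]

replicas = {"a": (2, "new"), "b": (1, "old"), "c": (2, "new")}
print(read_repair(replicas))   # 'new'
print(replicas["b"])           # (2, 'new') -- the stale replica was repaired
```

Active anti-entropy does conceptually similar reconciliation, but in the background over the whole keyspace rather than only for keys that happen to be read.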
Eventual consistency otherwise comes into play primarily during various failure scenarios. If, for example, a node gets separated from the rest of the cluster, it will (with default settings) continue to accept reads and writes, which it will serve to the best of its ability depending on which data/replicas it holds. As it cannot communicate with the rest of the cluster during this time, inconsistencies may arise. These will, however, be resolved once the cluster returns to normal operation. Exactly how long this takes depends on a number of external factors and can range from fractions of a second for temporary network failures to minutes or hours if manual intervention is required to correct the issue.
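To illustrate what "resolved once the cluster returns to normal operation" can mean, here is a toy reconciliation sketch. Everything in it is an assumption for illustration: real Riak detects concurrency with vector clocks and can either keep both values as siblings for the application to resolve or apply a last-write-wins policy; the timestamps and the merge rule below are stand-ins for that machinery.

```python
# Illustrative partition-heal sketch (not Riak's actual implementation).
# During a partition, both sides may accept writes for the same key. On heal,
# the concurrent values become "siblings"; a toy last-write-wins rule based on
# timestamps then picks a winner.

def heal(side_a, side_b):
    """Each side is a (timestamp, value) pair written during the partition.
    Returns (siblings, winner): both values survive the merge as siblings,
    and last-write-wins picks the one with the newer timestamp."""
    siblings = sorted([side_a, side_b])   # both concurrent writes are kept
    winner = max(siblings)[1]             # toy LWW resolution by timestamp
    return siblings, winner

siblings, winner = heal((10, "from-majority-side"), (12, "from-isolated-node"))
print(winner)          # 'from-isolated-node' -- the later write wins
print(len(siblings))   # 2 -- nothing is silently lost before resolution
```

The minutes-to-hours end of the range in the paragraph above corresponds to cases where no automatic rule applies and an operator or the application must pick among siblings by hand.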