I think I would first take a close look at your database commits. Client vars are retrieved at the "beginning" of a request and then updated at the "end" of the request with any changes. Consider the case where a cflocation happens at the end of a request: is it possible that the "next" request draws pages from the DB before the update is committed? In a complex system with replication and the like, situations like this can happen, and six clustered web servers is a significant number.
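If you want to test that race directly, something like the following sketch can surface it. This is illustrative only: it assumes the standard CDATA table that CF creates for database-backed client variables, and a datasource name (`clientvars`) and page name that you would swap for your own.

```cfml
<!--- Request A (lands on server 1): change a client var, then
      redirect immediately. The client-var flush to the DB and the
      browser's follow-up request are now racing each other. --->
<cfset Client.lastStep = "checkout">
<cflocation url="nextPage.cfm" addtoken="yes">

<!--- nextPage.cfm, request B (may land on server 2): compare what the
      Client scope reports with what the database has actually
      committed at this instant. A mismatch points at commit or
      replication lag rather than at CF itself. --->
<cfoutput>Client scope value: #Client.lastStep#</cfoutput>
<cfquery name="qRaw" datasource="clientvars">
    SELECT data
    FROM   CDATA
    WHERE  cfid = <cfqueryparam value="#Client.cfid#"
                                cfsqltype="cf_sql_varchar">
</cfquery>
<cfoutput>Committed DB value: #qRaw.data#</cfoutput>
```

If the two values disagree right after a redirect but agree a second or two later, the DB (or its replication) is your culprit.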
As for caching in memory: CF caches the client var in memory and only "reads" from the store (the database) if the client var has changed (if an update has taken place). So you could be onto something there as well. Still, theoretically the client var cached in memory should be identical to the vars in the data store, since changes are written to the datastore in real time (as I understand it), i.e. at the end of the request all changes to the client vars are flushed to the DB. So theoretically, even with round robin, if your browser showed up on a new server that did NOT have your client vars in memory, it would just fetch them from the DB. That's why I think the DB might be the key here. NOTE: the update behavior might change based on whether global client variable updates are enabled or disabled. Take a look at each server to see if there are any differences in how this setting is used.
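While you're comparing servers, it's worth diffing the client-storage configuration itself across all six boxes. A minimal Application.cfm sketch (the app and datasource names here are illustrative, not yours):

```cfml
<!--- Application.cfm: this should be identical on every clustered
      server. clientstorage must name the same shared datasource
      everywhere; if even one server falls back to "Registry" or
      "Cookie" storage, client vars will seem to vanish whenever the
      load balancer lands you on that box. --->
<cfapplication
    name="myApp"
    clientmanagement="yes"
    clientstorage="clientvars"
    setclientcookies="yes">
```

Also compare the per-datasource "Disable global client variable updates" setting in each server's CF Administrator, since that changes when writes actually hit the DB.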
As for sticky sessions: if you are using a hardware-based load balancer, make sure to explore its balancing options. What you want is sticky sessions combined with the LB dividing load among the servers. You want it to be smart enough to consider actual load (usually CPU usage), not just the gross number of requests divided among the servers. Good luck. I love problems like this :)