To answer your questions:
- Yes, I use `operation.db.Put` in my own MapReduce pipeline, and `ndb` models are fine.
- No, caching does not seem to interfere with `db` operations.
- No, it is the same for `db` and `ndb`.
- It could be due to eventual consistency. Since you are iterating over your entities using MapReduce, you are probably not using ancestor queries. Therefore, you cannot be sure to see your entities deleted at once. There could be other factors; see below.
- MapReduce is excellent for batch processing, so you are on the right path.
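For reference, the delete-mapper pattern in the App Engine MapReduce library is a function that yields operations (there you would `yield mapreduce.operation.db.Delete(entity)`). The sketch below is runnable outside GAE only because the tiny `Delete` class and `run_mapper` driver are stand-ins for the framework; treat them as illustrative, not as the real API.

```python
class Delete(object):
    """Stand-in for mapreduce.operation.db.Delete (illustrative only)."""
    def __init__(self, entity):
        self.entity = entity


def delete_entity(entity):
    """A mapper: yields one Delete operation per entity it visits.

    The real framework collects yielded operations into a mutation pool
    and flushes them in batches -- one reason deletes can become visible
    some time after the mapper has already run over the entity.
    """
    yield Delete(entity)


def run_mapper(mapper, entities):
    """Minimal driver standing in for the framework's batching loop."""
    pool = []
    for entity in entities:
        for operation in mapper(entity):
            pool.append(operation)
    return pool  # the real pool would issue batched datastore calls


ops = run_mapper(delete_entity, ["key1", "key2"])
print([op.entity for op in ops])  # -> ['key1', 'key2']
```

The batching is also why per-entity deletes inside a mapper are worth avoiding: yielding operations lets the framework amortize datastore round trips.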
The problem you are having with entities that are seemingly not being deleted could be due to a number of reasons. Here are a few:
- Eventual consistency, as mentioned above: entities may only *appear* undeleted, when in fact the deletes simply have not become visible yet.
- MapReduce is not touching all entities. This could be due to a bad filter or a wrong namespace at the start of the MapReduce pipeline.
- Errors in the pipeline. This should show up in your logs.
- Weird caching issues. Confirming or ruling this out would require rigorous testing.
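On the bad-filter/namespace point: the first place to check is the pipeline configuration. A rough `mapreduce.yaml` fragment is sketched below, assuming the `DatastoreInputReader`; the names `main.delete_entity` and `main.MyModel` are placeholders, and the `namespace` parameter is shown only to illustrate where a stale value would silently restrict which entities the job sees.

```yaml
mapreduce:
- name: Delete MyModel entities
  mapper:
    input_reader: mapreduce.input_readers.DatastoreInputReader
    handler: main.delete_entity    # placeholder: your mapper function
    params:
    - name: entity_kind
      default: main.MyModel        # wrong kind here -> mapper touches nothing
    - name: namespace
      default: ""                  # a stale namespace has the same effect
```

If `entity_kind` or the namespace does not match where the entities actually live, the job completes "successfully" without ever visiting them, which looks exactly like entities refusing to be deleted.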