I'm using MongoDB for OLTP operations; currently I run 100+ operations/sec, and MongoDB can handle much more. In the ideal case you can expect tens of thousands of operations per second, but that number is hard to achieve in practice.
Response time depends heavily on your replication/write-concern settings, because MongoDB lets you trade durability for latency (see the CAP theorem). I'm not sure what you mean by efficiency; I can say that insert operations are efficient enough (avoid updates for OLTP).
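To illustrate that latency/durability trade-off, here is a rough sketch of the write-concern option documents a driver attaches to a write. The field names (`w`, `j`) are real MongoDB options, but the dataset-free ranking function is purely illustrative:

```python
# Write-concern documents and the latency/durability trade-off each implies.
# These are the raw option documents a driver would attach to an insert.

# w=0: fire-and-forget, lowest latency, no acknowledgment at all
fastest = {"w": 0}

# w=1 (the usual default): acknowledged by the primary only
default = {"w": 1}

# w="majority", j=True: acknowledged by a majority of replica-set
# members and journaled to disk; slowest but most durable
safest = {"w": "majority", "j": True}

def expected_latency_rank(wc):
    """Rough ordering of round-trip cost: 0 = cheapest (illustrative only)."""
    if wc.get("w") == 0:
        return 0
    if wc.get("w") == "majority" or wc.get("j"):
        return 2
    return 1

ranks = [expected_latency_rank(wc) for wc in (fastest, default, safest)]
print(ranks)  # cheapest to most expensive
```

The point is that "response time" is not one number: the same insert can be sub-millisecond with `w=0` or take a full replica-set round trip with `w="majority"`.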
I have no experience with MongoDB's security options, because all my web applications have full access to the DB and I closed the REST API to public access.
Don't use MongoDB's MapReduce for large datasets, you'll have to trust me on this :). It is painful! I found the Aggregation Framework suitable for a wide variety of operations on large datasets (GBs of data). If that doesn't fit your case, try Hadoop's MapReduce implementation; I don't have experience with it, but I've always wanted to try it.
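For reference, a typical aggregation pipeline looks like the sketch below (plain Python; the orders dataset and field names are made up). The pure-Python loop just mirrors what the `$match`/`$group`/`$sort` stages compute, so you can see the semantics without a running server:

```python
from collections import defaultdict

# Made-up sample data; in reality this would live in a collection.
orders = [
    {"status": "paid", "customer": "a", "total": 10},
    {"status": "paid", "customer": "b", "total": 5},
    {"status": "open", "customer": "a", "total": 7},
    {"status": "paid", "customer": "a", "total": 3},
]

# The pipeline you would pass to collection.aggregate(...):
pipeline = [
    {"$match": {"status": "paid"}},               # filter early, so later stages see less data
    {"$group": {"_id": "$customer",               # group key
                "revenue": {"$sum": "$total"}}},  # accumulator
    {"$sort": {"revenue": -1}},
]

# Pure-Python equivalent of the three stages above:
matched = [o for o in orders if o["status"] == "paid"]
revenue = defaultdict(int)
for o in matched:
    revenue[o["customer"]] += o["total"]
result = sorted(
    ({"_id": k, "revenue": v} for k, v in revenue.items()),
    key=lambda d: -d["revenue"],
)
print(result)  # [{'_id': 'a', 'revenue': 13}, {'_id': 'b', 'revenue': 5}]
```

Unlike MapReduce, the server can use indexes for the `$match` stage and runs the whole pipeline natively, which is a large part of why it performs so much better on big collections.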
As an option, you might consider Hadoop's HDFS as the main storage with something like MessagePack as the binary format. I've heard of such a setup, but haven't used it myself.